Investing in Containers with Anand Khatri (Part 1 of 2) - 7investing

Investing in Containers with Anand Khatri (Part 1 of 2)

Containers are one of the tech world's most intriguing recent innovations. In Part 1 of our exclusive interview, DevOps expert Anand Khatri explains what containers are and why they matter.

March 9, 2021 – By Simon Erickson

The software world is undergoing a fundamental change.

There is a trend brewing of using “containers” for new software development. Technically, containers offer a way to build and isolate the code and dependencies of individual software components, in a way that doesn’t interfere with the rest of a larger application. Bigger-picture, this allows apps to be built more quickly, more reliably, and more scalably.

Innovative companies like Netflix (Nasdaq: NFLX) were early-adopters of containers. They used them to build out a robust ecosystem that could continually be updated and improved. And while cutting-edge software developers have embraced containers, overall mass-market adoption still remains very low.

That could be a huge opportunity for investors.

In this two-part 7investing exclusive interview series, we chat with DevOps expert Anand Khatri about containers. Here in Part 1, Anand explains what containers are, how they’re different from traditional approaches, and what pain-points they are solving. Simon and Anand also describe microservices, orchestration systems, and monitoring agents — and what roles these play in the biggest picture.

This podcast lays the groundwork for our upcoming Part 2 (which will publish on Thursday), in which Anand describes several publicly-traded companies that could be excellent investments in this fast-growing trend.

Publicly-traded companies mentioned in this interview include Alphabet, Amazon, Datadog, Microsoft, Netflix, and VMware. 7investing’s advisors or its guests may have positions in the companies mentioned.


Timestamps

00:00 – Introduction to containers

09:29 – The main problems containers are solving

13:24 – Kubernetes as an orchestration platform

21:51 – Challenges facing containers

27:43 – Mass-market adoption of containers

 

Complete Transcript

Simon Erickson  00:00

Hello everyone and welcome to this edition of our 7investing podcast. I’m 7investing founder and CEO, Simon Erickson.

Simon Erickson  00:07

Containers are taking over the software world right now. They’re providing a new way for developers to more quickly build, test, and deploy new cloud based software applications.

Simon Erickson  00:18

But what are containers? Why are they becoming so popular? And what could they mean for us as investors?

Simon Erickson  00:24

Well, I’m very excited to answer some of those questions today with my guest Anand Khatri, who’s a tech lead in a DevOps company and also an expert in containers. So I’m looking forward to exploring this topic a little bit more with him. Anand, thanks for joining the 7investing podcast today!

Anand Khatri  00:39

Hi Simon. I’m glad to be here and excited to talk more about containers.

Simon Erickson  00:46

Perfect. Well, you came very well prepared, Anand, and you actually have a presentation that I’d like to pull up for anyone who’s watching the video of this. Let’s jump into that, and we’ll have kind of a back and forth conversation. But the first question I have, which is perfectly fit for your first slide, is: what are containers?

Anand Khatri  01:03

Yeah, sure. Let me share my presentation. Are you able to see my presentation? [Simon: Absolutely.] Yes. So what are containers? A container is a standard unit of software that packages up the code and all its dependencies, so the application can run quickly and reliably from one computing environment to another. That’s what containers are for, and that’s where containers are helpful.
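As a rough sketch of what “packaging up the code and all its dependencies” looks like in practice, here is a minimal Dockerfile for a hypothetical Python service (the file names and base image are illustrative assumptions, not from the interview):

```dockerfile
# Pin the runtime the service depends on
FROM python:3.9-slim

WORKDIR /app

# Install the service's library dependencies inside the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The container now carries code plus dependencies as one shippable unit
CMD ["python", "app.py"]
```

Building this image (for example with `docker build -t my-service .`) produces a self-contained unit that runs the same way on a laptop, a data-center VM, or a cloud host.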

Anand Khatri  01:38

In containers, there are two major container engines that support containerization. One of them is Docker, and the second one is rkt (Rocket), which came out of CoreOS. And in this slide you can see how the container is its own unit. The whole unit is packed inside: that one container has everything it needs to run the application.

Anand Khatri  02:09

And containers took virtualization to the next level. Initially, the virtual machine was the latest thing in virtualization. But the virtual machine virtualizes at the physical hardware layer, whereas the container virtualizes at the software layer. That’s what powers it up and helps in general.

Anand Khatri  02:40

Now, to explain in layman’s terms: in general, a container is used for separating an application into its own space and shipping it in a faster, more reliable, and more scalable manner. That is the exact fundamental that containerization brings to the software world. It’s easy to separate, easy to push faster, and you move faster to the production environment, deploy and test faster, and release cycles become more individual and more frequent.

Anand Khatri  03:31

So do Docker containers run on any operating system? Today’s answer is yes. Docker containers run on all operating systems: Linux, Windows, in your data centers, on your cloud. Even in a serverless architecture, containers can run easily. Containers and virtual machines have similar resource isolation and allocation, but the functionality is different, because containers virtualize at the operating system level rather than the hardware level. That makes containers more portable and efficient for the applications that run on them. Looking at the diagram: say we have one physical machine, and we have to run multiple applications on it. With containers, you can run multiple applications or microservices in one VM. What about with only virtual machines? Well, for each application you have to stand up a separate VM with its own settings, code, and libraries, whereas containers can run together.

Anand Khatri  05:01

We will discuss the benefits of containers more on the next slide, and also how containers have now become an industry standard. One of those standards is the Open Container Initiative, known as OCI, which helps build standardization into the container space. Docker also contributed containerd to a common project, the Cloud Native Computing Foundation (CNCF), in 2017. And containerd is the industry standard container runtime; that’s what runs containerized applications, and it helps push applications faster to production.

Simon Erickson  05:57

Yeah, this makes a lot of sense Anand. So we’ve always heard that, if you’re a software developer, you’ve already got your environment you want to work in, right? You want to work in either Windows or you want to work in Linux, maybe if you’re more open source. Or you want to work with Mac because you’re familiar with it. But it was hard, right? It was hard to get all of that on the same page.

Simon Erickson  06:16

Containers, it sounds like, are offering a lot more freedom to develop things in the way that you want to. Without the interdependencies. Without screwing everything else up. Is that a fair assessment of where we’re going with this?

Anand Khatri  06:27

Yes, exactly. It’s really easy and flexible for the developers. Essentially, developers don’t need to worry about whether their application is running on Linux, Windows, or Mac. As long as the application is containerized and the operating system supports containerization, they are good to go.

Simon Erickson  06:46

And microservices. You mentioned that too. I immediately think of Netflix when we hear microservices. And we’ll talk about the companies later on; I don’t want to jump the gun too much on this. But what is the impact?

Simon Erickson  06:57

Maybe this even brings me into the next question, of how is this changing the way that software is developed? Or what are the problems that this is solving, now that you can have more individual efforts?

Anand Khatri 07:09

So microservices are helping in many ways. And yes, you are right: Netflix was an early innovator and adopter of microservices. But then in the software world, it became common ground to use microservices. What happens in microservices is that your application is divided into small, logical components. For example, your website, web application, or mobile application has a signup process; then there is authentication, and that will be a completely separate microservice. That authentication microservice takes care only of authentication and authorization. Any other functionality in your web application or portal will be completely separate. If you have session timeout or logout, that will be a completely separate microservice too. The benefit of microservices is this: let’s say I have multiple teams working on the same application. If the authentication microservice makes some changes, it will not impact any other microservice in the application. If something goes wrong, only that microservice is impacted, not the entire application. That is the benefit, plus everything becomes more individualized and more independent. Since each is a more granular, independent unit, you can work individually and push faster, more frequent updates to the application. And that helps you build new features and functionality on your mobile or web applications faster.

Simon Erickson  09:07

Perfect. So each one of the microservices can kind of be built independently rather than there being a weakest link that’s holding down the entire application at the end of the day.

Anand Khatri  09:15

A legacy application is like a whole mumbo-jumbo burrito, where everything sits in one package. That’s not the case anymore in a microservices world.

Simon Erickson  09:29

Gotcha. Gotcha. Okay. Now the next topic that you wanted to talk about was, what are the problems that containers are truly solving out there?

Anand Khatri  09:36

Yes, correct. So what are the problems that containers really solve, and why is industry adopting them faster than normal? One of the most important problems that Docker is solving is compatibility of each service’s libraries and dependencies. For example, say one of your microservices runs on Java 10 and another runs on Java 11. How would you run those applications on the same VM? There’s no way; those two microservices would need separate VMs. But if your application is Dockerized, you can take the Java 10 Docker image, build the application on top of it, and run the container. And you can take the Java 11 image, build your code base on top of it, and run that container in Docker as well.

Anand Khatri  10:37

And both containers can run on the same virtual machine. You do not need additional virtual machines. That helps minimize the cost of virtual machines and compute, especially in the cloud world. It minimizes the cost of compute and also gives you the flexibility to run the applications.
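The Java 10 / Java 11 scenario Anand describes can be sketched as a Compose file, with each service built on its own pinned runtime image (the service and image names are hypothetical):

```yaml
# Two microservices with conflicting Java requirements
# sharing one VM, each carrying its own runtime in its image.
services:
  orders-service:
    image: my-registry/orders-service:1.0    # built FROM openjdk:10
  billing-service:
    image: my-registry/billing-service:1.0   # built FROM openjdk:11
```

Because each image bundles its own JDK, neither service cares which Java version is (or isn't) installed on the host VM.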

Anand Khatri  10:58

Another important problem that Docker and containers are solving is that each service manages its required OS dependencies itself, bundled and isolated in its own container. So if one container is vulnerable, for example, it will not impact the other containers on the same VM; that problem exists only in that container. That helps with isolation, and with separation as well. At the same time, another important problem it’s solving is compatibility of each service and library independently of the OS. If one service or library requires a certain version of the OS, its container can have that version, and another container can have a different version. They are all independent; there is no interdependency.

Anand Khatri  12:05

When there is a dependency, say you upgrade the operating system version for one app and it impacts another app: you solve the problem for one application but create a problem for another. That does not help the developer move faster or the organization deploy features and functionality faster. But because of containers, you can do upgrades independently and also change a component without affecting other services. If you are releasing one microservice, one service, or one library, it will not impact the other microservices or libraries in your application.

Anand Khatri  12:51

That is one big advantage. You can also change the underlying OS without affecting any of the services. Since the OS layer is also separate per container, it will not impact or affect any of the other microservices in your application.

Simon Erickson  13:10

Yeah, it sounds like it’s solving a lot of longstanding legacy compatibility issues. I got a sneak peek as you showed the graphic on the next slide here. I was wondering if you could pull that up to kind of connect everything together.

Simon Erickson  13:24

I know that Kubernetes is another term that’s being used a lot right now. It’s the orchestration system itself for containers. Can you tell us a little bit about how all this works together?

Anand Khatri  13:32

Yeah, sure. Before I go ahead and explain the diagram, what I would like to say is: if your application runs in a container, keeping that container running is not easy. Containers can die. Containers can terminate. It’s really a headache and a tedious task for engineers and developers to keep containers up and running all the time without affecting the applications.

Anand Khatri  14:02

And that’s where container orchestration tools are very important. One of the most adopted container orchestration tools is Kubernetes. Kubernetes is originally an open source project, and Google supports it. Google had been running its own internal cluster system, called Borg, since around 2008, and Kubernetes was built on top of that experience. One of the big advantages of Kubernetes is its self-healing capability. If your application runs in containers and one of the containers dies, Kubernetes automatically spins up a new container by itself. You don’t even have to log in or do anything; the engineer doesn’t even need to know. It spins up automatically, and it’s easy to scale up and scale down. That also helps.

Anand Khatri  15:21

Let’s say on certain days, or at certain times of the year or month, you have more traffic on your website. You don’t have to worry about putting more hardware into it or how to scale it up. It scales up easily. In your deployment configuration, you just specify your min replicas and max replicas: what is the minimum number of replicas and what is the maximum? Then, in the event of any traffic changes, it adjusts by itself. And that is a very powerful concept.
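The min/max replica configuration Anand mentions can be sketched as a Kubernetes HorizontalPodAutoscaler (the deployment name and thresholds are illustrative assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-portal
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-portal        # the deployment being scaled
  minReplicas: 2            # floor during quiet periods
  maxReplicas: 10           # ceiling during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Kubernetes then adds or removes pods on its own as traffic rises and falls, which is the cost-adjustment behavior described next.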

Anand Khatri  15:57

And that also helps with reducing or adjusting cost at the same time. When there is more traffic on a high-demand website, you want to spend more, because you want to sustain the traffic. You want those customers on your portal; you don’t want to let them go away. So you are okay spending a little more money, and you can adjust that cost easily. As you scale up, it costs a little more, but you get more traffic, right? And as soon as the traffic goes down, it automatically scales down to whatever the min replica count is in your deployment file. All of this is configurable and easy to deploy. That is what makes Kubernetes powerful.

Anand Khatri  16:44

Now let me explain this diagram so we know what components are involved in Kubernetes. There are two major pieces: the Kubernetes master and the Kubernetes worker nodes. And whenever you see a managed Kubernetes service: in the world of Google Cloud it’s GKE, in the world of AWS it’s EKS, and in the world of Azure it’s AKS. Those are all managed Kubernetes services. The master nodes are under the provider’s control, and the worker nodes are under the user’s or customer’s control. In this overview of a Kubernetes cluster, you see there is a deployment; that’s where you define your deployment configuration. Based on that deployment configuration, it deploys your applications and containers, and these are the pods. A pod holds containers and sidecar containers with an attached volume; the volume is basically for storage. On top of that, this becomes one building block for one of my microservices or libraries, and it has an internal IP address; and over here is service number two. Kubernetes also provides internal communication from service one to service two. For service one to communicate with service two, they don’t need to go out to the internet; they can communicate directly internally. That makes the communication a bit more secure, especially if you create a private Kubernetes cluster.
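A minimal sketch of the building block Anand is describing: a Deployment plus an internal (ClusterIP) Service, so other microservices can reach it without leaving the cluster. All names and the image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth
          image: my-registry/auth-service:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: ClusterIP          # internal-only address inside the cluster
  selector:
    app: auth-service
  ports:
    - port: 80             # where other services connect
      targetPort: 8080     # where the container listens
```

Other services in the cluster can now call `http://auth-service` directly, with no traffic leaving the cluster network.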

Anand Khatri  18:31

If you keep this communication private, it’s more secure than going out over the internet to connect to the service. And let’s say you want to expose a service to the internet: you can put an external URL on it, which is called an Ingress in the current world. That Ingress is accessible to the outside world, or integrated into applications that use it for their compute or microservice purposes. Now, in terms of the security layer, there is a lot of security placed around all of this; services should not be directly accessible. You have to go through API gateways, and those should be secure API gateways. Also, the communication should happen over TLS (Transport Layer Security), so it is secure communication as well.
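An Ingress like the one Anand describes, exposing a single service to the outside world over TLS, might look roughly like this (the host, secret, and service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-portal
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: web-portal-tls    # TLS certificate stored as a Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-portal    # only this service is exposed
                port:
                  number: 80
```

Everything not routed through the Ingress stays internal to the cluster, which is the security posture described above.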

Anand Khatri  18:41

You can also put a lot of identity and access management here, and watch the traffic at the Ingress. You can diagnose that traffic, and if you recognize any unknown activity on your applications, you can catch it right there. As the first line of defense, a firewall also sits on top of it. And that makes your applications secure, running in the Kubernetes cluster as containers.

Anand Khatri  20:09

And some containers require a sidecar container. For example, say my microservice needs access to secrets management; how can it do that? You can put an agent in a sidecar. Or say my microservice needs to be monitored and I need to install the Datadog agent. You can put that agent into a sidecar container alongside the app container, and that agent will continuously monitor and send data back to your Datadog dashboard. Then you can monitor those microservices very closely and effectively.
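The monitoring-agent pattern Anand describes can be sketched as a pod with a sidecar container. The agent image, secret, and environment variable here are hypothetical stand-ins, not Datadog’s actual configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-portal
spec:
  containers:
    - name: app
      image: my-registry/web-portal:1.0
    - name: monitoring-agent            # sidecar: runs beside the app
      image: my-registry/agent:7
      env:
        - name: AGENT_API_KEY           # credentials pulled from a Secret
          valueFrom:
            secretKeyRef:
              name: agent-credentials
              key: api-key
```

Both containers share the pod's network and lifecycle, so the agent can observe the app closely without being baked into the app's own image.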

Simon Erickson  20:48

Yeah, I love this graphic.

Simon Erickson  20:49

I mean, so starting at the bottom and working our way up on it, we’ve got individual containers. We’ve got kernels or nodes, things that are at the individual level. But they’re not always working well, right? Sometimes the code fails to execute. They have problems within. They terminate over a short period of time.

Simon Erickson  21:07

You don’t want to have that public facing or impacting the entire application. And so like you mentioned, you’ve got agents that are constantly monitoring for anything that seems out of place. If something breaks, you’ve got kind of an orchestration system that says, “Okay, hold on, don’t take everything down for me.”

Simon Erickson  21:22

And then you’ve got ways that you can scale those up or down based on the traffic requirements and spikes that you’re hitting your website with. But then over time, we’re individually optimizing each one of these nodes, right? Each one of these containers, we’re building out an entire software application. So that when it’s ready, we can now have a public facing application. Is that a fair assessment of what’s going on in this graphic?

Anand Khatri  21:43

Yes, that is exactly a fair assessment and a correct understanding about how the container works.

Simon Erickson  21:51

Could you talk a little bit about the challenges of using containers and how those are being addressed as well? This sounds very innovative and a more optimal way to build things at a microservices level. Are there things that still are pain points that are being worked on right now?

Anand Khatri  22:06

Yes. The initial pain point was the lack of container orchestration when running containers on bare-metal servers. Running a container on a bare-metal server is fast, of course, but very difficult to manage. Container orchestration tools make life easier for DevOps engineers, developers, SREs (Site Reliability Engineers), and support engineers.

Simon Erickson  22:44

And by bare metal, you mean your own dedicated servers. Things that are working directly for your own applications? Correct?

Anand Khatri  22:49

Correct. And now a lot of those pain points are solved by container orchestration tools. But that doesn’t mean it’s all glory days now, right? There are still some challenges with containers. For example, one challenging thing is: what if you have to run a database in a container? If someone asks, “Can you run the database in a container?” No. As of now, there is not a good way to run a database in containers, because databases require transactional guarantees, backup and recovery options, and that kind of functionality.

Anand Khatri  23:42

That’s regardless of whether it’s a NoSQL or SQL database; they all need that functionality, and in containers you cannot provide it. That is one of the challenges software engineers have: they cannot run the database in a container. Though if that comes in the future, I will not be surprised, as more and more functionality and features get added to containers.

Anand Khatri  24:13

Another pain point is that once you bring a container orchestration tool into the picture, what if the entire master node, the master architecture of the container orchestration, goes down? All your applications immediately go down with it. So that is also a risk to manage: managing the master of the container orchestration tool. And that is the reason all these managed services are coming along as cloud offerings. It’s not easy to run those Kubernetes clusters yourself; you need that level of expertise and knowledge in house, and not all companies can do that.

Anand Khatri  25:11

And that is the reason Google, Amazon (AWS), and Microsoft (Azure) have that expertise in house. They are managing the master. They make sure the master nodes keep up and running all the time. That is also why they provide SLAs and SLIs, service level agreements and service level indicators. They are providing three nines or four nines SLAs [this refers to 99.9% or 99.99% uptime]. And that becomes more believable. That’s why smaller and medium-sized companies are starting to adopt it: because they don’t have to manage the master.

Anand Khatri  25:51

So most of the headache goes away. What they can do is containerize the application and just push it to production into the Kubernetes cluster. Another thing is that certain types of metrics are also very important to collect. Companies, especially those under very strict regulations, need an additional level of metrics, and getting those metrics can sometimes be challenging.

Anand Khatri  26:30

Those are the challenges that still exist. But day by day, it’s getting bigger and better, I would say. On top of that, one of the things coming on top of Kubernetes is the service mesh. So above the services you see here, we can put a service mesh, which is a mesh layer, and then you can run a service on any cloud. It doesn’t matter if it’s a private cloud or a public cloud; you can run it anywhere, seamlessly, without impact, and a customer would not even know where your services run.

Anand Khatri  27:16

That’s the kind of research and effort going on currently, and I’m super excited about it, because it makes engineers’ lives very easy. But adoption of those things is still very rare, a minimum in the industry. Over time it will get bigger, and the adoption of those tools and technologies will go up.

Simon Erickson  27:43

Can you talk more about that? Where do we stand in terms of adoption, of how many organizations are using containers?

Anand Khatri  27:50

So, you know, cloud adoption currently stands at between 4% and 5% of the market. Container adoption is less than 0.5% currently, and that’s across all industries, not just one; that’s the overall industry figure. So you can see how big this impact can be, and how much bigger it will get. Along with getting bigger, it will get better. Because as it gets better and better, people will start adopting it. It will become very easy to adopt and run in a production environment, and that’s where container adoption will become more powerful over time.

Anand Khatri  28:45

We are still in the early days of containers. If I compare it to cloud, we are at the stage cloud was in the 2012 to 2013 timeframe. That’s where we currently are in container adoption. We are that early in the game.

Simon Erickson  29:10

Well, cloud computing certainly did rather well for investors over the next eight years. If we’re in the early days of containers, just like cloud was back in 2013, that is something we should be paying attention to!
