
Big Tech’s Obsession With Custom Chips

7investing lead advisors Anirban Mahanti and Simon Erickson take a closer look at the process that goes into custom chip design and manufacturing.

September 29, 2022 – By Simon Erickson

The world’s largest tech companies are driving forward with important new projects. And they’re obsessive about designing custom chips to make them happen.

We see examples of this in products that we use every day. Amazon (Nasdaq: AMZN) has designed chips so that Alexa can understand the questions you're asking and respond with accurate answers. Meta Platforms (Nasdaq: META) has designed chips for image recognition and to power the Metaverse. Tesla (Nasdaq: TSLA) has designed chips so its self-driving cars can understand their surroundings and react to them autonomously.

These projects are massive in scope and can cost hundreds of millions of dollars to implement. Where’s the opportunity in this for investors?

In today's 7investing podcast, 7investing lead advisors Anirban Mahanti and Simon Erickson take a closer look at the process that goes into custom chip design and manufacturing. The two look at several consumer-facing companies, including Amazon, Meta, Tesla, Apple (Nasdaq: AAPL), and Alphabet (Nasdaq: GOOGL), and analyze the specific chip designers and manufacturers they work with to make these projects a reality. There's big money up for grabs for investors who are a step ahead in deciphering which direction the industry is heading.

Anirban and Simon recorded this episode of the 7investing Podcast in front of a live audience on Monday, September 19th. If you would like to attend future live recordings of our podcast, we invite you to sign up for free on our new 7investing Events page.

Publicly-traded companies mentioned in this interview include Alphabet, Amazon, Apple, Broadcom, Intel, Meta Platforms, Qualcomm, Samsung, and Taiwan Semiconductor. 7investing’s advisors or its guests may have positions in the companies mentioned.

Unedited Transcript

Edited transcript coming soon!

Simon Erickson  00:02

Okay, welcome everyone to another live recording of our 7investing Podcast. Today is September 21, 2022. My name is Simon Erickson, and I'm joined by Dr. Anirban Mahanti. We are both lead advisors here at 7investing. We do have an ongoing podcast, and we've been having some fun lately, mixing these up and actually recording them in front of a live audience.

 

Anirban Mahanti  00:23

Love it. Yeah, it's great to do it in front of a live audience, unless you have a little bit of tech challenges. But today it actually got off without any challenges, which is awesome. And it's great to do this live, so it's not scripted, and we can have our back and forth. We do some prep, of course, but it's great to do it in this live format. And we love taking questions.

 

Simon Erickson  00:49

Yeah, absolutely, it is a little bit more fun, more interesting when you have a live Q&A, and not quite as scripted, like you said, as several of our other podcasts. We did just record a couple last week that we're rolling out on our podcast this week, on Snap, you know, Snapchat, and then also on Upstart. And I think it's kind of neat to have these back-and-forth conversations with investors about a lot of these companies. For those who are unfamiliar, who've never heard of 7investing before, our site is 7investing.com. We offer seven stock market recommendations each and every month. We are long-term investors, buy-and-hold investors, looking for the best opportunities we want to hold on to for at least three years. So come check us out if you do want to see our recommendations: 7investing.com/subscribe, promo code 7 will get you the first month for $7. That's a significant discount. And thank you for listening to this podcast.

Anirban, today we're going to be talking about custom chips. You and I are really interested in the semiconductor industry, and also in the big tech industry. And there's been a fascination for several years now with big tech companies designing their own chips, right? It's not just going to Intel, it's not just going to NVIDIA or AMD or somebody and saying, hey, are you the guys that can do this for us? We've got hyperscalers now, we've got tech companies that are growing quickly, they want to make processes as efficient as they possibly can, and that's leading them to actually bring a lot of design in house. Any thoughts about what's going on out there as we're looking at some of these companies?

 

Anirban Mahanti  02:25

Yeah, at a high level, I guess one of the things to ask is: why? Why do companies want to design their own chips? Previously, in the past, if you wanted to make a computer, you'd just go to someone like Intel, you'd take the chip, and then you'd design your computer around it. But I think one of the issues you discover is that as these chipmaking companies basically became duopolies or oligopolies, they got immersed in their own, I would call it, roadmap, right? They had their roadmap, which then meant that the people, for example, designing, say, Macintosh computers, would have to follow the Intel roadmap to figure out: well, what chips do we use? What can we do? What's the power-performance trade-off? What can we enable based on those chipsets? Because ultimately that's what's running the assembly language code, right? Your operating system is going to run on top of that, and there are capability limitations that show up. So I would say, a little bit, people got tired of the roadmaps. You have to follow somebody else's roadmap, and people got limited by it.

Also, the other thing to remember is that when you are designing for so many people, you're going to design for the common good. There's a common agenda. It's almost like multiple political parties coming together to form a government, like what happens in a minority government: many people come together, which means the agenda is really a mixture, a common agenda. Which means that the big tech companies who wanted to design their machines based on a certain principle couldn't really do it that way, because they had to follow the common ground, or basically what the chipsets offered. That sort of led some companies to start thinking about, you know, specialized design, more hardware-software control, and things like that. So yeah, I think that's the stage we're currently in.

 

Simon Erickson  04:37

I think so too. You know, I think it's a fair statement that every company, or most companies, are tech companies now. They're not only putting chips locally into the devices they're selling to us, that we're using, but they're also building out their own data centers. And you know, we've talked a lot about cloud computing and training AI models; a lot of this is actually processed in neural networks in data centers that are now owned by these tech companies, which are learning and training the devices that we're buying from them. So there are kind of two levels to this. I wanted to maybe set the scene a little bit about what this means and how companies play a part in this ecosystem, and then we'll look into some specific examples of tech companies that are designing their own chips and who they're working with.

Just to set the table on this, there are kind of three levels. There are the companies providing the application or selling the product to the end consumer at the end of the day: this is the Teslas, the Apples, the Microsofts, whoever it might be. But a lot of times, as they're designing the chips that are going into those products, they're working with chip designers. And this was, like you said, the greatest good, the common agenda: there were companies like NVIDIA or AMD, or even Intel, that made chips that were high performance and more and more efficient; you get more and more workload efficiency per watt of power consumed with each one of these. But then companies started saying, we want to work with other businesses to design these from the ground up for us. And so there became a new layer of ASICs, of application-specific integrated circuits: companies that would design the chips for you, which you could then go out and manufacture.

And the third layer, of course, is who's actually manufacturing these, at the cheapest possible cost and the highest performance you can get. So when we're talking about this, we're going to be talking about the end companies that are selling the products, whatever they're putting the chips into. We're talking about the designers who are helping those companies make those chips. And then we're also talking about the manufacturers, who are actually running the fabrication facilities that produce the chips and deliver them to the customers.

So let's start with a fun one. Anirban, I know that you are a really big fan of this company, but Apple. Apple is making a lot of its own chips that are going into smartphones and other smart devices. You've followed this one for several years now. What's the story of why Apple wants to do this, and who is it working with?

 

Anirban Mahanti  06:59

Yeah, so Apple basically started making its own chips first with the iPad. I think one of the first iPads came out with what they called the A4 chip, and they used that chip as a starting point. At that point it was basically just a CPU, not even a GPU, if my memory serves me right. And then later on, that same chip got put into, I think, the iPhone 4. And since then, basically, Johny Srouji, who leads their hardware, or basically the processor side of their business, they've continued making the chips and continually improving the chips for their smartphones. And I think their rationale was: how do you stay ahead in terms of power and efficiency on that curve? How do you stay ahead of the competition? Well, you can stay ahead of the competition only if you design your own chips, right? Because if you use the common chip that's out there, then the performance benefits are only going to come from software, right? Not from hardware. Any performance benefit you got would not show any real differences when you run benchmarks, for example; if you run benchmark tools, you'll get the same thing, because it's an Intel chip, or whoever's provided that chip. So I think that started their journey, and I think it allowed for much tighter hardware-software control, right? And that philosophy, and then the ability to use the latest technology available so you can go smaller, has actually allowed them to go into wearables much better than any other company has. Basically, Apple owns the wearables industry, largely because they had the experience of designing chips on the smartphone side, which then allowed them to design the chips for the AirPods, and now the AirTags, and the chips for the Apple Watches, right?

And that is a substantial benefit, because what they could do there, in that small form factor, was just phenomenal. And they've taken that experience and said, well, one way to give the Mac a significant leg up is to design your own chips and change the architecture. So they went to a 64-bit ARM-based design. Yes, it involved changing the applications that were running on the Intel-based design previously, but they essentially built something called Rosetta that runs on the machine, to allow the software to be changed over; and even if the software hasn't changed, it can still run through Rosetta, so you can still run the Intel software. And now the Mac has performance that, in that class, nobody else has, right? And significant battery life improvement, for example. My M1 Mac, you just turn it on and it just boots; there's no spinning of the wheel, it just turns on. From an ordinary user's point of view, that's phenomenal. And everything that happens is just super fast; there's no lag in anything. And then for pro users it means a whole lot more, because they can do so much more, whether it's video editing or, you know, compiling code and running stuff on the device. So I think that's the edge you can get by bringing the hardware and software design all together. And no one does it better than Apple when it comes to thinking about hardware and software integration and making the experience a wonderful experience. So I think that was the motivation. And, as you just alluded to, it's also breaking out of that loop, the cycle of somebody else's development windows, right? You don't have to follow that anymore. You're basically gated by the companies that actually produce these chips; when your chips can be produced, that's the gating factor.

 

Simon Erickson  11:10

It's the perfect example of hardware and software integration, right? And Tim Cook, who we've always praised as a supply chain master, right? He's integrating a lot of things. And you know, at the end of the day, you want battery power to last as long as possible, and to run as many applications with the best camera that you possibly can have on your smartphone, or any of the Apple devices. And so it makes sense. Apple did it in house; they've got deep enough pockets to design all of these chips. They are manufacturing them with a third-party manufacturer called Taiwan Semiconductor. This is a familiar name for those 7investing subscribers who have heard us talk about it several times. But this is a company that has pushed process technologies to the limit. Moore's law is something we talk about all the time: how many transistors can you pack, as densely as possible, into a chip that's going to be used for a certain process? And Taiwan Semi has done this better than anyone else on the globe, including its largest competitors like Intel and Samsung. And it's worked with Apple as its number one customer for years now; everybody knows it has been the manufacturer of Apple's chips. They're dependent on each other. Apple doesn't want to go out and spend, you know, $30 billion on each one of the cutting-edge fabs that it could. It wants to say to Taiwan Semi: we've got the design, this is what we want our next evolution of the iPhone to look like. It's been a great relationship for them for several years. Absolutely. Absolutely. So, so great.
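The Moore's law cadence described here can be sketched with a quick back-of-the-envelope calculation. This is an illustrative sketch only: the starting transistor count and the strict two-year doubling period are assumptions for the example, not actual TSMC or Apple figures.

```python
# Illustrative sketch of Moore's law: transistor counts roughly doubling
# on a fixed cadence. All numbers here are assumptions, not real chip specs.

def projected_transistors(initial: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a chip's transistor count forward, assuming one doubling
    every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Assume a chip ships with 10 billion transistors today; under a strict
# two-year doubling cadence, four years later we'd expect about 40 billion.
print(projected_transistors(10e9, 4))  # -> 40000000000.0
```

The exponential form also makes clear why falling a node behind compounds quickly: every missed doubling period halves your relative density.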

 

Anirban Mahanti  12:26

And Apple, you…

 

Simon Erickson  12:30

Go ahead, I'm sorry, go ahead. Go ahead. You know, Apple has been a consumer electronics powerhouse for years; they do not run a cloud service infrastructure. But Amazon does. And Amazon Web Services has been another one of these tech companies that really has a use case for designing its own chips. When you're talking to A-L-E-X-A, and we all know who I'm talking about there, and I don't want to trigger anyone who's listening to this podcast in the background, but you know, you've got the smart speaker in your home that's responding, and it does it incredibly quickly. And there are applications built upon that: you've got to have natural language processing, so it can understand what you're saying. You've got to have inference, so it can understand the words; it's not just a command prompt all the time. And then it's obviously got to respond to you with the correct answer. And a lot of that same kind of processing is built into AWS, the same functionality for its web infrastructure and the applications that are used there. And so Amazon, a couple of years back, said, hey, it would be great if we could design our own custom chips for this and put them into the data centers that are powering these things. And it goes out and designs Inferentia. It actually worked with a third-party designer called Alchip for the AI applications it was working with, and now it's actually manufacturing them, again with Taiwan Semi, for a lot of the production. So that's another use case, another one of these big tech companies, Amazon, that said, hey, this is an application where it makes sense for us to do it in house. I've seen reports that it costs between $30 and $50 million to design Inferentia; a drop in the bucket for a company that's as big as Amazon is.

 

Anirban Mahanti  14:02

Yeah, I think the important thing there, again, is: if you are a hyperscaler, how do you get the edge, right? And again, the hardware and software combo can give you a substantial edge on what type of services you can offer, at what cost you can offer them, and how efficient your data centers are. And I think that's again a classic case of using deep pockets to get sort of a moat. You can't keep the other hyperscalers out, but you can keep at bay so many other people, because they will be dependent on commodity designs, maybe open-source software, to try to compete with Amazon. So I think those are advantages that accrue to the big companies at this stage.

 

Simon Erickson  14:49

And so on that note, let's talk about another hyperscaler that has leveraged open-source design quite a bit, and that's Alphabet, formerly known as Google. It's got its own cloud ambitions lately, and as you've pointed out so many times on our own 7investing site, it's growing quickly, right alongside AWS in the cloud. Alphabet is leveraging TensorFlow, right? And it's got its own Tensor Processing Units that it custom designed as well. This is what actually brought in Broadcom to do a lot of the work with them; Broadcom is designing ASICs, those application-specific chips, for customers that want some help with the building blocks of making their own chips. But it seems like something that Alphabet has done very well. They've used it for the artificial intelligence in their own data centers, and they're using chips in their own devices; you can buy Google phones out there, and various other things. But Anirban, oh my goodness, they've got so many products with a billion-plus users out there. I mean, efficiency has got to be the name of the game for a company like this.

 

Simon Erickson  15:53

Ah, maybe some technical problems. Anirban, I hope you are still with us. But you know, Alphabet is just a company that, again, has done it very well. Cloud ambitions; you can kind of see the direction they want to go. Anything else to add on a company like this? They certainly have a lot of demand for their products out there, I think.

 

Anirban Mahanti  16:09

Alphabet's trajectory here is very similar to what Amazon has done, right? Their use case is very similar to Amazon's in many ways. And it makes sense for them to, again, own the design, because then you can build custom hardware and custom software again. Yeah.

 

Simon Erickson  16:27

It's interesting, too, because Alphabet is taking things a step further and is designing quantum computing chips now. Their Sycamore processor actually has a completely different architecture, different than a CPU, different than a TPU. This is a quantum computing chip that they're starting to play with right now, and they might open something like that up to commercial business in the near future, similar to what they're doing with a lot of the applications in Google Cloud Platform. But again, it's pushing the limits: a deep-pocketed tech company doing some cool design work for chips. You've got to applaud Alphabet for that. Let's talk about another one. Anirban, you're a big fan of Tesla. Tesla is incorporating chips not only in its cars, but also in the data centers it's using to train a lot of those vehicles. What can we say about Tesla? I know this is one that you're a big fan of.

 

Anirban Mahanti  17:14

Yeah, so I think it's interesting, because if you think about what they're trying to do: in the car, for example, they have custom chips, what they call the FSD computer. Basically, the vision system sees stuff, and whatever it sees needs to be processed and run through the logic quickly, right? So there are two custom chips on the onboard computer in your car that basically process what the car sees and then make decisions. So again, it's a classic case of a very specialized problem. It's not a general problem they're solving; they're solving a specific use case, and therefore having a chip that satisfies their power requirements, their compute requirements, their latency requirements; all of those things are important. And you know, that chip was not as advanced, at least in terms of process technology. If I remember correctly, it's a 14-nanometer design, and it's probably about three years old now. When they were using 14 nanometer, I think Apple was using a 7-nanometer design at the time. So it tells you that they went for the cheaper fab option there, and I think Samsung was the manufacturer. But the way it's designed, you can take out the chip, because it's basically a small module that just plugs into the board. So you can take it out and swap it for another one. Because before this one, which is called Hardware 3, they had another chip they were using, called Hardware 2, and then there was Hardware 2.5, and I'm very sure that at some point there's going to be a Hardware 4 and a Hardware 5.

And it makes sense, because on the one hand, as you just noted, they have the neural network training that they do on a computer called Dojo. That's, again, internally designed, with internal chips on it. So the algorithm is created and trained there, but then the algorithm's output basically runs locally on the machine in the car. Eventually, as the algorithm's complexity increases, it may not run on the machine in the car anymore; you need a more performant machine in the car, so you need to upgrade. And again, here it makes perfect sense, because you can control the design cycle and the IP. If the hardware is the limitation, then you can change the hardware very easily, and you can design the hardware to work as required for your application, instead of depending, for example, on a solution from NVIDIA. When Tesla first released its chip, one of the comparisons they did was with NVIDIA's best-in-class chip that was being used for self-driving applications, and the claims, in terms of teraflops, for example, were that their machine was significantly better than NVIDIA's, right? And NVIDIA actually had to write a blog saying, oh, we appreciate what they're doing, and ours is better in this way. But again, it's a validation, saying that look, this is very good. So now they've sort of decoupled themselves from the NVIDIA loop, which is what a lot of these other companies will be using; they'll be depending on NVIDIA, and there's nothing wrong with that. I think the manufacturer of a system at scale can afford to design its own; not everyone can, so you need a designer who's going to take care of the others, right? Which in this case, it looks like it's going to be NVIDIA.

 

Simon Erickson  20:47

Teraflops: trillions of floating-point operations per second. That's a fun dinner conversation term to throw out there, which is obviously something you're looking at when you're building neural networks. I love the description, Anirban. And again, for Tesla's cars, this is something where they went out and just said, we're going to put Autopilot out on the roads as much as possible, and we're going to train these cars with millions of miles of actual driving data. And it's going to be video, right? It's not going to be these geofenced locations where you drive around for 20 minutes in a small loop that's one mile around the park while you're collecting the data. They said, we're going to put them out there on the road, and we'll train these vehicles with the neural networks. And that was something that Elon did very well. Because, like you said, there are two layers of abstraction to this: you've got the vehicle with the chips that's collecting all the data, and then you've also got the neural network that's been trained to say, okay, that's a deer that's running out in front of you, you need to stop for that, or a stop sign, or whatever else it is. And so Tesla has gotten a real jumpstart over most of the rest of the world and the other self-driving programs, just because it was collecting more data and training its neural networks, and of course the custom hardware it designed itself was a key factor in that as well. I'm glad you pointed out, too, their manufacturing with Samsung. That's a little different than some of the other companies we've talked about; it's one that Tesla went with here recently, and a big one for Samsung, to have a customer like that.
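To put the teraflops unit defined here in concrete terms, a quick back-of-the-envelope sketch follows. The 100-TFLOPS chip rating and the workload size are made-up numbers for illustration, not actual Tesla or NVIDIA specs.

```python
# A teraflop is one trillion (1e12) floating-point operations per second.
# Back-of-the-envelope: how long does a chip with a given TFLOPS rating
# take to chew through a workload? All figures below are illustrative.

TERA = 1e12

def seconds_to_run(total_flops: float, chip_tflops: float) -> float:
    """Seconds needed to execute `total_flops` floating-point operations
    on a chip sustaining `chip_tflops` teraflops."""
    return total_flops / (chip_tflops * TERA)

# An assumed 100-TFLOPS chip working through 3.6e17 operations:
print(seconds_to_run(3.6e17, 100.0))  # -> 3600.0 seconds, i.e. one hour
```

The same arithmetic run in reverse is why headline TFLOPS comparisons matter: at a fixed workload, doubling sustained TFLOPS halves the training or inference time, all else being equal.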

 

Anirban Mahanti  22:10

Taiwan Semi is, of course, probably the leading manufacturer in this space, with leading-edge process technology. And Samsung is an interesting one, because Samsung was the one that Apple actually started with, and then Apple switched back and forth between using Samsung and Taiwan Semi. And in this case, Tesla, for example, went first with Samsung. So Samsung, as you said, is one of the big three or four in the space, hence it's worth remembering what Samsung's semiconductor business is doing.

 

Simon Erickson  22:42

It's been interesting. It's certainly a fight for the most lucrative customers out there, Tesla included. Another company that's got big ambitions for making its own chips is Meta. Again, a name change: it used to be known as Facebook, which of course initially wanted to do these things for facial recognition and things like that for the site itself. But it's got even bigger ambitions now with the metaverse. Zuckerberg has revealed that he's building data centers that are cutting edge; he's putting NVIDIA chips in there too, but Facebook is also designing a lot of its own custom hardware, and I think the metaverse is going to be very computationally heavy. We have seen Facebook working with Qualcomm on the design of a lot of the chips that are going into the headsets themselves, that compute locally for the metaverse: those are the virtual reality headsets that you put on, which are now wireless instead of plugging into a computer for the processing. But then, also, you've got to have the data centers and the supporting infrastructure for all of the processing that goes along with this. Any thoughts on Zuckerberg and the metaverse? It's something they're spending tens of billions of dollars on, Anirban. It seems like it's perpetually a couple of years out, but there's no denying that they've put the work in up front to design, and spent very heavily, on making this happen.

 

Anirban Mahanti  24:03

This could become a separate podcast altogether.

 

Simon Erickson  24:09

Grab your beer or coffee, here we go.

 

Anirban Mahanti  24:14

My view on the metaverse: so Zuckerberg's situation, I think, at a high level, is you're damned if you do and you're damned if you don't. And by that, what I mean is their expertise at designing hardware and software is sketchy to poor, in my opinion. Because their hardware to date, whatever they have designed, the Facebook phone, the camera, the smart speaker, all of them were actually royal flops, right? They're just as big a flop as the Amazon Fire Phone. So I don't think they have anything they can write home about as a success on the hardware front. For a company like that to then think they can design hardware that's going to be super successful is actually quite bold. But what do you do if you don't? Because otherwise you're going to be ruled over by what the other OS platforms are doing. So I think that's their situation; they kind of are in that boat. I think it's smart for them to focus on designing the chips for the hardware, or for the data centers. But here's an interesting thing. I get why Amazon designs chips, because they actually have a service called AWS, and a lot of stuff requires that chip performance and cutting-edge design. I get why Alphabet does it. I don't get why Meta does. Now, as a comparison point, my favorite company in terms of how carefully it deploys capital is Apple, right? Apple does not design chips for the data centers, because you can use what Amazon and Google provide, and that's what they use. So it is not clear to me that Facebook couldn't get away with leveraging AWS or Alphabet's infrastructure, or even Microsoft's infrastructure. So a lot of their bet, I think, is far out, and it's a bet that, oh, we will get something out of this. The way I look at it, they want to have the design experience.

That design experience can then feed into some of the products, which may allow them to control the metaverse. And you know, they look at Apple as their biggest enemy. Fine, look at Apple as your biggest enemy, because they're giving you a gut punch with ATT, App Tracking Transparency. But you're forgetting that Alphabet is out there, and Alphabet controls Android. What makes you think that Alphabet is going to let someone else have the underlying operating system for running the metaverse, as an example, if the metaverse is ever to materialize into anything? So yeah, in many ways I would draw comparisons between Facebook's spending and what Snapchat is doing; a lot of what they're doing is very similar. The only difference is that Facebook has huge amounts of free cash flow that it can actually deploy. Whether that money burns, or actually delivers a return on the investment, we don't know. But it seems that the strategy is very similar: you're basically throwing a few darts out in the wild, and maybe one of them hits the bullseye.

 

Simon Erickson  27:26

And to be fair, Zuckerberg has a lot riding on the metaverse, right? He renamed the entire company to show the priority he's putting on this. I would say that the vast majority of people, and I think this is a fair statement to make, are not in the metaverse today; they don't have virtual reality headsets that they put on when they come home. This is something that, again, Zuckerberg, you know, loves to be the visionary, loves to do things that he says will be very, very common and very, very popular three to five years out. But again, it's a huge bet. You can't mess this up. If the latency is still bad for the headsets, or if it's a bad experience in the metaverse, or if they don't have the privacy controls, all these kinds of things are huge factors that are going to take years to figure out, and the experience for the consumer is bad. That's a lot on the line for a company as large as Meta. You've got to applaud his ambitions, but at the same time, you know, you've got to make sure they do this right. And that goes, as we just mentioned, not only for the headset and the local processing, which is going to be ridiculously computationally heavy, but also for the backend infrastructure as well. So like you said, hey, maybe we should have a follow-up podcast on just Meta and Facebook, because there's a lot going on. It's not just a website where you're sharing your interests anymore.

 

Anirban Mahanti  28:38

That’s right. That’s right. Yeah, I agree. That’s maybe a forerunner for a future episode.

 

Simon Erickson  28:44

Great. So, you know, here we are. We've highlighted a couple of big tech companies, and I want to be respectful of time; we're coming up on 30 minutes past the hour. We chatted about Amazon, we chatted about Meta, we chatted about Tesla, we chatted about Apple, and we chatted about Alphabet. All of these are tech companies, but hardware has become more and more important for the computation behind whatever it is those tech companies want to do. And so Anirban and I, you know, we have all these kinds of fun conversations, and I always enjoy chatting with him about the semiconductor piece, the hardware piece of this: how does that figure into the bigger picture? Who are they working with to design these chips? Who are they working with to manufacture these chips? We hope we shed a little bit of insight on how that looks, and how that could be interesting for all of us as individual investors as well. Anirban, I always enjoy these conversations with you. Thanks for being part of the live broadcast here this afternoon. [Anirban: Thank you for having me.] And thanks, everyone, in our live audience; we appreciate the people who showed up and were here with us live. Once again, if you're listening to our podcast on our other channels, use the promo code 7 (the number seven) at 7investing.com/subscribe if you would like to check out our recommendations every month. Anirban this previous month went with a company that we all know quite well, and we're getting a fantastic valuation on it right now. So come check out our stock recommendations at 7investing.com/subscribe; again, 7 is the promo code. That wraps up today's podcast. My name is Simon Erickson. We are 7investing, and we are here to empower you to invest in your future. We hope you have a wonderful day.
