
Investing in the AI Revolution with GigaOm CTO Howard Holton (Part 1)

How should you invest in artificial intelligence? 7investing CEO Simon Erickson speaks with GigaOm CTO Howard Holton about several of the cutting-edge innovations underway and the most lucrative opportunities for investors.

April 25, 2023 – By Simon Erickson

The tech world moves fast, and no one wants to get left behind.

Emerging technologies like generative AI, large language models, and open-source platforms have the potential to completely transform individual businesses or even entire industries. Those who embrace them will profit, while those who don’t will become irrelevant.

Yet a “Hype Cycle” also tends to accompany new technologies. Several movements in the tech world that were believed to be the Next Big Thing turned out not to be; 3D printing and NFTs are recent examples.

How should forward-thinking and growth-minded investors separate the game-changers from the flashes in the pan? Which new technologies are actually gaining momentum, and which will never live up to their expectations?

To answer those questions, we’ve brought in an expert. 7investing CEO Simon Erickson recently spoke with Howard Holton, the Chief Technology Officer of GigaOm. GigaOm brings the decision-making executives of progressive companies up to speed on emerging technologies and then helps them implement those technologies across their organizations. (You can also see last year’s conversation with GigaOm CEO Ben Book here.)

In Part 1 of their conversation, Simon and Howard first addressed the status quo of generative AI. AI is being used for ‘fun’ things today, like creating lifelike images through Midjourney, but even this requires significant computing power. Howard explains that innovative companies are already deploying AI at scale, but that they need appropriate data strategies and governance policies in order to maximize their success rate. This is similarly true for the flood of recent large language models; those that endure will require filters to curate the noisy flood of data from all across the internet in a way that is actually usable and trustworthy for businesses. One key advantage of AI is that it does not share the same biases as human beings.

The two then turned their sights on hardware, specifically the custom silicon being designed by hyperscalers like Amazon, Meta Platforms, and Microsoft. Chipmakers like AMD and NVIDIA will still have an endless runway of future demand, though niche applications will also continue to be served by customizable chips like FPGAs.

In Part 2 (which we will publish on Thursday, April 27th), Simon and Howard discuss how the cloud’s Infrastructure-as-a-Service providers, such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure, are finding that cloud computing is becoming more commoditized. Each of the Cloud Titans is looking to create a platform for developers, who are comfortable with their capabilities and eager to deploy what they’re already familiar with at their organizations.

Howard then spoke in detail about the Metaverse. While it is intriguing in theory, he believes it will be very difficult to moderate or control offensive content, and that monetizing the Metaverse for any corporation’s profit could be counter-productive to furthering the interests of its users. He and Simon do agree that digital advertising is a likely income stream to emerge from the Metaverse: a next evolution of the personalized advertising we’ve grown used to in display ads on websites and video platforms.

In the final segment, Howard discusses the importance of trust in the future of AI. While he believes several AI projects are likely overhyped and will eventually go bust, some that are well-designed and execute well could be incredibly valuable and profitable. Companies should hire a “Chief Trust Officer” who can verify the biases purposely imposed on AI models.

Publicly-traded companies mentioned in this interview include Alphabet, Amazon, AMD, Confluent, CrowdStrike, Meta Platforms, Microsoft, MongoDB, and NVIDIA. 7investing’s advisors or its guests may have positions in the companies mentioned. 

Transcript

Simon Erickson  00:05

Hello, everyone, and welcome to this edition of our 7investing podcast, where it’s our mission to empower you to invest in your future. I’m 7investing founder and CEO Simon Erickson, and you can learn more about our long-term investing approach at 7investing.com. You can also see our stock recommendations, where we come up with our seven favorite stock market opportunities each and every month. You can learn more and get your first month free at 7investing.com/subscribe.

 

Simon Erickson  00:29

But today, let’s chat about the digital transformation. This is something a lot of companies are excited about; using new technologies to improve their own operations. And I’m very excited to welcome my guest today. Howard Holton is the Chief Technology Officer of GigaOm. He’s working for a company that’s deploying cutting edge emerging technologies at scale for large enterprises and also helping others make more informed decisions about them. Howard, welcome to our 7investing podcast today.

 

Howard Holton  00:54

Yeah, thanks for having me, Simon, how are you today?

 

Simon Erickson  00:57

Doing wonderful. You’ve had quite a career at the forefront of innovation for two decades, and I’m pretty excited to hear some of your insights. So we’re going to chat about hardware. We’re going to talk about processors, we’re going to talk about cloud. We’re going to talk about the Metaverse. We’ve got a full slate of discussion topics.

 

Simon Erickson  01:13

But maybe let’s start at a 10,000-foot level. I mean, Howard, you have done this for so long. You’ve said, in your own words, that leadership is the successful execution of vision. Can you start us off by just talking about where the tech world stands right now? What are some things that people are really excited about, or that you’re seeing embraced and deployed out there?

 

Howard Holton  01:35

So that’s a really good question. I think the tech world stands kind of at the same place it did in the PC revolution. And it’s really interesting, right? We were very comfortable with the mainframe for years, and years, and years, and then these little boxes came along that changed the control of power within an organization and the ability to not have to time-slice quite the same way, and really changed how we worked and how we operated, with the PC. We’ve seen this before. Cloud was another big change in how we computed economics, how we computed value, what we invested in, just as organizations, not necessarily as investors, right.

 

Howard Holton  02:23

And today, we’re sitting at the precipice of the AI revolution. And it’s interesting. AI is not a new concept. It’s not a new thing. But what’s interesting about it is that AI requires so many things to be in the right place at the right time for it to work. It requires an open legislative worldview, because AI needs to be a little unbound by legislation in order to find the innovation that will absolutely have to be legislated. But it also requires a level of processing power and density that we haven’t really had. It kind of requires cloud to be effective. It requires these hyperscalers to really enable it, but it also requires data in volumes that we’ve never really had before.

 

Howard Holton  03:13

So I think, as an industry, technology is really kind of at the precipice of this next big quantum leap in potential: potential to change the world, to change how we interact with it, to change what we think about.

 

Howard Holton  03:28

I was just reading this morning that there is a video game that right now, today, has the very first AI-powered NPC. So the NPC in game can change how it acts based on the actions of the player. Not based on a script, but based on the actual actions of the player. It’s a super interesting and super exciting time to be alive. I don’t have to mention ChatGPT, you know. I’m sure that popped into everybody’s head. But it’s a really interesting time to be on this podcast, to be talking about this, and to be asked that question.

 

Simon Erickson  04:01

It is. And so let’s go to the front lines, right. Let’s dig a little bit deeper into AI.

 

Simon Erickson  04:06

This isn’t just something that’s being done for fun. I’ve used Midjourney and I’ve seen the amazing pictures you can use AI to generate. You’re talking about gaming, and this isn’t something that’s just beating humans in Go or in StarCraft. Good luck beating the AI anymore. But now it’s starting to get some enterprise adoption and some interest out there. Where does that stand? I mean, you’re on the front lines. Like we said, Howard, you’re working with enterprises and helping them understand AI. How do they want to deploy this out there?

 

Howard Holton  04:33

Well, I think the big focus is on ChatGPT. And if we talk about that for a second, that’s different. That is a generative AI that is different than general-use AI overall. Within the context of AI there’s a million different use cases and a million different ways to apply it. And many of those are currently in use in business. And they should be in use in business. And if your business isn’t currently using AI somewhere, you probably should be reviewing that.

 

Howard Holton  04:33

But if we focus on ChatGPT and generative AI, there’s a whole lot of discussion about that: how do I use that in my business? And how do I take advantage of that in my business? And my advice would be: enjoy, but be cautious. Because generative AI is not to be trusted.

 

Howard Holton  05:23

I’m not saying that as some sort of doomsday prediction at all. It simply will make things up.

 

Howard Holton  05:32

And so personally, if you want to use it personally, and you want to write a blog using ChatGPT or you want to read a book or whatever, go for it. Have fun. The blast radius of reputational damage is relatively small for you as an individual. As an organization, however: ChatGPT has made up the answers to mathematics problems. It just doesn’t understand math well yet. It’s made up answers to many questions, like how many different inputs are used to train ChatGPT? It makes up the answer because the answer is not publicly available. It has made up the answer on other generative AI projects as to how many training factors they used, how many training points they used. It just makes them up. When it doesn’t know the answer, it makes it up.

 

Howard Holton  06:21

And that’s okay. But it’s not okay for something that you may rely on for business. It’s especially not okay for something that you may rely on for strategy, or for things that could impact your reputation. So I would really be cautious about how you use ChatGPT or other generative AIs in ways where you’re expecting to be able to make money.

 

Simon Erickson  06:46

Okay, so AI is a chronic embellisher of the truth. I knew some people like that back in high school (laughs). And now it’s just at a much, much larger scale than that. I mean, we’ve got different large language models, right? Everyone seems to have their own flavor of how to filter all of this information out there, to separate the actual signals and get rid of the noise, basically. And we’ve got different LLMs out there. You’ve got Alphabet’s, that is, Bard. We know that Facebook’s got Llama. You mentioned GPT. It seems like Elon has just launched his own, which he says will seek the truth. Everyone wants to have the right filters and the right screens to get the right answers out there.

 

Simon Erickson  07:19

How do you take this? Why do we have so many of these different language models? Which of these is going to endure? Or how do we even progress with AI before we can address some of the hurdles that we’re facing right now?

 

Howard Holton  07:32

Well, to be honest, we’re progressing exactly the way we should be progressing with AI. We need multiple models, we need multiple competitors. We need as much. Think of it like Pandora’s box, right? Once Pandora’s box opens you’re not putting Pandora back in the box. And what we really need is as many eyes on it as possible to say, oh, no, no. Now wait a minute, that’s probably not the right way to do that. We should do it this way instead. Only to have someone then come back and say, Well, yeah, that’s not really the right way either. We probably should do this instead.

 

Howard Holton  07:59

This is the Wild Wild West. We’re all out here staking claims. We’re all out here with pitchforks, trying to find the next goldmine. The reality of the situation is we need a lot of competition; we need far more competition than we have, exponentially more. Competition doesn’t mean you’re gonna make money at it. There’s going to be a whole lot of death and destruction along the way of finding what the truth will be. Within generative AI and large language models we are miles away, years away from reality. And the current version of ChatGPT, just as a single LLM, is version four. Version five is in the works.

 

Howard Holton  08:37

With Bard, we saw kind of what happened with version one, when Google tried to show it live at a conference and it went horribly, horribly wrong. That’s okay. We shouldn’t look at those as failures, but rather as steps on the path of learning what the next step is. And we definitely have what we could call a garbage-in, garbage-out problem, but that’s not really accurate. You need as many training points for an LLM as humanly possible, and that effectively means all of the internet.

 

Howard Holton  09:14

The problem is, you shouldn’t trust all of the internet. And therefore any AI learning from all of the internet is going to come along with that same level of distrust and that same level of bias. So it’s really important to be cautious, to be critical, and to be skeptical. But at the same time really pay attention to kind of what’s happening.

 

Howard Holton  09:39

Of all the LLMs, the one that concerns me most is Elon’s, to be honest. Because he lately seems to have an attitude of not wanting to be left behind, and he’s making some rash decisions. What he’s done with Twitter has really shown an operational side of how he makes decisions that does not imbue me with a lot of trust in his leadership style or his ability to carry that forward. Today, OpenAI is the current leader in these generative AIs and LLMs. But Microsoft’s made a huge investment. Google has shown historically that they absolutely have the ability to achieve in these new spaces. And Facebook, Meta, is sitting on an enormous wealth of data about the people that live in the world, which can directly impact and kind of leapfrog their LLM’s ability to speak to humans. Which is the point.

 

Howard Holton  10:50

You know, there are also additional privacy issues. There are copyright and content-ownership issues that we see within these LLMs. But I think it’s a wonderful time to participate. My big concern is that we’re going to jump to conclusions on legislation when we look to other places in the world and see what they’re doing. The faster we regulate AI, the better it will likely be for the privacy concerns, I have no doubt about that. At the same time, the faster we regulate or legislate AI, the worse it will be for innovation. And we have to keep in mind that this is a globally competitive space. So we have to balance our legislative desires, and thus our privacy-protection desires, with our desire to remain an innovation leader in the world. And this is the new frontier.

 

Howard Holton  11:51

So we really need to think about, and we really need to be cautious in, how we do that. My big concern is that they’re going to attempt to legislate the way they have many times in the past. And the key is to invite the right people into the room: you need people that understand the purpose of law and how technology operates, to be that bridge. You need to have more than just technologists in the room and just lawyers in the room. You need to have people that really have a firm understanding of how that bridge works, to be able to work through the nuances and make sure that it’s not just the loudest voice in the room that wins.

 

Simon Erickson  12:28

That’s a great point. There’s a call for a moratorium from several technologists and business leaders saying, “Hey, let’s slow down until we understand how to control this before it gets out of hand.” Elon Musk, among others, is on there.

 

Simon Erickson  12:40

We have seen that when you open things up to the public, it can be terrifying, right? Do you remember Microsoft’s “Tay” chatbot that went out there and ingested everything from social media and then turned out very badly? They flipped the switch back off, I think, less than 48 hours later.

 

Simon Erickson  12:54

But it seems like the opportunity for AI is to put the right filters on the data to make sure you’re getting high-quality data, and then actually get it to make whatever decision it is you want to make. Is that the right way to think about this? Are companies that approach the process in a business-constructive way going to make a ton of money from this movement?

 

Howard Holton  13:14

AI has the potential to change business, not because it’s smarter than anybody else, but because it doesn’t have the same biases as we do.

 

Howard Holton  13:21

So we did this, we did an experiment at a former employer on behalf of a retail organization. The retail company wanted to increase revenues by 3%. That’s all they were looking for. Pretty big ask. But you know, that’s the specific ask. And that was the totality of it.

 

Howard Holton  13:38

And so we fed all of their data into an AI system, which included all of their camera data, their surveillance data from the stores. And what we expected was the same thing the customer expected. We expected the AI to come back and say, run a promotion here, lower cost here, add a register here. And instead what it said was: make more employees walk through this area of the store. Not a single solitary person in the room would have thought about that, and we’re no dummies, right. What the AI did was look at the video surveillance. And what it saw was that people would go over to the shelf, they’d pick up an object, they’d look at it for a minute, they’d look around for a person, then they’d hold it in one hand, pull out their phone, do something on their phone, wait again, dwell for a bit, put it back on the shelf and walk out of the store.

 

Howard Holton  14:05

And you could almost see them typing the products into Amazon.com. It was effectively just that people weren’t getting help in the way that they needed. And once they put people there, they saw more than that 3% increase. That’s the kind of thing that AI is really good at. AI is really good at making connections that the human brain, because of our natural biases and the idea that we know better because we work in it, does not make. And so there is a tremendous amount of potential that exists within AI to change the world in incredible and very positive ways. And a ton of money will be made. There is investigation required, though, because not all AI is created equally.

 

Simon Erickson  15:13

Digging a layer deeper into how this actually happens behind the scenes. When you’re doing machine learning inference, that’s a ton of computing horsepower you need. GPT has been documented as spending, what is it, hundreds of thousands of dollars every day just for the computing and the Azure infrastructure behind the scenes that’s running the applications for these things. We know that NVIDIA and its GPUs have kind of been the workhorse carrying a lot of that thus far: the parallel computing of GPUs over CPUs. And in a great move we saw yesterday, or just a couple of days ago, Microsoft unveiled its own AI workhorse chips. I think they were calling them Athena. Custom silicon to address the unique needs of AI workloads.

 

Simon Erickson  15:59

Are you hardware agnostic? Do you think we’re just going to continue to see the NVIDIAs of the world, and maybe AMD with the EPYC 4 processors, continue to get iteratively better and better? Or is this going to be something where we see more and more of these custom chips, like Facebook and Amazon have done and Microsoft is now doing, to address some very unique slivers of this bigger picture?

 

Howard Holton  16:21

I think we’re always going to see custom chips. It’s kind of funny. 35 or 40 years ago, we saw custom chips everywhere, right? And then ASICs kind of became the name of the game. FPGAs became the name of the game. We can now program custom code into the chip and have the chip act the way we want it to. And yet, we still see the rise of the CPU, we still see the reliance on the CPU.

 

Howard Holton  16:52

And the fact is, the reason all this stuff exists is because a processor is not one thing. It’s many things.

 

Howard Holton  16:58

It’s a gated array, for lack of a better term, that ultimately is purpose-built to solve a problem. We’re not going to see the Intels and AMDs of the world kind of go away. If we look at the ASIC processors that we all carry around in our little mobile phones, those were supposed to replace x86 hardware a decade ago. And ARM processors have not been able to come anywhere close to really accomplishing that in a massive way. I would argue Apple has been the most successful with their M1 chip. But there are still workloads that simply run extremely well on traditional x86, 64-bit processors. I don’t think we’re gonna see that go away.

 

Howard Holton  17:48

Matter of fact, a really cool AI project, if you look into it, is one where they’re trying to use AI to determine what the next molecular structure of silicon will be, the silicon that we use to create wafers and, from those, processors. Because we feel we’re getting to the end of the silicon that we are currently using. That’s a very interesting project when you kind of look into it, and not something that I would have thought about before I started looking into how processors are made.

 

Howard Holton  18:21

But as workloads change, we’re going to follow that same evolutionary roadmap. We’re gonna try it on a CPU first, because it’s easy. That’s the easiest way to get it working: I have all the coding I need to do that, all the translation I need, the programming language I need. And then as we find the limitations of the CPU, we’re going to try the next best available thing, in this case probably a GPU. And then we’ll move to a TPU. And then we’ll likely reach the limitations of each of those and go, okay, is there enough value within this space and within this workload and within this computing style to spin up a new type of processor and create a dedicated processor for this particular workload?
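To make that CPU-to-GPU-to-TPU progression concrete, here is a minimal, hypothetical sketch (not from the interview): the same workload handed to whichever backend happens to be available. It uses JAX purely as an illustration, and the array size and function name are made up for the example.

```python
# A hedged sketch of the progression Howard describes: the same workload,
# moved to whichever accelerator the runtime exposes. Illustrative only.
import jax
import jax.numpy as jnp

x = jnp.ones((4096, 4096))

# jax.devices() reports the active backend: CPU by default,
# GPU or TPU if one is present.
print("running on:", jax.devices()[0].platform)

@jax.jit  # compiled for the active backend; the code itself is unchanged
def workload(a):
    return (a @ a).sum()

print(workload(x))
```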

 

Howard Holton  19:05

So I think we’re gonna continue to see what we’re seeing. And I think we’re gonna see more of it as it accelerates. And again, I think this is a place where AI is likely going to be applied.

 

Simon Erickson  19:18

Can you take that maybe one step further, Howard, and talk about FPGAs a little bit more? This is a field-programmable gate array. Like you said, you can actually program the processor to do exactly what you want it to do, and that can change over time; it doesn’t have to be just one static application. Xilinx was what … $40 billion? The acquisition of Xilinx that AMD just made was for this very purpose: building out an ecosystem for really smart developers and semiconductor engineers to train their chips to do whatever it is they want. Do you think this is gaining adoption out there? Are you seeing FPGAs more these days?

 

Howard Holton  19:58

So if I want to write code for a CPU, I just write code. If I want to then take advantage of a GPU, I use an API and libraries from Nvidia or AMD to do that, right? And I’m still just writing that code that way.
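As a rough illustration of the contrast Howard draws, here is a hedged sketch: the CPU path is plain code, while the GPU path goes through a vendor library. CuPy, which wraps NVIDIA’s CUDA stack, stands in as the illustrative API; the array size is arbitrary.

```python
# Sketch of "just write code" for the CPU vs. going through a vendor
# library for the GPU. Illustrative only; requires CuPy for the GPU path.
import numpy as np

a = np.random.rand(2048, 2048)

# CPU path: plain code, no special API.
cpu_result = (a * a).sum()

# GPU path: same math, routed through the vendor library.
try:
    import cupy as cp  # needs an NVIDIA GPU and the CUDA toolkit
    a_gpu = cp.asarray(a)                       # copy data to GPU memory
    gpu_result = float((a_gpu * a_gpu).sum())   # kernel runs on the GPU
except ImportError:
    gpu_result = None  # CuPy not installed; skip the GPU path

print(cpu_result, gpu_result)
```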

 

Howard Holton  20:13

FPGAs aren’t done that way. FPGAs are programmed at the assembly level and require people that really understand how to write to FPGAs. So I need specialized knowledge, special talent, to take advantage of an FPGA. And I need a workload that’s really kind of purpose-built for that. And the workloads that are purpose-built for that are generally those that move bits around. So if I want to build a new piece of networking gear, and the last piece uses CPUs, and I want to add in intrusion detection (as a terrible example, but as an example), I can use an FPGA to do that. I can offload all my IDP workloads to the FPGA. And the FPGA can process all of that work extremely quickly. But then I have proprietary hardware. Right, so now I’m burning my own hardware. I’m not using anything off the shelf. I’ve got custom software developers with a very unique skill set. And I’m running custom code that addresses all of that custom work.

 

Howard Holton  21:24

And it’s interesting, five or six years ago we kind of moved everything away from custom silicon. All the storage manufacturers, everybody, was really pushing towards x86 hardware. No custom silicon. We pulled all our FPGAs out; it’s now consumer off-the-shelf systems, which put all of the intelligence in the software. Which was a great move at the time. We saw a big change in storage, from RAID to erasure coding, which effectively moved my error correction, my data protection, from the controller and the hardware into software, and that opened up a whole bunch of advantages.
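For readers unfamiliar with the RAID-to-erasure-coding shift Howard mentions, here is a minimal, hypothetical sketch of data protection computed in software rather than in a hardware controller. It shows single-parity XOR (RAID-5 style) as the simplest case; real erasure codes such as Reed-Solomon generalize this to survive multiple lost chunks. Chunk names and sizes are made up.

```python
# A toy, software-only parity scheme: lose any one data chunk and rebuild
# it from the survivors plus the parity chunk. Illustrative only.

def xor_parity(chunks: list[bytes]) -> bytes:
    """XOR all chunks together to produce a parity chunk."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the single missing chunk from the survivors plus parity."""
    return xor_parity(surviving + [parity])

data = [b"chunk-A1", b"chunk-B2", b"chunk-C3"]   # equal-length chunks
parity = xor_parity(data)

# Pretend the second chunk was lost; rebuild it from what's left.
recovered = rebuild([data[0], data[2]], parity)
assert recovered == data[1]
print(recovered)
```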

 

Howard Holton  22:07

And then we kind of went well, there’s still some value to FPGAs. Now we need to think about FPGAs again. But again, narrow use cases. Very specific use cases for hardware companies. They’re not going to go away. I don’t think they should go away, they serve a very good purpose. But they’re not going to replace a GPU, they’re not going to replace a TPU. They’re going to be in addition to and for very specific purposes. And generally for moving bits around.

 

Simon Erickson  22:35

Fascinating. That’s a two beer conversation right there, Howard. Maybe it’s a two bourbon conversation. Or tea, I know you’re more of a tea guy than a coffee guy.

 

Simon Erickson  22:44

Let’s go back to the cloud. Let’s talk about the infrastructure providers or the cloud Titans out there.
