
C3 AI Knows How to Deploy AI at Scale. It Might Be a 10X Opportunity for Investors.

C3.AI's Chief Executive Tom Siebel is a Silicon Valley billionaire who knows how to deploy technology at scale. That could make the company an intriguing investment opportunity.

May 10, 2023

C3.AI’s (NYSE: AI) Chief Executive Tom Siebel is a Silicon Valley billionaire. And he knows a thing or two about deploying technology at scale.

In the early 1990s, his company Siebel Systems built upon the momentum of Oracle’s (NYSE: ORCL) relational database commercialization to create a new market called “Customer Relationship Management”. CRM allowed companies to collect and store information about their prospective customers, making it much more efficient to conduct marketing campaigns, land new sales deals, and provide service follow-up for any issues.

The CRM market caught on globally and has reached a massive enterprise scale, worth nearly $80 billion today.

And while Siebel sold his company to Oracle back in 2005 for a cool $5.8 billion, he’s ready to re-join the tech party and participate in its fascination with artificial intelligence.

AI is Different Than CRM

A new decade calls for a new tech acronym. Yet AI is a completely different beast than CRM.

Siebel explained at last week’s MIT EmTech Digital Conference that AI is perfectly suited for physical systems. We’re quite happy to ask machines to monitor and optimize things like pressures or temperatures in industrial settings. Predictive maintenance — using sensors for continual monitoring to identify potential device failures before they happen — is actually one of the largest applications of AI. Shell (NYSE: SHEL) is C3’s largest customer and is recognizing $2 billion of value each year from its global platform deployment.
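For readers curious what that kind of monitoring looks like under the hood, here is a minimal, hypothetical sketch of a predictive-maintenance check: a simple rolling z-score detector that flags sensor readings drifting far from their recent baseline. The sensor values, window size, and threshold are all invented for illustration; this is not C3’s platform or Shell’s deployment.

```python
# Hypothetical illustration of a predictive-maintenance check:
# flag sensor readings that deviate sharply from their recent baseline.
# This is a toy z-score anomaly detector, not any vendor's actual system.

from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Return indices of readings far outside the trailing window's distribution."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Invented example: pump temperatures (degrees C) that ramp gently, then spike.
temps = [70.0 + 0.1 * i for i in range(40)] + [95.0, 96.5]
print(flag_anomalies(temps))  # flags the sudden jump at the end
```

Real deployments replace the toy threshold with trained models and feed alerts into maintenance scheduling, but the core idea is the same: continuously compare live sensor data against expected behavior.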

Where things get trickier is when the decisions are more subjective. Where there isn’t a clear mathematical answer, AI falls back on how it was trained to come up with the most likely answer. And that is where ethics starts to become a gray area.

Sociology is becoming a crucial piece of AI design, to ensure large-scale deployments aren’t perpetuating cultural bias or causing societal harm.

We’ve seen this story play out before. In 2016, Microsoft (Nasdaq: MSFT) unveiled its brilliant and playful new chatbot “Tay” to social media. In theory, it was meant to learn from others and provide helpful information and fun conversations. In reality, it saw society’s dark side and became a misogynistic racist, responding in ways that were both offensive and abusive. Microsoft pulled the plug on Tay less than 24 hours after it launched.

Siebel didn’t pull any punches when describing the dangers of social media, saying it “might be the most destructive invention in the history of the world.” He pointed out that it allows for the manipulation of billions of people who are addicted to the release of dopamine, and has led to problems that range from depression and poor self-esteem to slave trading and interference in democratic elections.

This is why C3 AI has taken a completely different approach to the AI movement than OpenAI.

OpenAI opened the floodgates last November, launching ChatGPT to the world and attracting 100 million users within just two months. Now, it looks to offset its hefty costs (mostly computing costs for upfront model training and ongoing inference) by attracting an enterprise following. It’s not so different from the approach Uber (NYSE: UBER) took in the ridesharing industry.

Yet C3 is being far more methodical. Siebel’s credibility has gotten him in the door with the world’s highest-level corporate executives and Washington DC administrators. They’ve asked him to deploy massive projects, ranging from surveillance to identify extremists, to recruiting and promotion decisions in the US, to setting health insurance premiums.

And after four decades of leadership roles, Tom knows what’s at stake here. The last thing he wants is to take on a high-profile “Tay project” that goes horribly wrong, causing collateral damage both to society and to his own reputation.

Instead, Siebel is avoiding landmines and placing bets on the opportunities where AI will make the most sense. He sees precision health, which aims to prevent medical conditions before they occur (and at a lower cost), as a multi-billion dollar opportunity for the Veterans Administration. Yet he warns that the use of AI to ration health care and set insurance rates based upon a patient’s genome would be “unethical and disturbing.”

The Role of Regulations

There’s also a higher power at play: the regulations intended to steer the responsible deployment of AI. There are 99 problems with AI — privacy issues, fake news issues, election tampering issues, et cetera.

AI is extraordinarily powerful — more so, in Siebel’s telling, than the steam engine, World War II, or communism. The potential consequences could make an Orwellian future look like the Garden of Eden. We need to discuss what the implications are.

Furthermore, the regulatory proposals thus far have been “kind of crazy.” One suggested forming a “federal algorithm association” that would regulate all AI algorithms before they can be made publicly available. But there are hundreds of millions of new algorithms created every single day. It seems highly unlikely that a government-funded “FAA” could possibly keep pace with all of them.

Another idea was a six-month moratorium on AI research. This got quite a bit of media attention and even the support of several executives, including Elon Musk. But it seems unlikely as well. Even if enacted, it would be impossible to enforce. And any organizations that continued their development outside of the rules — whether in MIT, Shanghai, or Russia — would gain a competitive advantage over those who abided by the law.

And finally, it’s worth noting that regulations are regional in nature and very difficult to apply to a global user base. The European Union has different regulations than China. Even within the US, California has different regulations than Pennsylvania (or any other state). The “world wide web” is becoming a splintered collection of regional internets.

Siebel hasn’t yet seen any proposed regulations that truly make sense. But his stance is that commercial enterprises that deploy AI — including his own C3 — should be mindful and methodical about the projects they agree to.

The 7investing Key Takeaway

Even with hints of a dystopian future and colorful descriptions of AI’s potential harms, Tom concluded his presentation on an inspiring note. He employs 1,000 people today and expects to employ 10,000 very soon. Upfront conversations with CEOs and Chief Data Officers can define whether projects are doable, valuable, and ethical. Companies should also be upfront about how they’re training their neural networks, unafraid to disclose their biases.

Those transparent discussions can help to steer AI in the right direction. And that’s good for society, for C3 AI, and for its long-term investors.
