C3.ai, Inc. (AI) Deutsche Bank Technology Conference Transcript | Seeking Alpha

C3.ai, Inc. (NYSE:AI) Deutsche Bank Technology Conference September 1, 2022 2:45 PM ET

Ed Abbo – President and Chief Technology Officer

Juho Parkkinen – Chief Financial Officer

I think we're live again. Welcome back, everybody. I'm Brad Zelnick with the Deutsche Bank software team, really delighted to be joined for this session by the good folks from C3 AI. We are specifically joined by Ed Abbo, President and CTO, and Juho Parkkinen, CFO of the company. Welcome, guys. Thank you so much for being here.

The format of this presentation is meant to be a fireside chat. There is some chance we may even go to some visuals here in the room if the conversation takes us there. But maybe we'll dive right in, and then if we have time toward the end, we'll open it up to a question or two from the room. Is that okay?

All right. Very cool. Maybe to start, you reported a somewhat eventful Q1 last night, and I'd just like to begin by understanding the key messages you hope investors walked away with from that call.

Yes, thanks for that question. So yesterday, we had our earnings call, and the key messages really would be around three things: the results were actually quite good for Q1 from my perspective; we updated our sales model, so we are moving to consumption-based pricing, which we're introducing immediately; and we accelerated our path to profitability. Those were the key items discussed yesterday. We provided supplemental materials as part of our earnings materials on our website, which outline the model assumptions, and we encourage the investing community to take a look at them. Hopefully, that helps them understand the short-term impact on the top line as well as the long-term accelerated growth that we expect to see a few quarters out after the new model kicks in.

I can quickly run down some of the Q1 items. We did revenue of $65.3 million and subscription revenue of $57 million; both of these were up about 25% year-over-year. Our non-GAAP gross profit was $52.6 million, and our non-GAAP gross profit margin was 81%. RPO was still about $460 million, and current RPO was $173.5 million, a 20% year-over-year increase and a 2% sequential increase. One of the things we're very excited about for the quarter was the number of deals. We had 31 deals during the quarter, which is a record for us; that is 30% more year-over-year and 15% more than Q4 sequentially. And finally, we ended at 228 customers, which is a 27% increase year-over-year in customer count, and we're pretty excited about that. Going back to the top-line assumptions that come from the consumption-based pricing, there were really three or four key items that I'd like to make sure the investment community understands, and I think Ed can talk about some of the product-related items first.

Sounds good. Yes, Brad, consumption-based pricing is something we've been planning for quite some time, so this is not something we did casually. By lowering the initial commitment, customers basically pay as they go, and that model is going to expand the customer base and accelerate sales. So that's one of the things. But some of the measures we have taken have been in the works for years. Version 8 of the product, which we announced earlier, is four years in the making. So in addition to lowering the onboarding pricing commitment to use the C3 AI product, we've made significant enhancements to the C3 AI platform to accelerate onboarding of developers, data scientists and citizen data scientists. There are enhancements to the tooling for no-code, low-code and deep-code capabilities, and then there is basically everything you need for scaling up and supporting a significantly larger developer community: online tutorials, an online developer community, context-sensitive help, all of that. So it's all the foundation, in addition to the scalability of the product, that we put in place.

Then on the sales side, you know that we've restructured the sales organization, so now we have a team that, frankly, are domain experts in the industries we sell into. That's another key move we've made over the past couple of quarters. And taking a step back, it's not only the way customers want to buy, which is consumption versus upfront subscription commitments, but it's also the way our partners price. It's consistent with how the cloud partners price their products, and it's also consistent with their marketplaces and the way customers buy cloud products. So all in all, parts of these measures have been in the making for years, and we're super excited to bring this to market now.

Thanks for the update. Maybe if I could just ask a couple of follow-up questions. I mean, Tom is a visionary, a legend, a genius, and I assume a lot of thought has gone into this. But at the same time, I think it's logical to question making so many changes all at the same time; you kind of lack a control variable, if you will. And probably the most volatile of all the variables at play is the environment here, which you guys acknowledge, and in that regard you're not unlike many software companies we've been hearing from. But just in terms of the timing of rolling this all out now, how do we as investors get comfortable with that decision?

Well, if you look at the selling dynamic out in the market today under the subscription model, the way we sold was that you would start off with a pilot for $0.5 million, get that successful, and then there was a three-year commitment that we walked into, ranging anywhere from $1 million to $5 million, $10 million, sometimes larger, $30 million or $35 million. In this environment, that requires not just CFO approval but CEO approval and sometimes board-level approval, so those kinds of commitments are really being scrutinized. So the change is actually well timed, Brad, just allowing customers to get on the platform, use it and pay as they become successful. And we're very confident in the model because, and we can get to this later, we deploy applications very quickly, the payback period is very short, and then the next application can be funded with the first application's payback, et cetera. So it's actually a perfect model for this environment.

Thank you. Maybe we could double-click on some of these changes you talked about. Consumption-based pricing and lowering the entry point make a lot of sense. As we think about go-to-market, can you talk a little more about the changes being made there? Also, from a go-to-market perspective, the year has already started, right, you're now in Q2, and it's unusual to make a change then; usually the changes all happen coming out of Q4. So again, the timing, exactly what those changes are and what's expected to come with them, if we can double-click there, that would be great.

Yes, just to go back to the partner model for a moment.

Because I think that's super important. What we're doing is enabling the partners to sell in a way that's consistent with the way they sell. This consumption pricing, again, if you're Google or Amazon or Azure, that's how they price. And being on the cloud vendors' marketplaces allows customers to buy C3 AI as they would cloud services and other capabilities. The other significant expansion to the Google partnership that we announced is Google funding 50% of over 100 pilots, 134 to be specific. So that reduces the entry point for customers even further on GCP, the Google Cloud Platform. Those are the three or four measures we've put in place to dramatically scale up our business in terms of number of customers, accelerate sales by lowering the hurdle, and take market share as a result.

That makes sense. So we've got go-to-market, we've got consumption. Remind me of the others, because I did take a look, I've been a little busy here at the conference, but looking at the transcript, there were a couple of other things we talked about as well.

Well, the product, Version 8 of the product.

It's four years in the making and is very significant. Timing-wise, it worked out because that was just released; we had our customer user group earlier in the year. And it also enables scaling up our customer base dramatically. The tooling has been simplified to onboard customer developers, customer data scientists, and what's referred to as citizen data scientists, basically business analysts, to use the product very easily. And Brad, it's worth commenting that our model is different from some other software companies' in the sense that we're actually in the business of making our customers and partners self-sufficient at building complex applications, rather than us doing it ourselves. So Version 8 is a very big and important product release, and the model is that we will work with the customer and/or a professional services partner to build and deploy the first applications and train them along the way, so that they can then scale up and build their own after that. It's a highly scalable model.

Cool. And I'd just like to add to his point that in the new model, we also allow unlimited developers on the customer side, so we expect a lot more activity from customers; they will go further and develop more applications on their own. We're quite excited about that as well.

Cool. I don't want to harp too much on the recent news and the changes. Maybe we can zoom out a little for those less familiar and go back to the longer-term vision and the story here, and I'll just open it up: how would you characterize where we are in the use of AI in the enterprise in general? And what are some of the earlier use cases driving adoption to date?

It's a good question, Brad. What we see is that there are a lot of what I call science experiments out there in the customer base. Companies have figured out that they need a data science team, and they have staffed those up. But the reality is that developing the models, the AI and machine learning models, as difficult as that is, is only about 5% of getting a system deployed and scaled. What we have seen with the C3 AI platform is some very significant success stories where AI has been deployed at enterprise scale, and we've covered some of those and reported progress on them on a quarterly basis, Shell being one of them. As some of you might be familiar with, in the oil and gas industry, about a third of the time is what they refer to as non-productive time.

There's a failure in equipment, and they're not pumping oil. What Shell has done is fully deploy; they're monitoring 13,000 pieces of equipment across 25 assets. And an asset is not something small; it's city-scale infrastructure, an offshore oil rig or a massive refinery. An example for Shell is Pernis, one of Europe's largest refineries, which processes about 400,000 barrels of oil per day. These are all being monitored by C3 AI and the reliability applications they've deployed. So there are C3 AI success stories delivering hundreds of millions, even billions, in actual economic value.

Another one that we talked about yesterday and gave an update on is the U.S. Air Force and the Rapid Sustainment Office. They've deployed across 16 aircraft platforms to improve readiness rates. This is the use of AI to look at which components on an aircraft are likely to fail, whether it's a flap actuator or the auxiliary power unit, et cetera. What we're doing there is taking all the telemetry from those components and subsystems on the aircraft, the maintenance records and the weather information, unifying that and serving it up to algorithms that anticipate, 50 to 100 flight hours in advance, whether a component is likely to fail, so it can be addressed in scheduled maintenance. And their value assessment, the Defense Innovation Unit's value assessment, is $5 billion a year in potential when this is fully deployed across those aircraft. So the short answer is there are a lot of science experiments out there, but C3 AI has some very significant deployments at enterprise scale.
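
For a concrete sense of the pattern being described here, unifying telemetry, maintenance records and weather, then training a model to flag components likely to fail within the next 50 to 100 flight hours, the following is a minimal, hypothetical sketch. It is not the Air Force or C3 AI system; the file, column names and model choice are illustrative assumptions.

```python
# Minimal, hypothetical predictive-maintenance sketch (not the actual system).
# Assumes a pre-joined table of per-component features built from telemetry,
# maintenance records and weather; all names below are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical unified feature table: one row per component per observation window.
df = pd.read_csv("unified_component_features.csv")

feature_cols = [
    "vibration_rms", "oil_temp_c", "cycles_since_overhaul",
    "ambient_humidity", "recent_fault_code_count",
]
# Label: did the component fail within the next 100 flight hours?
X, y = df[feature_cols], df["failed_within_100_fh"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Components scored above a chosen threshold would be routed into scheduled maintenance.
```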

Thank you for that context. And maybe, as I reflect on some of the challenges we hear about with the adoption of AI software in production environments in the enterprise, challenges of data integration, steep learning curves, data quality and model governance, how does C3 help customers in these regards?

Yes, Brad, I know we've both been in the enterprise space for quite some time. These enterprise AI applications are an order of magnitude, potentially two orders of magnitude, more difficult to build than traditional enterprise applications like your Oracles and SAPs of the world; they require orchestrating tens or hundreds of primitives and services in the cloud. So what we have done is build a cohesive, comprehensive application platform-as-a-service. That's technically what the C3 AI platform is, an application platform-as-a-service. It sits on top of the cloud platforms, if you will, and it orchestrates all of those tens and hundreds of services without you having to write a bunch of code.

So we reduce the amount of code you need to write by a factor of 50 to 100, which means you can develop very complex AI applications with small teams of people in weeks, not months or years. At one of the banks we work with, we took deploying an AI system and models from 1.5 years to under a month; in four weeks they're able to get AI models deployed into production.

And that, by the way, includes the model risk management, or MRM, processes that are pretty strict in banks. So we handle the data integration and data aggregation from hundreds of systems; we handle model development, model management, model governance and data lineage capabilities; and we help operationalize these systems at very large scale, Brad.
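
To make the idea of abstracting an application from the underlying services more concrete, here is a toy sketch of the model-driven pattern: the application is declared as data and a small runtime turns that declaration into orchestration steps, instead of hand-writing glue code against each cloud primitive. This is not C3 AI's actual API or syntax; every name and field below is invented purely for illustration.

```python
# Toy illustration of a model-driven approach: the application is declared as
# data, and a small runtime decides how to wire up the underlying services.
# NOT C3 AI's API; names and fields are invented for illustration only.

app_spec = {
    "name": "equipment_reliability",
    "data_sources": [
        {"type": "timeseries", "system": "historian", "entity": "pump"},
        {"type": "records", "system": "maintenance_db", "entity": "work_order"},
    ],
    "features": ["vibration_rms_1h", "days_since_last_service"],
    "model": {"kind": "classifier", "target": "failure_within_30d"},
    "deployment": {"target": "cloud", "schedule": "hourly"},
}

def plan(spec: dict) -> list[str]:
    """Translate the declarative spec into concrete orchestration steps."""
    steps = [f"ingest {s['entity']} from {s['system']}" for s in spec["data_sources"]]
    steps.append(f"compute features: {', '.join(spec['features'])}")
    steps.append(f"train {spec['model']['kind']} for {spec['model']['target']}")
    steps.append(f"deploy to {spec['deployment']['target']} ({spec['deployment']['schedule']})")
    return steps

for step in plan(app_spec):
    print("-", step)
```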

Yes. And as we think about the environment we're in, ROI is being heavily scrutinized by customers, and time to value is crucially important. How do you compress that time to value in these really complex projects we're talking about?

Fundamentally, Brad, to develop and deploy an application, you have to align four stakeholder groups. One, obviously, is the business, so you have a subject matter expert from the business saying this is the exact problem we need to solve. Second are the people in IT who do the data aggregation; we call them data engineers. Third is the data science team, another separate group. Fourth are the application developers. What the C3 AI platform does is provide a cohesive environment that all those stakeholders can collaborate in and, importantly, as I mentioned before, reduce the amount of code being written by abstracting the application from the underlying services.

So we're dramatically reducing that, and we're deploying applications quickly. One of the fastest into production was four weeks, a very complex inventory optimization application deployed in four weeks for a manufacturer. Getting those deployed, and then taking some of the benefits and investing them in the next application, is how we scale up very quickly.

I'd say the second piece is packaging up applications. We have 40 prebuilt, industry-specific applications that we've developed and invested in over a decade now. In manufacturing, for example, it's everything from AI-based demand forecasting to stochastic inventory optimization to supplier parts characterization. A huge topic over the past couple of years has been supplier delivery, so we have a supply network risk application that's prebuilt, proven, tried, tested and delivered at many customers.

All of these are accelerators to customers getting benefit. Why would you want to build that from scratch when you can take something that's been proven at a large company before, configure it, drop it into production and get value almost instantly? So in addition to having an AI platform that customers can use to build apps, they have apps for manufacturing, aerospace, defense, intelligence, ESG and AI CRM available out of the box to deploy quickly.
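
As a rough illustration of the kind of problem a stochastic inventory optimization application addresses (the specifics of C3 AI's prebuilt application are not described here, so this is a generic newsvendor-style sketch with made-up demand and cost figures):

```python
# Generic newsvendor-style sketch of stochastic inventory optimization.
# This is NOT C3 AI's application; the normal-demand assumption, costs and
# forecast figures are illustrative placeholders only.
from scipy.stats import norm

def optimal_stock_level(mean_demand, std_demand, unit_cost, unit_price, salvage=0.0):
    """Single-period stocking level that maximizes expected profit."""
    underage = unit_price - unit_cost   # margin lost per unit of unmet demand
    overage = unit_cost - salvage       # cost of each unsold unit
    critical_fractile = underage / (underage + overage)
    return norm.ppf(critical_fractile, loc=mean_demand, scale=std_demand)

# Example: a demand forecast of ~1,000 units with a standard deviation of 150.
print(round(optimal_stock_level(mean_demand=1000, std_demand=150,
                                unit_cost=60, unit_price=100)))
```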

And I just want to put in a good plug here: if you go to ir.c3.ai, we uploaded new demo videos of most of these applications that Ed was just talking about. We're also opening up registration next week for live demo sessions over the next two months. So for whoever is interested, there will be live demo sessions with the product manager, who can do Q&A and walk through what these various applications do.

Today, if I look at where you've had a lot of traction, it seems the oil and gas and utility sectors have an outsized representation in your mix. What is it about the platform that makes it more conducive to those? And how do we think about diversifying more broadly across verticals?

I think it's important to note that the C3 AI platform is architected to be applicable to any industry sector. There's nothing specific about those industries; it's just that that's where we started. About 12 years ago, we knew it was important not to build an application platform without actually understanding the applications that would be built on top of it.

So we just picked an industry; it happened to be energy. The reason we picked it was because the industry was at the early stages of the smart grid, spending trillions on sensorization of the power grid, everything from smart meters at the customer end through all sorts of instrumentation on the grid, things called phasor measurement units that take measurements at around 120-hertz cycles, through to generation, et cetera.

So we cut our teeth on that problem, Brad. Some of the deployments we had have tens of millions of endpoints with sensors generating data every 15 minutes, plus phasor measurement units, et cetera.

The rate at which we were ingesting data there was upwards of one million messages per second, and then we were unifying that data and doing things like forecasting how much load or consumption there is going to be at an endpoint and how much generation, which is critically important given renewables.
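
To put those ingestion figures in perspective, here is a rough back-of-the-envelope calculation. The device counts are illustrative assumptions; the 15-minute meter interval, the roughly 120 Hz phasor measurement rate and the "upwards of one million messages per second" figure come from the discussion above.

```python
# Rough, illustrative arithmetic on smart-grid data rates.
# Device counts below are assumptions for the sake of the example;
# the reporting intervals come from the conversation above.

smart_meters = 20_000_000          # "tens of millions of endpoints" (assumed count)
meter_interval_s = 15 * 60         # one reading every 15 minutes
meter_msgs_per_s = smart_meters / meter_interval_s

pmus = 8_000                       # phasor measurement units (assumed count)
pmu_rate_hz = 120                  # ~120 measurements per second
pmu_msgs_per_s = pmus * pmu_rate_hz

total = meter_msgs_per_s + pmu_msgs_per_s
print(f"Smart meters: ~{meter_msgs_per_s:,.0f} msgs/s")   # ~22,000
print(f"PMUs:         ~{pmu_msgs_per_s:,.0f} msgs/s")     # ~960,000
print(f"Total:        ~{total:,.0f} msgs/s")              # on the order of one million
```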

But anyway, I digress. The application platform is applicable to manufacturing, utilities, oil and gas, defense, aerospace and intelligence, and it's built as a highly configurable, model-driven application platform.

Great. I want to pivot back to the public clouds and, in particular, the Google relationship that has evolved. It sounds like a really interesting development that could potentially be very fruitful. But as I zoom out, there are a lot of strong messages and capabilities from each of the respective clouds themselves in terms of AI, whether it's SageMaker from AWS or even the data warehousing capabilities of the various workbenches and things that they do.

Clearly, you do a lot more than that. And I think Tom has always said that he views the major cloud providers more as partners, and we're seeing that with this Google development. But competition has been the norm in software for decades, so what's the risk that some of these vendors start to develop more AI orchestration capabilities and become more directly competitive?

Yes. Let me just step back for a moment and describe how the C3 AI platform actually uses the services from the cloud platforms first, and then talk about the view for how this industry will evolve. We take full advantage of the cloud services, and the platform obviously can also be deployed in a private cloud behind the firewall.

You can deploy C3 AI on the edge and on servers, so we're fully multi-cloud, hybrid and edge deployable. And if you're running on Google Cloud, we take full advantage of Vertex AI; we take advantage of Bigtable, BigQuery, all of the differentiated services that GCP has.

So we have an architecture that calls and orchestrates these services provided by the cloud. You can take that same application and move it from a private cloud onto GCP, and it will operate. You can take algorithms and move them onto the edge, and they will also fully operate.

What we're doing is accelerating the time to value by reducing the amount of code and the complexity associated with these projects. And as this market evolves, it's similar to the enterprise application market: ultimately, what people want are apps. So you'll see the portfolio of applications, which started off as a dozen and is now 42-plus, dramatically expand to the hundreds of applications that are applicable.

That's ultimately where this goes, Brad. Essentially, just like SAP in the enterprise market today, people don't think about building an HR system from scratch or, God forbid, a general ledger system from scratch; they just go to SAP and get it.

And that's ultimately the journey we're on. But the AI platform, importantly, does provide multi-cloud capabilities, so you can take the app and put it on any cloud or run it across clouds. That's a very important capability that customers want today and will want tomorrow.

Got it. Ed, I've got a question for you. As we transition to more of a consumption model, how should we judge your progress? What are the metrics you're going to use internally to measure and to hold your sales organization accountable to? And ultimately, how should we as investors continue to measure your success as you transition?

That's a great question. So yesterday, we provided a set of assumptions and some visuals to try to help the investing community see how we are thinking about this and what the key inputs are that not only we are tracking, but that we would propose, or at least suggest, the investing community keep an eye on as well, for example, the expected number of pilots each sales rep would close per year, which would be four in the numbers we shared yesterday. So there are a bunch of details that everybody interested in investing in C3 AI should take a look at and build their models accordingly, and that should really help.

Is there any one particular North Star? Or I mean...

It's going to be the customers. The more customers we get, the more the consumption will grow from there.

Yes, I should back up and explain this concept of a pilot, because the term is used in many different ways. When we say pilot, we're basically talking about a six-month project to take an application and actually deploy it into production. That involves taking the application, installing it, potentially in the customer's cloud environment, configuring it to plug in customer data from their systems, training the models, and putting it into user acceptance testing and in the hands of users so they can start to get value out of it.

An example might be C3 AI Inventory Optimization, or it might be C3 AI Demand Forecasting, C3 AI CRM or C3 AI ESG, but it starts with one, and we get that into production as a production pilot in that period of time so they can start to generate value from it. In that same pilot period, we're actually training the customer and partner developers on the platform so that they can develop additional applications in that first six months. And from there, they're on a consumption model where they're paying for consumption.

Yes, that's exactly right. So the first six months are at a set price, and then it goes to pay-as-you-go at $0.55 per CPU-hour.
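
As a rough illustration of how that pay-as-you-go rate translates into spend: the $0.55 per CPU-hour rate comes from the conversation above, while the workload sizes below are purely assumed.

```python
# Illustrative consumption math at the $0.55 per CPU-hour rate mentioned above.
# The vCPU counts and continuous utilization are assumptions, not company guidance.

RATE_PER_CPU_HOUR = 0.55
HOURS_PER_MONTH = 730  # average hours in a month

for vcpus in (16, 64, 256):
    monthly = vcpus * HOURS_PER_MONTH * RATE_PER_CPU_HOUR
    print(f"{vcpus:>4} vCPUs running continuously: ~${monthly:,.0f} per month")
# 16 vCPUs  -> ~$6,424/month
# 64 vCPUs  -> ~$25,696/month
# 256 vCPUs -> ~$102,784/month
```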

Got it. We're almost out of time. Any questions from within the room? Anyone, anyone? I guess with that – yes, do you want to share the video?

We've only got a couple of minutes left, but that would be great.

Perfect. Let's play it in.

That's awesome. So these are really complex applications. It's not just a machine learning model or two; fielding some of these might require something like two million machine learning models operating against the data the company is streaming in from millions of sensors.

And this is an enterprise AI application, so we thought we should show one. That is not our video, by the way; it's from the U.S. Air Force, and that's how they promote their AI predictive maintenance application. We'll leave it at that.

That's awesome. It really helps bring the technology and its use to life. With that, we are out of time. Gentlemen, thank you so much. Really appreciate you being here, and we look forward to having you again sometime soon. Thank you.

Thank you, Brad. Really appreciate the opportunity.