Q3 2024 DigitalOcean Holdings Inc Earnings Call

Participants

Melanie Strate; Head of Investor Relations; DigitalOcean Holdings Inc

Paddy Srinivasan; Chief Executive Officer, Director; DigitalOcean Holdings Inc

W. Matthew Steinfort; Chief Financial Officer, Interim Chief Accounting Officer; DigitalOcean Holdings Inc

Raimo Lenschow; Analyst; Barclays

Mike Cikos; Analyst; Needham & Company, Inc.

James Fish; Analyst; Piper Sandler & Co.

Gabriela Borges; Analyst; Goldman Sachs & Company, Inc.

Jeff Hickey; Analyst; UBS

Josh Baer; Analyst; Morgan Stanley & Co. LLC

Pinjalim Bora; Analyst; JP Morgan

Presentation

Operator

Thank you for standing by, and welcome to the DigitalOcean third-quarter 2024 earnings conference call. (Operator Instructions)
Thank you. I'd now like to turn the call over to Melanie Strate, Head of Investor Relations. You may begin.

Melanie Strate

Thank you, and good morning. Thank you all for joining us today to review DigitalOcean's third-quarter 2024 financial results. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer; and Matt Steinfort, our Chief Financial Officer. After our prepared remarks, we will open the call up to a question-and-answer session. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information.
I refer specifically to the discussion of our expectations and beliefs regarding our financial outlook for the fourth quarter and full year 2024, as well as our business goals and outlook. Our actual results may differ materially from those projected in these forward-looking statements. I direct your attention to the risk factors contained in our filings with the Securities and Exchange Commission and those referenced in today's press release that is posted on our website.
DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today.
Additionally, non-GAAP financial measures will be discussed on this conference call, and reconciliations to the most directly comparable GAAP financial measures are available in today's press release, as well as in our investor presentation that outlines the financial discussion on today's call. A webcast of today's call is also available in the IR section of our website.
And with that, I will turn the call over to Paddy.

Paddy Srinivasan

Thank you, Melanie. Good morning, everyone, and thank you for joining us today as we review our third-quarter 2024 results. DigitalOcean had a successful third quarter, continuing to deliver progress on our key metrics and executing on the initiatives we laid out earlier in the year, further establishing ourselves as the simplest scalable cloud.
In my remarks today, I will briefly highlight our third-quarter results, share tangible examples of how our increased pace of innovation is benefiting our customers, discuss the continued momentum we are seeing with our AI platform, and give an update on our strategic partnerships and engagement with the developer ecosystem.
First, I would like to briefly recap our third-quarter 2024 financial results. Revenue growth remained steady in the third quarter at 12% year over year, with solid performance in core cloud and continued growth in AI. Despite lapping difficult comps from our managed hosting price increase in April 2023 and from the Paperspace acquisition in July 2023, we continue to see momentum in demand for our AI/ML products, where Q3 ARR again grew close to 200% year over year.
In addition, we saw revenue growth contributions from new customers and steady growth from our core business as we continue to enhance our customer success and go-to-market motions. Having delivered strong results through the first three quarters, we are increasing the lower end of our full-year revenue guide by $5 million and the top end by $2 million.
We continue to focus the majority of our product innovation and go-to-market investments on our Builders and Scalers, who drive 88% of our total revenue and are growing 15% year over year, ahead of our overall 12% revenue growth.
We also delivered strong adjusted EBITDA margins at 44% and have maintained our full-year free cash flow margin guidance as we continue to manage costs effectively while still investing to accelerate product innovation in cloud and AI. Matt will walk you through more details on our financial results and guidance later in this call. Let me start by giving you an update on our core cloud computing platform.
In Q3, we continued our increased product velocity, specifically focused on the needs of our largest and fastest-growing customer cohort: the 17,000-plus Scalers that drive 58% of our total revenue and that grew 19% year over year in the quarter.
In Q3, we released 42 new product features in total, which is almost double what we delivered in the previous quarter.
We are accelerating features that will benefit our existing and potential Scalers that are on other hyperscaler clouds today.
Let me now provide a few highlights from these efforts that are specifically focused on the needs of these larger workloads.
We announced the early availability of Virtual Private Cloud peering, or VPC peering for short, which gives customers the ability to connect two different VPCs on the DigitalOcean platform, within a data center or between different data centers. Through VPC peering, customers can create strong data isolation and privacy via direct and secure networking between resources that doesn't expose traffic to the public internet.
Our Global Load Balancer, or GLB, is now generally available for all of our customers. GLB offers global traffic distribution based on the geographical proximity of the end user, enabling lower-latency services; dynamic multiregional traffic failover, enabling greater service availability for our customers' applications; data center prioritization; edge caching; and automatic scaling of the load balancers.
We are thrilled to be able to roll it out to all of our customers, particularly to Scaler customers with existing multinational deployments that will benefit directly from this new product.
During the third quarter, we progressed daily backups from early availability to general availability, giving our customers the additional flexibility to manage backups at a daily or weekly cadence.
This enables increased protection for our customers' workloads: with daily backups, we automatically retain the seven most recent backup copies.
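The retention behavior described here, keeping only the seven most recent copies, can be sketched as a simple policy. This is an illustrative example only, not DigitalOcean's implementation:

```python
from datetime import datetime, timedelta

def prune_backups(backups, keep=7):
    """Return the backups to retain: the `keep` most recent by timestamp.

    `backups` is a list of (name, datetime) tuples. Illustrative sketch
    of a keep-N-most-recent retention policy.
    """
    return sorted(backups, key=lambda b: b[1], reverse=True)[:keep]

base = datetime(2024, 10, 1)
backups = [(f"backup-{i}", base + timedelta(days=i)) for i in range(10)]
kept = prune_backups(backups)
# Only the seven most recent daily snapshots survive pruning.
assert len(kept) == 7 and kept[0][0] == "backup-9"
```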
This was an explicit need given the large volume and growth of data we are seeing on our platform, with our Spaces object storage footprint growing 50% year over year.
We are also launching larger Droplet configurations, including 48 vCPU memory-optimized and storage-optimized Droplets, 60 vCPU CPU-optimized and general purpose Droplets, and larger 7-terabyte and 10-terabyte disk density variant Droplets.
These large Droplet configurations are particularly relevant to our Scaler customers, who can quickly scale up workloads that require more CPU, memory, or storage, versus horizontally scaling out with multiple nodes.
In September, we announced Kubernetes log forwarding, which enables centralized log management for Kubernetes customers, simplifying the monitoring and troubleshooting of their applications on the DigitalOcean platform.
This was built with simplicity in mind: with just a few clicks from the Kubernetes settings panel, customers can easily forward cluster event logs from Kubernetes directly to DigitalOcean Managed OpenSearch for further analysis. We also enhanced application security for our cloud-based managed hosting product by introducing a new malware protection solution, and saw 3,650 net activations within the first week. To date, we have seen near-zero false positive and false negative rates from our malware detection.
This malware protection capability is now one of the fastest growing revenue generating product modules we have seen on our managed hosting platform.
All these innovations are not only helping us meet the needs of our large customers but also helping us move customers with these larger workloads from purely usage-based to committed contracts. For example, an existing cybersecurity customer of ours, Cyble, a leader in threat intelligence, signed a multi-year, seven-figure commitment this quarter.
The decision to continue leveraging DigitalOcean and sign a multi-year deal was driven by the release of our new large premium CPU-optimized Droplets, which help customers run computationally heavy workloads.
Cyble is a petabyte-scale company, and after several weeks of diligence, they chose DigitalOcean for this new workload due to our scalability, coverage, and cost efficiency.
Another great example is Traject Data, who signed a multi-year commitment for a broad portfolio of DigitalOcean services, including over 500 Droplets, Managed MongoDB, Spaces, Backups, and Volumes.
Traject Data requires robust, scalable, and reliable infrastructure to power their real-time, clean, and bulk-processed data insights, serving domains including marketing, retail, and analytics.
They use the DigitalOcean platform to host their APIs and manage vast amounts of search engine results page and e-commerce data to deliver critical insights to their customers.
These product innovations and enhanced customer engagement are also helping customers migrate workloads to DigitalOcean from the hyperscalers.
One specific example is Picap, a leading ride-sharing and logistics company based in Latin America, operating in Mexico, Brazil, Peru, and Colombia. In the third quarter, they moved all of their workloads from various clouds to DigitalOcean due to the simplicity of our products, our transparent and simple pricing model, and strong support from our customer-facing teams.
Another example is NoBid, a customer specializing in optimizing ad revenue for online publishers through real-time bidding technology. Upon technical validation of the DO platform's scale, they moved most of their large-scale production applications from a hyperscaler to the DO platform, reinforcing our opportunity to increase our share of wallet with our Scaler customers.
Next, let me provide some updates on the AI/ML side. Our AI strategy reflects our belief that the AI market will evolve in a similar fashion to other major technology transformations, with initial progress and monetization at the infrastructure layer, which will eventually be eclipsed by the opportunities and value creation of the platform and application layers.
Like others in the market, today we are actively participating in the infrastructure layer, but we are also innovating rapidly in the platform and application pillars to make it easy for our customers to use GenAI at scale without requiring deep AI/ML expertise.
This is where we see our differentiation as our customers seek to consume AI through platforms and agents rather than building everything themselves using raw GPU infrastructure.
At the infrastructure layer, we made GPU Droplets accelerated by NVIDIA H100 Tensor Core GPUs generally available to all of our customers.
Now, all DigitalOcean customers can leverage on-demand and fractional access to GPUs, which is a critical step in achieving our overarching mission of democratizing AI for all customers.
In Q3, we also announced the early availability of NVIDIA H100 Tensor Core GPU worker nodes on the DigitalOcean Kubernetes platform, or DOKS for short, providing customers with a managed experience, with GPU nodes ready with NVIDIA drivers, the NVIDIA Fabric Manager, and the NVIDIA Container Toolkit.
Customers can take advantage of the NVIDIA GPU Operator and the NVIDIA Mellanox Network Operator to install a comprehensive suite of tools required for production deployment. Both GPU Droplets and the H100 GPU nodes on DOKS are examples of how we are innovating even in the infrastructure layer, making it simpler for customers.
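For illustration, a GPU Droplet can be requested through DigitalOcean's public API (`POST /v2/droplets`). The sketch below just builds such a request payload; the size and image slugs are placeholders, since the real slug names aren't given on the call:

```python
import json

# Sketch of provisioning a GPU Droplet via DigitalOcean's API
# (POST /v2/droplets). The payload shape follows the documented
# Droplet-create call; the GPU size/image slugs below are placeholders,
# not real catalog names.
def gpu_droplet_payload(name, region="nyc2",
                        size="gpu-h100-placeholder",
                        image="gpu-base-placeholder"):
    return {
        "name": name,
        "region": region,
        "size": size,    # GPU Droplet size slug (assumed name)
        "image": image,  # GPU-ready base image slug (assumed name)
        "tags": ["ai-ml"],
    }

payload = gpu_droplet_payload("inference-node-1")
body = json.dumps(payload)
# An authenticated client would POST `body` to
# https://api.digitalocean.com/v2/droplets with a bearer token.
```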
Let me give you an example. Kalian Exchange is a paytech company that specializes in providing enterprise blockchain-based solutions for bank payments, and they are leveraging DigitalOcean's H100 infrastructure to accelerate the processing of high-volume financial transactions by providing advanced computational power.
They use machine learning models to detect fraud in real time, assess risk, and ensure that payments are processed securely and quickly.
The GPU infrastructure allows them to process more transactions while maintaining low latency and improving the overall user experience for both banks and end customers.
Next, at the platform layer: this quarter, we launched the early availability of our new GenAI platform to select customers, so that we can iterate with them, shape the product, and make it easy for them to build GenAI applications that deliver real business value.
Users of this product will be able to combine their data with the power of foundational models to create personalized AI agents to integrate with their applications. In just a few minutes, customers can leverage our platform to create AI applications with foundational models, agent routing, knowledge bases, and retrieval-augmented generation, or RAG.
This is a key step towards our software-centric AI strategy, which is aimed at enabling customers to derive business value from AI in a friction-free manner.
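The retrieve-then-generate pattern behind such a platform can be sketched in miniature. The keyword-overlap retrieval and stubbed prompt assembly below are illustrative stand-ins for the vector embeddings and model calls a real RAG pipeline would use:

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the
# knowledge-base passages most relevant to a query, then hand them to a
# foundation model as grounding context. Scoring here is simple word
# overlap; production systems use embedding similarity, and the prompt
# would be sent to an actual model.
def retrieve(query, knowledge_base, top_k=2):
    q = set(query.lower().split())
    scored = [(len(q & set(doc.lower().split())), doc) for doc in knowledge_base]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, context):
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

kb = [
    "Droplets are resizable virtual machines.",
    "Spaces is an S3-compatible object storage service.",
    "Invoices are issued monthly.",
]
ctx = retrieve("what is object storage Spaces", kb)
prompt = build_prompt("What is Spaces?", ctx)
```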
An example of a customer that is already leveraging our GenAI platform is Autonoma Cloud, a plant digitization company that offers a platform for manufacturing plants and machine manufacturers.
Autonoma Cloud creates and manages large volumes of documentation and data for each of their customers' plants and individual machines, and they were looking to create AI agents that understand their users' specific context and retrieve answers and machine-specific data for their queries.
With DO's new GenAI platform, they quickly built an interactive experience with their custom data that reduces the cognitive overhead for its users.
It is very important to note that these companies are not just doing internal proofs of concept or R&D projects but are now starting to leverage our AI/ML products to build AI into their own products, delivering real business value to their customers without requiring deep expertise in AI, machine learning, data science, or data engineering.
Finally, let me talk about the third pillar of our AI strategy: the application, or agentic, layer. As I just talked about, our customers are using our GenAI platform to create their own AI-driven agents.
In addition to that, we're also innovating on this front by further simplifying cloud computing using AI and automating workflows that were previously done by humans.
One of the frequent pain points for our customers is debugging their cloud applications when something goes wrong because, one, it is a very complex set of technical tasks, and two, they typically don't have specialized site reliability engineers, or SREs, on their staff to perform these complex tasks.
So we set out to mitigate this pain point for our customers using GenAI by building a new AI agent to perform some of these tasks that are typically done by human SREs.
We're using this AI SRE agent both internally on our systems and externally by integrating it with our cloud-based product. Let me explain. Internally, we are using the AI SRE agent to help our human SREs troubleshoot ongoing technical incidents in the DO cloud platform.
Based on our initial internal data, the AI SRE agent is reducing the time it takes to identify root causes by almost 35%, by leveraging AI to quickly process an enormous amount of log data from disparate systems to pinpoint root causes and make next-step decisions, including recommendations to fix the underlying problems.
Externally, we integrated this AI SRE agent into our cloud-based product, which hosts hundreds of thousands of mission-critical websites. Today, when issues happen on customers' servers and applications, they have to work with support engineers to debug the root cause and then apply a fix. This is true not just for the DigitalOcean platform but across all managed hosting platforms.
This can be a time-consuming job, during which their business and even their websites can be affected, if not offline.
Our new AI SRE agent jumps into action upon detection of any performance degradation due to common issues like aggressive bot crawlers, denial-of-service attacks, and so forth, investigating, gathering insights, and providing real-time recommendations on how to fix these issues, thereby reducing the time to resolution significantly.
Our testing results are very encouraging and we have just started working with a few customers in early availability mode.
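One check an agent of this kind might run, flagging clients whose request volume resembles an aggressive crawler and suggesting a mitigation, can be sketched as follows. The log format and threshold are assumptions for illustration, not details from the call:

```python
from collections import Counter

# Illustrative sketch of a single SRE-agent check: flag clients whose
# request volume in a sampling window looks like an aggressive crawler
# or denial-of-service source, and emit a remediation suggestion.
def flag_aggressive_clients(requests, threshold=100):
    """`requests` is a list of (client_ip, path) tuples from one window."""
    counts = Counter(ip for ip, _path in requests)
    return {ip: n for ip, n in counts.items() if n > threshold}

def recommend(flagged):
    return [f"rate-limit or block {ip} ({n} requests in window)"
            for ip, n in sorted(flagged.items())]

log = [("10.0.0.5", "/product")] * 150 + [("10.0.0.9", "/home")] * 20
flagged = flag_aggressive_clients(log)
# Only the high-volume client is flagged for remediation.
assert list(flagged) == ["10.0.0.5"]
```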
Rounding out our AI strategy, we opened up a new front door by launching a strategic partnership with Hugging Face in Q3. Hugging Face is the leading open-source and open-science platform that helps users build, deploy, and train machine learning models.
As a result of this partnership, DigitalOcean now offers model inferencing through one-click deployable models on GPU Droplets, allowing users to quickly and easily deploy the most popular third-party models with the simplicity of GPU Droplets and optimal performance accelerated by NVIDIA H100 Tensor Core GPUs.
This offering simplifies the deployment complexity of the most popular open-source AI/ML models, as DigitalOcean has natively integrated and optimized these models for GPU Droplets, enabling fast deployment and superior performance.
The Hugging Face partnership will make it easier for the more than 1.2 million Hugging Face users to discover and use the DigitalOcean platform.
In Q3, we also announced a new partnership with Netlify, a leading web development platform, to enable customers to seamlessly connect their Netlify applications to DigitalOcean's Managed MongoDB, offering developers all the right tools to build and scale their applications without the complexities of managing infrastructure.
These announcements, in addition to the various other partnerships we already have in flight, highlight our efforts to augment our durable product led growth motion with additional channels including new front doors through partnerships with leading players in our ecosystem that will also help shape and improve our product offerings.
I'm also excited to highlight the material progress we are making with our renewed engagement with the developer community. In October, we hosted the 11th edition of Hacktoberfest, which has now evolved from an internal hackathon event at DigitalOcean into one of the largest and premier open-source community events.
This year, over 65,000 developers from 172 countries participated in more than 115 community-run events and contributed to 15,000 open-source projects. Beyond Hacktoberfest, we also hosted more than 10 DigitalOcean meet-ups for developers in the AI/ML community and participated in a number of industry conferences. This broad-based community engagement effort reinforces DigitalOcean's ongoing commitment to our developer ecosystem.
In closing, I am encouraged by the progress on product innovation and customer engagement, particularly as it is helping our Builder and Scaler customers continue to grow on our platform as their businesses expand.
We're also making great strides towards our software-centric AI vision by rapidly shipping products in each of the three layers: infrastructure, platform, and applications.
We're starting to see green shoots from these investments in the form of customer wins, including cloud migrations from the hyperscalers, multi-year committed contracts, and real-world deployments of AI using the DO AI platform.
We will continue to focus on our largest and fastest growing customer cohorts as we seek to accelerate growth in the quarters to come.
Before I turn the call over to Matt, I'm very excited to share that we will be hosting an Investor Day in New York City, currently targeted for late March or early calendar Q2 2025, at which we will share more on our long-term strategy, including more detail on our progress and metrics, as well as a view of our long-term financial outlook.
I will now hand the call over to Matt Steinfort, our CFO, who will provide some additional details on our financial results and our outlook for Q4 2024. Thank you.