Nvidia has struck a deal to acquire Run:ai, a Tel Aviv-based company that makes it easier for developers and operations teams to manage and optimize their AI hardware infrastructure, for an undisclosed sum.
However, a source close to the matter put the price tag at $700 million.
Earlier reports had suggested the companies were in “advanced negotiations” that could see Nvidia pay upwards of $1 billion for Run:ai. The negotiations ultimately concluded without a hitch, apart from the apparent change in price.
Nvidia announced its commitment to maintaining and enhancing Run:ai’s products within its ecosystem.
Under this agreement, Nvidia pledges to uphold Run:ai’s existing business model and contribute to its product development roadmap.
This collaboration falls under Nvidia’s DGX Cloud AI platform, designed to provide enterprise customers with access to essential computing infrastructure and software for training various AI models.
Customers using Nvidia’s DGX servers and workstations, as well as its DGX Cloud service, will now have access to the capabilities offered by Run:ai.
This integration is particularly beneficial for users engaged in generative AI projects that span across multiple data center locations. With Nvidia’s support, Run:ai’s solutions will be more readily available and optimized for a wide range of AI workloads.
“Run:ai has been a close collaborator with Nvidia since 2020 and we share a passion for helping our customers make the most of their infrastructure,” Omri Geller, Run:ai’s CEO, said in a statement.
“We’re thrilled to join Nvidia and look forward to continuing our journey together.”
Founders’ Vision and Run:ai’s Rapid Rise
Run:ai was founded by Geller and Ronen Dar, who studied together at Tel Aviv University under professor Meir Feder; Feder later joined them as the company’s third co-founder.
Their shared vision was to create a platform capable of optimizing AI models by distributing their computations across various hardware resources, whether located on-premises, in public cloud environments, or at the edge.
Although Run:ai faces limited direct competition in its niche, the concept of dynamically allocating hardware resources for AI workloads is gaining traction among other companies.
One such competitor is Grid.ai, which provides software enabling data scientists to train AI models concurrently across multiple GPUs, processors, and other hardware components.
Despite its relatively short existence, Run:ai quickly amassed a significant customer base of Fortune 500 companies, a feat that caught the attention of venture capital investors.
Before its acquisition, the company had secured an impressive $118 million in funding from notable backers such as Insight Partners, Tiger Global, S Capital, and TLV Partners.
Alexis Bjorlin, Nvidia’s Vice President of DGX Cloud, highlighted in a blog post the increasing complexity of customer AI deployments and the growing demand among businesses to optimize their utilization of AI computing resources.
According to a recent survey conducted by ClearML, a company specializing in machine learning model management, organizations adopting AI technologies in 2024 are facing significant challenges.
The primary obstacles reported include limitations in computing resources due to availability and cost, followed closely by infrastructure-related issues.
“Managing and orchestrating generative AI, recommender systems, search engines, and other workloads requires sophisticated scheduling to optimize performance at the system level and on the underlying infrastructure.
“Nvidia’s accelerated computing platform and Run:ai’s platform will continue to support a broad ecosystem of third-party solutions, giving customers choice and flexibility. Together with Run:ai, Nvidia will enable customers to have a single fabric that accesses GPU solutions anywhere.”
Alexis Bjorlin
Run:ai is among Nvidia’s biggest acquisitions since its purchase of Mellanox for $6.9 billion in March 2019.