Highlights:

  • About 12 different Nvidia Corp. graphics processing units are accessible through CoreWeave’s public cloud.
  • A Kubernetes foundation powers CoreWeave’s cloud. It uses Knative, a Kubernetes extension originally created by Google LLC, to automatically adjust hardware configurations as demand changes.

CoreWeave Inc., which develops a cloud platform optimized for graphics card workloads, has raised USD 1.1 billion in its latest funding round.

The company is reportedly valued at USD 19 billion following the Series C investment. Its previous valuation of USD 7 billion was set by a USD 642 million secondary share sale in December. Fidelity Management, which led that deal, also participated in the newly announced round alongside Magnetar, Coatue, Lykos Global Management and Altimeter Capital.

About 12 different Nvidia Corp. graphics processing units are accessible through CoreWeave’s public cloud. The platform focuses on two primary use cases: graphics rendering and artificial intelligence. According to CoreWeave, it enables users to run these workloads more effectively and economically than the well-known public clouds.

The company offers GPUs designed specifically for AI tasks, including the H100. Other Nvidia chips, such as the A40, which is primarily aimed at computer graphics professionals, are also available in its cloud.

Unlike their AI-optimized counterparts, the A40 and the other rendering-optimized GPUs CoreWeave offers include RT Cores. These circuits are designed to accelerate ray tracing, a rendering method that mimics lighting effects such as shadows and reflections. It works by projecting virtual light rays onto an object and tracking how those rays bounce, which reveals the most realistic-looking color for each pixel.
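As a rough illustration of that idea, the toy NumPy sketch below traces a single ray from a camera, tests whether it hits a sphere, and shades the hit point according to how the surface faces a light source. It is purely conceptual: the scene, camera and light are made-up values, and the only point is to show the kind of ray-geometry intersection work that RT Cores accelerate in hardware.

```python
import numpy as np

def shade_pixel(ray_origin, ray_dir, sphere_center, sphere_radius, light_dir):
    """Trace one ray against one sphere and return a diffuse brightness in [0, 1]."""
    oc = ray_origin - sphere_center
    b = 2.0 * np.dot(ray_dir, oc)                 # quadratic terms of the ray-sphere
    c = np.dot(oc, oc) - sphere_radius ** 2       # intersection equation (ray_dir is unit length)
    disc = b * b - 4.0 * c
    if disc < 0:
        return 0.0                                # the ray misses the sphere: background
    t = (-b - np.sqrt(disc)) / 2.0                # distance to the nearest hit point
    hit = ray_origin + t * ray_dir
    normal = (hit - sphere_center) / sphere_radius
    return max(np.dot(normal, -light_dir), 0.0)   # Lambertian (diffuse) shading

# Made-up scene: camera at the origin, a unit sphere five units ahead,
# and a light shining forward and downward from behind the camera.
brightness = shade_pixel(
    np.array([0.0, 0.0, 0.0]),
    np.array([0.0, 0.0, 1.0]),
    np.array([0.0, 0.0, 5.0]),
    1.0,
    np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0),
)
print(f"pixel brightness: {brightness:.2f}")  # roughly 0.71 for this setup
```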

A Kubernetes foundation powers CoreWeave’s cloud. It uses Knative, a Kubernetes extension originally created by Google LLC, to automatically adjust the hardware configuration in customers’ environments in response to changes in application demand.

One of Knative’s distinguishing features is its so-called scale-to-zero capability. Companies typically have to leave some components of their GPU clusters running even when they are not in use, rather than shutting them down entirely. Those idle components add cost because they continue to consume hardware resources. Thanks to Knative, CoreWeave users can switch off every GPU in a cluster when it isn’t needed.
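To make the idea concrete, here is a minimal sketch of deploying a GPU-backed Knative Service through the Kubernetes Python client with scale-to-zero enabled. The service name, namespace and container image are hypothetical, the cluster is assumed to already run Knative Serving and Nvidia’s device plugin, and the exact autoscaling annotation keys can vary between Knative releases.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at a Knative-enabled cluster

# A Knative Service whose revisions may scale all the way down to zero replicas.
# With min-scale set to "0", Knative removes every pod (and releases its GPU)
# when no requests are arriving, then spins pods back up as traffic returns.
service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "gpu-inference", "namespace": "default"},  # hypothetical names
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "autoscaling.knative.dev/min-scale": "0",  # allow scale-to-zero
                    "autoscaling.knative.dev/max-scale": "8",  # cap the number of GPU replicas
                },
            },
            "spec": {
                "containers": [{
                    "image": "registry.example.com/gpu-inference:latest",  # hypothetical image
                    "resources": {"limits": {"nvidia.com/gpu": "1"}},      # one GPU per replica
                }],
            },
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=service,
)
```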

Building scale-to-zero GPU clusters has historically been challenging because it can take a long time to reactivate graphics cards after they are shut off. That protracted startup process causes delays for users and raises hardware expenses. To solve the problem, CoreWeave created a software tool known as Tensorizer.

Reloading the AI model that powers a GPU cluster takes time because the graphics cards have to be filled with data again each time. The most sophisticated AI models, which can span multiple terabytes, take especially long to load. According to CoreWeave, Tensorizer speeds up the process by loading AI models into GPUs incrementally rather than all at once, as other solutions do.
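The sketch below illustrates the general idea of incremental loading in plain PyTorch rather than with Tensorizer itself: weights are copied to the GPU tensor by tensor as they are read, instead of first materializing the entire checkpoint in host memory. The checkpoint path and model class are placeholders, and the memory-mapped torch.load option assumes a reasonably recent PyTorch release.

```python
import torch

def stream_weights_to_gpu(model: torch.nn.Module, checkpoint_path: str, device: str = "cuda"):
    """Copy checkpoint weights into an already-allocated GPU model, one tensor at a time."""
    # mmap=True keeps the checkpoint on disk and pages tensors in as they are
    # touched, so host memory stays roughly flat even for very large files.
    state = torch.load(checkpoint_path, map_location="cpu", mmap=True)
    params = dict(model.named_parameters())
    with torch.no_grad():
        for name, tensor in state.items():
            if name in params:
                # Each tensor goes straight into the GPU-resident parameter,
                # so loading begins immediately instead of after one bulk read.
                params[name].copy_(tensor.to(device, non_blocking=True))
    return model

# Usage sketch (model class and checkpoint path are hypothetical):
# model = MyModel().to("cuda")
# stream_weights_to_gpu(model, "/checkpoints/my_model.pt")
```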

The company has built several other performance optimizations into the platform as well.

CoreWeave uses GPUDirect RDMA, a network acceleration technology created by Nvidia, to manage data transfers between graphics cards. Network traffic destined for a graphics card usually has to pass through the host server’s central processing unit and operating system. GPUDirect RDMA bypasses those stops, allowing data to reach GPUs more quickly.
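For a sense of where this shows up in practice, the sketch below runs a simple all-reduce across the GPUs in one server using PyTorch’s NCCL backend. NCCL can take advantage of GPUDirect RDMA when the network adapters, drivers and fabric support it, which is assumed here rather than demonstrated; the addresses and tensor size are arbitrary.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int):
    # Rendezvous settings for a single-node job; a multi-node job would point
    # MASTER_ADDR at the head node instead.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    payload = torch.ones(1 << 20, device="cuda")  # ~4 MB of data on this GPU
    # NCCL moves this data between GPUs (and NICs, on multi-node jobs) without
    # staging it in host memory when GPUDirect is available.
    dist.all_reduce(payload)
    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    if n_gpus > 1:
        mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)
```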

The company installs its graphics cards in bare-metal servers. These machines run without a hypervisor, which avoids the overhead associated with virtualization and frees up additional hardware resources for customer workloads.

Its infrastructure is currently hosted in 14 data centers across the United States, most of which were built in the past two years. The company reportedly plans to use the capital from its newly disclosed investment round to build additional cloud facilities in Europe. Over the long term, it intends to raise more money to support its expansion initiatives and extend its data center network into other regions.