Highlights

  • Nvidia Fleet Command now includes features that streamline the management of edge AI deployments around the world.
  • Nvidia Fleet Command is a managed platform for Kubernetes-based container orchestration that simplifies building and deploying AI applications across countless distributed locations.

Nvidia already enjoys a global reputation, and the No. 1 market share, for producing top-tier graphics processing units (GPUs) that render images, video, and 2D and 3D animation. It recently used that success as a springboard into the IT services business, a move that involves producing no new hardware of its own.

One year after the company introduced Nvidia Fleet Command, a cloud-based service for deploying, managing, and scaling AI applications at the edge, it has unveiled new features that address the distance between these distributed servers by improving the management of edge AI installations worldwide.

Instead of sending data to a centralized cloud or data center for processing, edge computing is a distributed model that processes data close to where it is generated. This shortens the round trips data must make, accelerating analysis. Fleet Command controls such deployments through its cloud interface.

“In the world of AI, distance is not the friend of many IT managers,” Nvidia product marketing manager Troy Estes posted. “Unlike data centers, where resources and personnel are consolidated, enterprises deploying AI applications at the edge need to consider how to manage the extreme nature of edge environments.”

Reducing latency in remote deployments

The network links that bridge data centers or clouds and remote AI deployments are often difficult to make fast enough for production use. Given the massive volume of data that AI applications consume, a highly performant network and careful data management are needed for these deployments to meet service-level agreements.

“You can run AI in the cloud,” said Nvidia senior manager of AI video Amanda Saunders. “But typically, the latency that it takes to send stuff back and forth – well, a lot of these locations don’t have strong network connections; they may seem to be connected, but they’re not always connected. Fleet Command allows you to deploy those applications to the edge but still maintain that control over them so that you’re able to remotely access not just the system but the actual application itself, so you can see everything that’s going on.”

Given the scale of some edge AI implementations, firms may have thousands of autonomous sites that IT teams must maintain, often in exceedingly remote places such as oil rigs, weather stations, dispersed retail outlets, and industrial buildings.

Saunders said Nvidia Fleet Command provides a managed platform for Kubernetes-based container orchestration that simplifies building and deploying AI applications across countless distributed locations, all from a single cloud-based console.
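To make the Kubernetes angle concrete, the sketch below builds a minimal Deployment manifest of the kind such an orchestration platform applies to each edge site. The app name, container image, and GPU request are hypothetical placeholders, not Fleet Command specifics; the `nvidia.com/gpu` resource is how Kubernetes exposes GPUs once the NVIDIA device plugin is installed.

```python
# Minimal sketch of a Kubernetes Deployment manifest for an edge AI app.
# The app name, image, and GPU count are illustrative, not Fleet Command code.

def make_edge_deployment(name: str, image: str, gpus: int = 1, replicas: int = 1) -> dict:
    """Return an apps/v1 Deployment manifest requesting NVIDIA GPUs for a container."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        # The NVIDIA device plugin exposes GPUs as a schedulable resource.
                        "resources": {"limits": {"nvidia.com/gpu": gpus}},
                    }]
                },
            },
        },
    }

manifest = make_edge_deployment("video-analytics", "example.com/edge-ai:1.0", gpus=1)
print(manifest["spec"]["template"]["spec"]["containers"][0]["resources"])
```

In practice a console like Fleet Command generates and applies manifests of this shape across every site, so administrators never hand-edit them per location.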

Optimizing connections also part of the task

Deployment is just the first step toward managing AI applications at the edge. According to Estes, optimizing these apps is a continual process that includes deploying new applications, patching existing ones, and restarting edge systems. These workflows will operate in a controlled environment thanks to the new Fleet Command features, which include:

  • Advanced remote management: Fleet Command’s remote management now adds access controls and timed sessions, removing the vulnerabilities associated with conventional VPN connections. Administrators can securely monitor activity and troubleshoot problems at remote edge locations from their offices. Because edge environments are highly dynamic, the administrators in charge of them must keep pace with rapid changes and minimize deployment downtime, which makes remote management an essential component of any edge AI deployment.
  • Multi-instance GPU (MIG) provisioning: Fleet Command now supports MIG, allowing administrators to partition GPUs and assign applications from the Fleet Command graphical user interface. MIG lets businesses right-size their deployments and get the most out of their edge hardware by running multiple AI applications on the same GPU.
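As an illustration of the timed-session idea (a sketch of the general technique, not Fleet Command’s implementation), access can be granted for a fixed window and checked on every action, so no standing VPN tunnel is left open:

```python
# Illustrative sketch of a timed remote-management session: access is granted
# for a fixed window and re-checked on every action. Not Fleet Command code.
import time

class TimedSession:
    def __init__(self, user: str, ttl_seconds: float):
        self.user = user
        self.expires_at = time.monotonic() + ttl_seconds

    def is_active(self) -> bool:
        # Once the window lapses, the session is unusable without re-authorizing.
        return time.monotonic() < self.expires_at

session = TimedSession("admin@example.com", ttl_seconds=0.05)
print(session.is_active())  # True while the window is open
time.sleep(0.06)
print(session.is_active())  # False after expiry
```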
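To make the right-sizing idea concrete, here is a small, hypothetical sketch (not how Fleet Command assigns instances) of packing AI apps onto the MIG slices of a single A100 40GB GPU. The profile names are real MIG profiles, where e.g. `1g.5gb` means one compute slice and 5 GB of memory; the apps and the first-fit policy are illustrative.

```python
# Hypothetical sketch of right-sizing AI apps onto MIG slices of one A100 40GB.
# Profile names are real MIG profiles; the apps and first-fit policy are
# illustrative, not Fleet Command's assignment logic.

# Compute-slice cost of each MIG profile (an A100 has 7 compute slices).
MIG_SLICES = {"1g.5gb": 1, "2g.10gb": 2, "3g.20gb": 3, "4g.20gb": 4, "7g.40gb": 7}
TOTAL_SLICES = 7

def pack_apps(requests: dict) -> dict:
    """First-fit: grant each app its requested MIG profile while slices remain."""
    placed, used = {}, 0
    for app, profile in requests.items():
        cost = MIG_SLICES[profile]
        if used + cost <= TOTAL_SLICES:
            placed[app] = profile
            used += cost
    return placed

apps = {"detector": "3g.20gb", "classifier": "2g.10gb",
        "logger": "1g.5gb", "trainer": "4g.20gb"}
# trainer doesn't fit: 3 + 2 + 1 = 6 slices used, and 4 more would exceed 7
print(pack_apps(apps))
```

The point of the sketch is that several small workloads can share one physical GPU, while a workload too large for the remaining slices is held back rather than oversubscribing the hardware.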

According to Astute Analytics, the market for edge AI software management will grow to USD 8.05 billion by 2027. Nvidia competes in this market alongside companies including Juniper Networks, VMware, Cloudera, IBM, and Dell Technologies.