Highlights:

  • Omniverse Cloud on Azure gives users access to powerful Nvidia OVX servers built specifically for robotics simulation, accelerating and simplifying robotics development workflows.
  • The Nvidia Jetson Orin family of modules supports a wide range of AI robotics, machine, and automation workloads at the network edge.

Nvidia Corp. is extending its robotics tools and the artificial intelligence that powers them. Developers and engineers use Nvidia's platforms to train and deploy autonomous machines in factories, offices, and cities.

At GTC 2023, its virtual developer conference, Nvidia announced that Omniverse Cloud will be hosted on Microsoft Azure, broadening access to Isaac Sim, the company's virtual training environment for creating and managing AI-based robots. The company also announced full production availability of its Jetson Orin-based modules, powerful edge AI computing platforms that serve, in effect, as robot "brains."

Jensen Huang, Nvidia Founder and Chief Executive, said, “The world’s largest industries make physical things, but they want to build them digitally. Omniverse is a platform for industrial digitization that bridges digital and physical.”

Isaac Sim is powered by Nvidia Omniverse, a simulation platform that builds digital twins of the real world so robot makers can model the environments and conditions in which their machines will operate. Building and managing robots in the real world requires ingesting and assembling enormous datasets from scratch, which can be very taxing; simulation lets teams sidestep much of that work.
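
For teams exploring Isaac Sim, a scene is typically stood up through its Python API. Below is a minimal, hypothetical sketch: the API names follow recent omni.isaac.core releases and may vary by version, and the cube is a stand-in for whatever assets a real training environment would load.

    # Minimal Isaac Sim scene sketch. API names follow recent
    # omni.isaac.core releases and may differ in your installed version.
    from omni.isaac.kit import SimulationApp

    # SimulationApp must be created before any other omni.isaac import.
    simulation_app = SimulationApp({"headless": True})

    import numpy as np
    from omni.isaac.core import World
    from omni.isaac.core.objects import DynamicCuboid

    world = World(stage_units_in_meters=1.0)
    world.scene.add_default_ground_plane()

    # A rigid-body cube stands in for whatever assets a real scene would load.
    world.scene.add(
        DynamicCuboid(
            prim_path="/World/cube",
            name="cube",
            position=np.array([0.0, 0.0, 1.0]),
            size=0.5,
        )
    )

    world.reset()
    for _ in range(100):  # step the physics simulation
        world.step(render=False)

    simulation_app.close()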

Omniverse Cloud on Azure gives users access to powerful Nvidia OVX servers built specifically for robotics simulation, accelerating and simplifying robotics development workflows. It also includes a suite of tools that make it easier for teams to collaborate on training environments and to simulate, validate, and deploy AI.

Bridging the gap between the physical and the digital, Nvidia also unveiled the Jetson Orin Nano Developer Kit, the latest in its Jetson Orin lineup of edge AI computing systems. The kit is intended for developers building AI-powered robots, intelligent vision systems, smart drones, and more, and it is significantly more powerful than the previous Jetson Nano.

The Nvidia Jetson Orin family of modules supports a wide range of AI robotics, machine, and automation workloads at the network edge. Built on the Nvidia Ampere architecture, the modules support a broad range of AI models for all applications. The entry-level Jetson Orin Nano delivers 40 trillion AI operations per second, while the Jetson AGX Orin delivers 275 trillion operations per second for complex workloads such as autonomous vehicles.
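
For a sense of what development on these modules looks like, the sketch below is a simple sanity check that an Orin's GPU is visible from Python. It assumes a JetPack-based system image with Nvidia's PyTorch build installed, which is one common setup rather than the only one.

    # Sanity check that the Orin's integrated GPU is visible to PyTorch.
    # Assumes a JetPack-based image with an Nvidia-built PyTorch wheel.
    import torch

    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
        # Run a small matrix multiply on the GPU to confirm compute works.
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x
        torch.cuda.synchronize()
        print("Matmul OK:", tuple(y.shape))
    else:
        print("CUDA device not found; check the JetPack/PyTorch install.")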

More than 6,000 companies and 1 million developers, including Teradyne Inc., Cisco Systems Inc., TK Elevator, Canon Inc., Hyundai Robotics Co. Ltd., Amazon Web Services, and John Deere, are using Jetson, according to Nvidia. Hyundai Doosan Infracore Co. Ltd., Verdant Robotics, and the drone companies Skydio Inc. and Zipline Inc. are among the firms adopting the new Orin-based modules.

Alongside updates to the DeepStream software development kit for building computer vision apps, Nvidia announced expansions to the Metropolis ecosystem and the technology supporting computer vision AI. It also opened early access to Metropolis Microservices.

The TAO Toolkit is a low-code AI development kit that helps developers quickly create AI models for any application and any device. With TAO 5.0, Nvidia is introducing pretrained models for vision transformers, deployment to any platform through ONNX export, automatic tuning with AutoML, and AI-assisted data classification and annotation, among other new features.
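
ONNX export is what makes TAO models portable: once exported, a model can be served by any ONNX-compatible runtime. The sketch below uses the open-source onnxruntime package; the file name, input shape, and argmax readout are illustrative placeholders rather than details of any specific TAO model.

    # Serve a TAO-exported ONNX model with onnxruntime. The file name,
    # input shape, and classification readout are placeholders.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Inspect the graph for the real input name and shape.
    inp = session.get_inputs()[0]
    print(inp.name, inp.shape)

    # Dummy NCHW batch standing in for a preprocessed camera frame.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {inp.name: batch})
    print("Top class:", int(np.argmax(outputs[0])))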

Developers can browse data in TAO quickly with little to no coding, and it will rapidly categorize what it observes. Using pretrained models, it can train on and categorize visuals for tasks such as people detection, vehicle classification, pose estimation, and object detection. Before models are ready for production integration and deployment, they can be pruned and refined through iteration until they perform as engineers require.

The Nvidia DeepStream software development kit is also taking things to the next level, with updates that help developers create next-generation vision AI. DeepStream builds on the pipeline-based, open-source GStreamer framework to quickly turn video streams into computer vision AI. The latest version goes further, letting programmers build their own low-code, graph-based AI vision pipelines that are free of the streaming framework.
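
Classic DeepStream pipelines are still assembled from GStreamer elements. The sketch below drives one from Python: the plugin names (nvstreammux, nvinfer, nvdsosd) are standard DeepStream elements, while the video file and inference config paths are placeholders you would replace with your own.

    # Drive a DeepStream pipeline from Python via GStreamer. The plugin
    # names are standard DeepStream elements; the media file and nvinfer
    # config paths are placeholders.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # filesrc -> decode -> batch -> infer -> draw overlays -> sink
    pipeline = Gst.parse_launch(
        "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
        "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
        "nvinfer config-file-path=config_infer_primary.txt ! "
        "nvvideoconvert ! nvdsosd ! fakesink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    # Block until the stream ends or an error is posted on the bus.
    bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
    )
    pipeline.set_state(Gst.State.NULL)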

DeepStream can now ingest more than just image data. With sensor fusion, it can also take in data from additional instruments such as lidar and radar, along with environmental and process data. This opens the door to a whole new range of computer vision-based AI applications across industries, such as quality control and autonomous machines that must respond to strict scheduling requirements or other changes in their environment.

Finally, vision AI can run into trouble when numerous cameras are spread over a large area, but Nvidia has a solution for that: Metropolis Microservices, a cloud-native microservices reference framework for vision AI apps. It enables developers to quickly extend perception across a large area using multiple cameras and fuse their views into a single understanding for the AI to use, making it possible to build multicamera tracking applications.

This has a wide range of uses: factory floors where many cameras watch products move along numerous conveyors, stadiums managing foot traffic as people move through concourses, retail stores with shelf cameras for inventory control, and smart cities trying to better understand traffic.