
[Edge AI made possible by NVIDIA x NXP]

Episode 1: Process overview

Episode 2: Creating a dataset (NVIDIA Omniverse)

Episode 3: AI model training (NVIDIA TAO Toolkit)

Episode 4: Deployment and inference on edge devices (i.MX 8M Plus EVK)

In the previous episode, we gave an overview of the entire edge AI process, from model training to deployment, using NVIDIA Omniverse™ and the NVIDIA TAO Toolkit.


In this episode, Part 2, we walk through the detailed steps we took to create synthetic data using NVIDIA Omniverse.

With the NVIDIA Omniverse platform, you can automatically generate photorealistic dataset images in a virtual space, without capturing photos at the actual site.

In addition, the TAO Toolkit lets you iterate quickly through the AI development cycle, from transfer learning to evaluating the inference model, so it can significantly shorten the time needed to generate additional datasets and to tune the AI model to the desired performance. We hope this article serves as a useful reference.

What is NVIDIA Omniverse Replicator?

NVIDIA Omniverse Replicator is a synthetic data generation framework that enables the generation of datasets used for training AI models in virtual spaces built on the Omniverse platform.

For more information, see the NVIDIA Omniverse Replicator™ article.

Synthetic data creation procedure

1. Prepare 3D assets

First, prepare a 3D asset for the doll.

We downloaded free 3D assets from CGTrader.

*The processing of the above assets is described in the appendix. Please refer to it if you are interested.

2. Setting labels

Set labels for the objects you want to detect.

1. Import the doll asset into NVIDIA Isaac Sim 4.2.

*Isaac Sim is an Omniverse-based application for robot simulation. For more information, please see NVIDIA Isaac™.

2. Set the labels.

2-1. Select the doll asset and, under Apply semantic data on selected objects in the Semantics Schema Editor tab, enter the following:

    New Semantic Type: class

    New Semantic Data: figure

2-2. Click Add Entry On All Selected Prims.

To confirm that the label has been set, enable BoundingBox2DTight in the sensor output settings and click Show Window.

If a 2D bounding box is displayed as shown below, the label setup is complete.

3. Script execution

Use a Python script to generate the RGB images and annotation data for training.

Here, we randomly place the doll assets and save the images captured by the camera along with the bounding box annotations.

This allows us to generate large amounts of automatically labeled synthetic data.

We created the script below to capture 500 images and generate the corresponding annotation data.

Script Explanation

Camera placement

- Since the real camera is intended to shoot the dolls from above, the camera is placed at random positions above the dolls.

- So that the doll assets stay in frame, the camera is oriented to always face the origin.
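Replicator can express this look-at behavior directly (as we recall, `rep.modify.pose` accepts a `look_at` argument), but the underlying geometry is simple. The following is a minimal, framework-independent sketch of the camera sampling described above; the function names and the sampling ranges are our own illustrative choices, not taken from the original script.

```python
import math
import random

def sample_camera_above(height_range=(2.0, 4.0), radius=1.5):
    """Sample a random camera position above the scene origin.

    The XY offset is drawn from a square of half-width `radius`,
    and the height (Z) from `height_range`, so the camera always
    ends up above the dolls. (Illustrative ranges, not the real ones.)
    """
    x = random.uniform(-radius, radius)
    y = random.uniform(-radius, radius)
    z = random.uniform(*height_range)
    return (x, y, z)

def look_at_origin(position):
    """Return the unit view direction from the camera toward the origin."""
    px, py, pz = position
    norm = math.sqrt(px * px + py * py + pz * pz)
    return (-px / norm, -py / norm, -pz / norm)
```

Because the camera is always above the scene, the resulting view direction always has a negative Z component, i.e. the camera looks down at the dolls.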

 

Asset Placement

- The labeled doll assets are stored in the variable figure and manipulated through it.

- The orientation (rotation around the z-axis) and position of the doll assets are randomized.

* The doll assets are placed so that they do not collide with each other when randomly positioned.
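One straightforward way to implement the no-collision constraint, and the approach our script takes in spirit, is rejection sampling: draw a random position and keep it only if it is far enough from everything already placed. The sketch below is a standalone illustration; the area size, minimum separation, and function name are hypothetical values for demonstration.

```python
import math
import random

def place_without_collisions(num_assets, area=2.0, min_dist=0.4, max_tries=1000):
    """Randomly place assets in a square of side `area` on the XY plane,
    rejecting any position closer than `min_dist` to an existing asset.

    Returns a list of (x, y, yaw) tuples; yaw is a random z-axis
    rotation in degrees, as described in the article.
    """
    placed = []
    for _ in range(num_assets):
        for _ in range(max_tries):
            x = random.uniform(-area / 2, area / 2)
            y = random.uniform(-area / 2, area / 2)
            yaw = random.uniform(0.0, 360.0)
            if all(math.hypot(x - px, y - py) >= min_dist for px, py, _ in placed):
                placed.append((x, y, yaw))
                break
        else:
            raise RuntimeError("could not place asset without collision")
    return placed
```

In a real Replicator script the resulting positions and yaw angles would be applied to the prims each frame; here the point is only the rejection loop that keeps the dolls from overlapping.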


Measures to improve model accuracy

- Distractors (assets such as empty cans and scissors) are placed at random.

* The doll assets and distractors are positioned so that they do not collide with each other.

- The background image is switched between several variations.

*This approach is based on the article below.

 How to Train Autonomous Mobile Robots to Detect Warehouse Pallet Jacks Using Synthetic Data | NVIDIA Technical Blog
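In Replicator, these randomizations would typically be registered as randomizer functions triggered on each frame. As a framework-independent illustration of the idea, the sketch below builds a per-frame "plan": it picks one background from a list and places distractors that keep clear of the dolls and of each other. The background file names, counts, and distances are hypothetical, not from the original script.

```python
import random

# Hypothetical background image paths for illustration only.
BACKGROUNDS = ["bg_factory.png", "bg_desk.png", "bg_floor.png"]

def make_frame_plan(doll_positions, num_distractors=3, area=2.0, min_dist=0.4):
    """Pick a random background and distractor XY positions that keep
    at least `min_dist` clearance from the dolls and from each other."""
    occupied = list(doll_positions)
    distractors = []
    while len(distractors) < num_distractors:
        x = random.uniform(-area / 2, area / 2)
        y = random.uniform(-area / 2, area / 2)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2 for px, py in occupied):
            distractors.append((x, y))
            occupied.append((x, y))
    return {"background": random.choice(BACKGROUNDS), "distractors": distractors}
```

Note that this simple loop assumes the area is large enough to fit all the assets; a production script would cap the number of retries, as the placement sketch above does.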

To run the script: click Window > Script Editor, copy and paste the script (replicator_py) into the Script Editor, and then click Replicator > Start to begin image capture.

You can find the RGB images and the bounding box annotation data (.npy and .json files) in the omni.replicator_out/figure folder. This completes the creation of the synthetic data.
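In our runs, the 2D tight bounding boxes were saved as a structured NumPy array (with fields such as semanticId and x_min/y_min/x_max/y_max) alongside a JSON file mapping semantic IDs to their labels; the exact schema may differ between Replicator versions, so treat the field names below as an assumption to verify against your own output. A sketch of loading such a file pair (file names are illustrative):

```python
import json
import numpy as np

# Assumed field names, based on the output we observed; verify against
# your own Replicator version before relying on them.
BBOX_FIELDS = ("semanticId", "x_min", "y_min", "x_max", "y_max")

def load_bboxes(npy_path, labels_json_path):
    """Return a list of (label, (x_min, y_min, x_max, y_max)) tuples
    from one frame's annotation files."""
    boxes = np.load(npy_path)
    with open(labels_json_path) as f:
        id_to_label = json.load(f)  # e.g. {"0": {"class": "figure"}}
    result = []
    for row in boxes:
        label = id_to_label[str(row["semanticId"])]["class"]
        result.append((label, tuple(int(row[k]) for k in BBOX_FIELDS[1:])))
    return result
```

A loader like this is handy for spot-checking the generated annotations before handing the dataset to the TAO Toolkit.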

Example of a generated synthetic data image

Appendix

Processing the doll assets

For reference, here is how we edited the assets downloaded from CGTrader.

This can also be done through GUI operations, but since that is somewhat tedious, we compiled the steps into scripts.

Please run the following scripts in the Script Editor in order.

1. Double-click the .obj file in the downloaded folder.

2. Create a light, delete unnecessary assets, and adjust positions and sizes.

3. Add and rename Xform prims.

4. Apply colors, paste textures, and move material prims.

5. Copy the doll asset.

This completes the asset editing process.

You can then proceed from [2. Setting labels] in the synthetic data creation procedure.

Episode 3 explains the AI model training procedure

In this article, we introduced the steps to actually create a dataset in the 3D virtual space of NVIDIA Omniverse.

We have introduced some example program code here, but by referring to the information published on vendor websites, you can write scripts to realize a wide variety of capture scenes.

In the next episode, Part 3, we will explain the training procedure for an AI model using the synthetic dataset we created and the NVIDIA TAO Toolkit.

If you are considering Omniverse, TAO Toolkit or Edge AI, please contact us.
