[AI Model Conversion] Snapdragon Neural Processing Engine SDK Tutorial

How do you put an AI model on Snapdragon?

With the advance of AI (artificial intelligence) technology, AI features are being built into a wide variety of embedded devices. Qualcomm, which has long focused on SoCs for embedded devices, has been releasing high-performance, AI-capable SoCs one after another in recent years, and the trend is only accelerating.

When incorporating an AI model created with a common framework into an SoC, development tools provided by the SoC vendor may be required. For Snapdragon, Qualcomm provides a free software development kit called the Snapdragon Neural Processing Engine SDK (SNPE). If you are interested in the SNPE SDK, please contact us for information on how to download it.



In this article, I would like to try converting an AI model using SNPE.

SNPE Workflow

The upper part of the figure shows the model preparation (training) phase. The AI frameworks supported by SNPE are TensorFlow, TensorFlow Lite, Caffe, Caffe2, ONNX, and PyTorch, so prepare a model created with one of these frameworks. This time I am using a pretrained TensorFlow-based model. The lower part shows the phase of converting the model to a Snapdragon-compatible format and running inference. In this article, I will try the model conversion step, outlined in red in the figure.

Try converting the model

Test environment

・Ubuntu 18.04
・Snapdragon Neural Processing Engine SDK v1.61
・Python 3.6.9

Install Python-related tools

$ sudo apt update && sudo apt upgrade -y
$ sudo apt install python3-dev python3-pip python3-venv -y

Create a virtual environment with venv

$ python3 -m venv venv
$ source venv/bin/activate
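
You can confirm that the virtual environment is active and which interpreter it uses with an optional check like the following:

(venv) $ python --version
(venv) $ which python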

Python package versions used in the venv environment

・protobuf==3.6.0
・matplotlib==3.0.3
・sphinx==2.2.1
・scipy==1.3.1
・Pillow==7.2
・numpy==1.16.5
・scikit-image==0.15.0
・tensorflow==1.11
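
These can be installed into the virtual environment with pip, pinning the versions listed above:

(venv) $ pip install protobuf==3.6.0 matplotlib==3.0.3 sphinx==2.2.1 scipy==1.3.1 Pillow==7.2 numpy==1.16.5 scikit-image==0.15.0 tensorflow==1.11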

Setting up the operating environment for SNPE

# Place the SNPE SDK zip file in any directory (contact us for how to obtain the SDK)
(venv) $ unzip -X snpe-1.61.0.zip
(venv) $ export SNPE_ROOT=<path where snpe-1.61.0.zip was extracted>
(venv) $ source $SNPE_ROOT/bin/dependencies.sh
(venv) $ source $SNPE_ROOT/bin/check_python_depends.sh
(venv) $ export TENSORFLOW_DIR=<path where TensorFlow is installed>
(venv) $ cd $SNPE_ROOT
(venv) $ source bin/envsetup.sh -t $TENSORFLOW_DIR
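
As an optional sanity check, you can confirm that the SDK's conversion tools are now on the PATH:

(venv) $ which snpe-tensorflow-to-dlc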

Get the TensorFlow Model Garden repository

(venv) $ cd models && mkdir tfmodels && cd tfmodels
(venv) $ git clone https://github.com/tensorflow/models.git
(venv) $ cd models
(venv) $ git checkout ad386df597c069873ace235b931578671526ee00

Add the repository to PYTHONPATH

(venv) $ cd $SNPE_ROOT/models/tfmodels/models/research
(venv) $ export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

Generating Python files with the Protocol Buffers compiler

(venv) $ wget https://github.com/protocolbuffers/protobuf/releases/download/v3.6.0/protoc-3.6.0-linux-x86_64.zip
(venv) $ unzip protoc-3.6.0-linux-x86_64.zip 
(venv) $ ./bin/protoc object_detection/protos/*.proto --python_out=.
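
If the compilation succeeds, the generated *_pb2.py files should appear alongside the .proto definitions:

(venv) $ ls object_detection/protos/*_pb2.py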

Get the MobileNet-SSD model and convert it to an inference graph

(venv) $ cd $SNPE_ROOT/models
(venv) $ mkdir mobilenet && cd mobilenet
(venv) $ wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz
(venv) $ tar -xzvf ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz
(venv) $ mv ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03 saved_model
(venv) $ cd saved_model
(venv) $ mkdir exported
(venv) $ cd $SNPE_ROOT/models/tfmodels/models/research/
(venv) $ INPUT_TYPE=image_tensor
(venv) $ PIPELINE_CONFIG_PATH=$SNPE_ROOT/models/mobilenet/saved_model/pipeline.config
(venv) $ TRAINED_CKPT_PREFIX=$SNPE_ROOT/models/mobilenet/saved_model/model.ckpt
(venv) $ EXPORT_DIR=$SNPE_ROOT/models/mobilenet/saved_model/exported
(venv) $ python object_detection/export_inference_graph.py --input_type=${INPUT_TYPE} --pipeline_config_path=${PIPELINE_CONFIG_PATH} --trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} --output_directory=${EXPORT_DIR}
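
Before converting, it can help to confirm the input and output node names that will be passed to the converter in the next step. Below is a minimal sketch using the TensorFlow installed in the venv; the node names checked here are assumptions based on the conversion command that follows:

(venv) $ python - <<'EOF'
import os
import tensorflow as tf

# Load the frozen inference graph exported above
pb_path = os.path.join(os.environ['SNPE_ROOT'],
                       'models/mobilenet/saved_model/exported/frozen_inference_graph.pb')
graph_def = tf.GraphDef()
with tf.gfile.GFile(pb_path, 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print the nodes used as the input and outputs of the DLC conversion step
targets = {'Preprocessor/sub', 'detection_classes', 'detection_boxes', 'detection_scores'}
for node in graph_def.node:
    if node.name in targets:
        print(node.name, node.op)
EOF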

Convert to DLC format

(venv) $ cd $SNPE_ROOT/models/mobilenet && mkdir dlc
(venv) $ snpe-tensorflow-to-dlc --input_network saved_model/exported/frozen_inference_graph.pb --input_dim Preprocessor/sub 1,300,300,3 --out_node detection_classes --out_node detection_boxes --out_node detection_scores --output_path dlc/mobilenet_ssd.dlc

A successful conversion displays output similar to the following:

INFO - INFO_CONVERSION_SUCCESS: Conversion completed successfully

I confirmed that a DLC-format file was generated at the path set as the model output destination. This completes the conversion of the AI model.
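
To take a quick look at the contents of the generated file, the snpe-dlc-info tool bundled with the SDK can be used as an optional check:

(venv) $ snpe-dlc-info -i dlc/mobilenet_ssd.dlc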

In the next and subsequent articles, I would like to try running the generated model on hardware equipped with a Snapdragon SoC.

Inquiry

If you would like more information about our products, please contact us here.
