Programming with BMNNSDK
There are two ways to program with the runtime library:
BMNet
BMKernel
Programming by BMNet
We provide multiple utility tools to convert CAFFE models into machine instructions. These instructions, together with the model's weights, are packed into a file called a bmodel (the model file format for BITMAIN targets), which can be executed directly on a BITMAIN board. BMNet implements many common layers; the full list of built-in layers is shown below, and more layers are under development:
Activation
BatchNorm
Concat
Convolution
Eltwise
Flatten
InnerProduct
Join
LRN
Normalize
Permute
Pooling
PReLU
PriorBox
Reorg
Reshape
Scale
Split
Upsample
The programming flow is as follows:
BMNet takes the caffemodel generated by the CAFFE framework and the deploy file deploy.prototxt as input. After processing stages such as the front end, optimizer, and back end, the bmodel file is generated.
$ bm_builder.bin \
-t bm1880 \
-n googlenet \
-c /data/bmnet_models/googlenet/googlenet.caffemodel \
-m /data/bmnet_models/googlenet/googlenet_deploy.prototxt \
--enable-layer-group=yes \
-s 1,3,224,224 \
-o bmnet/out/googlenet_1_3_224_224.bmodel
If all layers of your network model are supported by BMNet, compiling the network from the command line like this is very convenient; otherwise, refer to the BMKernel mode.
Programming by BMKernel
When programming with BMKernel, call the bmruntime_bmkernel_create() function to create a BMKernel instance. After the BMKernel is created, applications can use the BMKernel interfaces to generate kernel commands, and then submit the commands with bmruntime_bmkernel_submit(). Finally, bmruntime_bmkernel_destroy() should be called to release the kernel resources. The programming flow chart is as follows:

BMNet provides a series of APIs to add customized layers without modifying the BMNet core code. A customized layer can be an entirely new layer, or it can replace an original caffe layer in BMNet. The tutorial below walks through the steps to create a simple custom layer that replaces an original caffe layer in BMNet (using the LeakyRelu layer as an example; the source code can be found in bmnet/example/customized_layer_1880/).
Add new caffe layer definition
Modify bmnet_caffe.proto in the path “bmnet/examples/customized_layer_1880/proto”. First, check whether the layer already exists. If it exists, skip this step; if it does not, append a new line at the end of LayerParameter with a new index, and add the definition of the LayerParameter.
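As an illustration of this step, a hypothetical LayerParameter entry for the LeakyRelu example might look like the following. The field index 200 and the parameter message contents are assumptions, not values from the SDK; use an index that is not already taken in your bmnet_caffe.proto.

```protobuf
message LayerParameter {
  // ... existing fields ...
  optional LeakyReluParameter leakyrelu_param = 200;  // hypothetical index
}

message LeakyReluParameter {
  optional float negative_slope = 1 [default = 0.25];
}
```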
Add new CAFFE layer class
Create a child class that inherits from CustomizedCaffeLayer, and implement the layer_name(), dump(), and codegen() member methods (and optionally setup()):
layer_name(): returns the string name of the layer type.
setup(): optional. Only needed to call set_sub_type() if necessary; if it is not implemented, the sub_type defaults to the layer type.
dump(): dumps the details of the parameters of the newly added CAFFE layer.
codegen(): converts the parameters of the CAFFE layer to tg_customized_param, the parameter of the customized IR.
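A minimal sketch of such a child class, using the LeakyRelu example. The CustomizedCaffeLayer base class and tg_customized_param come from BMNet; the stub definitions below only mimic the member methods named above, so all signatures here are assumptions.

```cpp
// Sketch of a customized CAFFE layer class (LeakyRelu example).
// The stub base class and parameter struct below are assumptions that
// stand in for BMNet's real CustomizedCaffeLayer / tg_customized_param.
#include <cstdio>
#include <string>

struct TgCustomizedParam {            // stand-in for tg_customized_param
    float negative_slope;
};

class CustomizedCaffeLayer {          // stub of the BMNet base class
public:
    virtual ~CustomizedCaffeLayer() = default;
    virtual std::string layer_name() = 0;
    virtual void dump() = 0;
    virtual void codegen(TgCustomizedParam *param) = 0;
};

class LeakyReluLayer : public CustomizedCaffeLayer {
public:
    explicit LeakyReluLayer(float slope) : negative_slope_(slope) {}

    // layer_name(): return the string name of the layer type.
    std::string layer_name() override { return "LeakyRelu"; }

    // dump(): print the parameters of the newly added CAFFE layer.
    void dump() override {
        std::printf("LeakyRelu: negative_slope=%f\n", negative_slope_);
    }

    // codegen(): copy the CAFFE layer parameters into the customized IR param.
    void codegen(TgCustomizedParam *param) override {
        param->negative_slope = negative_slope_;
    }

private:
    float negative_slope_;
};
```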
Add new Tensor Instruction class
Create a child class that inherits from CustomizedTensorFixedInst (a class that converts IR to instructions), and implement the inst_name(), dump(), and encode() member functions:
inst_name(): returns the IR name: lowercase, with the prefix “tg” plus the sub_type set in 6.2.
dump(): dumps the details of the IR op's tg_customized_param.
encode(): converts the IR to instructions. If the IR can be deployed to the NPU, use the BMKernel API to implement it; otherwise, you can implement a pure CPU version in C++.
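A minimal sketch of such a tensor-instruction class for the LeakyRelu example. The real CustomizedTensorFixedInst base class is part of BMNet; the stub below only mirrors the three member functions named above, so every signature here is an assumption.

```cpp
// Sketch of a customized tensor-instruction class (LeakyRelu example).
// The stub base class is an assumption standing in for BMNet's real
// CustomizedTensorFixedInst.
#include <cstdio>
#include <string>

class CustomizedTensorFixedInst {     // stub of the BMNet base class
public:
    virtual ~CustomizedTensorFixedInst() = default;
    virtual std::string inst_name() = 0;
    virtual void dump() = 0;
    virtual void encode() = 0;
};

class TGLeakyReluInst : public CustomizedTensorFixedInst {
public:
    // inst_name(): the IR name, lowercase, prefix "tg" + sub_type
    // ("leakyrelu" and the "_" separator are assumptions for this example).
    std::string inst_name() override { return "tg_leakyrelu"; }

    // dump(): print the tg_customized_param details of the IR op.
    void dump() override { std::printf("inst: %s\n", inst_name().c_str()); }

    // encode(): convert the IR to instructions -- BMKernel calls for the
    // NPU version, or delegation to a registered CPU op otherwise.
    void encode() override { /* BMKernel calls or CPU fallback go here */ }
};
```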
NPU Version
If the IR can be deployed to the NPU, use the BMKernel APIs to implement the encode() function. For more details about the BMKernel APIs, please refer to the related document.
CPU Version
If the IR can only be executed on the CPU, add a new CPU op and store the IR op_ in it:
Navigate to the cpu_op folder and create a new .cpp source file whose name is the same as the type name of the customized layer. In the file, create a child class that inherits from CpuOp, and implement the run() member method in C++. Finally, register the new class with REGISTER_CPU_OP().
To compile the newly added source file, add it to the CMakeLists.txt in the same folder.
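A minimal sketch of such a CPU op for the LeakyRelu example. The CpuOp base class and the REGISTER_CPU_OP() macro are stubbed here as assumptions; only the run() body (an in-place LeakyReLU over a float buffer) is meant to illustrate the pure C++ implementation.

```cpp
// Sketch of a CPU op for the LeakyRelu example.  The stub base class and
// no-op registration macro below are assumptions standing in for BMNet's
// real CpuOp / REGISTER_CPU_OP().
#include <cstddef>

class CpuOp {                          // stub of the BMNet base class
public:
    virtual ~CpuOp() = default;
    virtual void run() = 0;
};

// Stand-in for BMNet's registration macro; the real one registers the
// class so the runtime can look it up by layer type name.
#define REGISTER_CPU_OP(name, cls)

class LeakyReluCpuOp : public CpuOp {
public:
    LeakyReluCpuOp(float *data, std::size_t n, float slope)
        : data_(data), n_(n), slope_(slope) {}

    // run(): pure C++ implementation of LeakyReLU, applied in place:
    // y = x for x >= 0, y = slope * x for x < 0.
    void run() override {
        for (std::size_t i = 0; i < n_; ++i)
            if (data_[i] < 0.0f) data_[i] *= slope_;
    }

private:
    float *data_;
    std::size_t n_;
    float slope_;
};

REGISTER_CPU_OP(LeakyRelu, LeakyReluCpuOp)
```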
Programming application
Introduction to development environment
We provide a Docker development image that includes the tools and dependent libraries required for BMNNSDK application development; users can use it to develop BMNNSDK applications.
The Docker development image does not contain the BMNNSDK itself; please import the BMNNSDK into the Docker development image before using it for development.
Use the development environment
Please make sure you have installed the BMNNSDK before using the Docker development environment, and then import it into the Docker development environment.
Example of compiling in USB mode
After entering the docker container, here is an example of compiling in USB mode (the commands are executed in the container, at the user@workspace$ prompt):
Example of compiling in SoC mode
Unzip the BMNNSDK compression package for SoC mode, import it into the docker development image, and run the docker development image.
After entering the docker container, here is an example of compiling in SoC mode.
Running the sample code
The code will be generated in the local directory:
Deploy the code to the deployment environment, and run it. For USB mode, you can deploy it to a PC installed with the BM1880 development board. For SoC mode, you can deploy it to the BM1880 SoC development board via SD card, Ethernet, or packaged file system.
Running
Programming requires the APIs of the BMNet inference engine. The programming flow chart is as follows:

Example code as follows: