TensorOp represents a BMNET IR operation and serves as the bridge between the front end and the back end. It provides member methods to set information on, or get information from, the IR. Below is the prototype:
namespace bmnet {
class TensorOp {
public:
  int input_shape_size();
  int output_shape_size();
  const TensorShape& input_shape(int index);
  const TensorShape& output_shape(int index);
  TensorShape* add_output_shape();
  u64 global_input(int index);
  u64 global_output(int index);
  TGCustomizedParameter* mutable_tg_customized_param();
  const TGCustomizedParameter& tg_customized_param();
};
}
int TensorOp::input_shape_size()
Return the number of inputs.
int TensorOp::output_shape_size()
Return the number of outputs.
const TensorShape& TensorOp::input_shape(int index)
Return the shape of the input at the given index.
Parameter | Type | Description |
index | int | [Required] index of the input whose shape is to be returned. |
const TensorShape& TensorOp::output_shape(int index)
Return the shape of the output at the given index.
Parameter | Type | Description |
index | int | [Required] index of the output whose shape is to be returned. |
TensorShape* TensorOp::add_output_shape()
Return a mutable pointer to a newly added TensorShape in the outputs. The returned TensorShape can be modified later.
u64 TensorOp::global_input(int index)
Return the device-memory offset of the input tensor at the given index.
Parameter | Type | Description |
index | int | [Required] index of the input whose offset is to be returned. |
u64 TensorOp::global_output(int index)
Return the device-memory offset of the output tensor at the given index.
Parameter | Type | Description |
index | int | [Required] index of the output whose offset is to be returned. |
TGCustomizedParameter* TensorOp::mutable_tg_customized_param()
Return a mutable pointer to the parameters of the customized BMNET IR.
const TGCustomizedParameter& TensorOp::tg_customized_param()
Return a reference to the customized BMNET IR's parameters.
CustomizedCaffeLayer is an abstract class used to implement a Layer that converts a CAFFE layer into the BMNET IR (please refer to Chapter 5 for details about the BMNET IR). To introduce a customized CAFFE layer into BMNet, inherit from this class and implement all of its pure virtual functions. CustomizedCaffeLayer inherits from the CaffeLayer/Layer classes. Below are their prototypes:
namespace bmnet {
class Layer {
public:
  Layer();
  virtual ~Layer(void);
  virtual std::string layer_name() = 0;
  virtual void dump() = 0;
  virtual void codegen(TensorOp *op) = 0;
protected:
  void add_output_offset(int offset);
};
}

namespace bmnet {
class CaffeLayer : public Layer {
public:
  CaffeLayer() {}
  virtual ~CaffeLayer(void);
protected:
  caffe::LayerParameter &layer_;
};
}

namespace bmnet {
class CustomizedCaffeLayer : public CaffeLayer {
public:
  CustomizedCaffeLayer();
  ~CustomizedCaffeLayer();
  void setup(TensorOp* op) override {
    ......
    TGCustomizedParameter* param = op->mutable_tg_customized_param();
    param->set_sub_type(layer_name());
  }
};
}
std::string CustomizedCaffeLayer::layer_name()
Pure virtual function; returns the type name of the newly added CAFFE layer.
void CustomizedCaffeLayer::dump()
Pure virtual function; used to print information about the CAFFE layer.
void CustomizedCaffeLayer::setup(TensorOp* op)
Optional. It only sets the sub type of the customized layer and is implemented by default. If a child class overrides it, the parent class's setup function must be called first.
void CustomizedCaffeLayer::codegen(TensorOp* op)
Pure virtual function; used to set up the BMNET IR according to the LayerParameter of the CAFFE layer. In this function you should set the output shape and fill parameters into the TensorOp.
TBD
Parameter | Type | Description |
op | TensorOp* | [Required] pointer to an instance of the BMNET IR |
void CustomizedCaffeLayer::add_output_offset(int offset)
Protected member method; should be called to set the output offset of the Layer's top.
Parameter | Type | Description |
offset | int | [Required] offset of the output; should be 0. |
caffe::LayerParameter& CustomizedCaffeLayer::layer_
Protected member variable; a reference to the customized CAFFE layer's LayerParameter.
CustomizedTensorFixedInst is an abstract class used to implement a Layer that converts the BMNET IR into instructions via BMKernel APIs. Inherit from this class and implement all of its pure virtual functions. CustomizedTensorFixedInst inherits from the TensorFixedInst/TensorInst classes. Below are their prototypes:
namespace bmnet {
class TensorFixedInst : public TensorInst {
public:
  TensorFixedInst() : TensorInst() {}
  TensorFixedInst(TensorOp &op) : TensorInst(op) {}
  virtual ~TensorFixedInst(void);
  void SetCalibrationParameter(const LayerCalibrationParameter &calibration_parameter) {
    m_calibrationParameter = calibration_parameter;
  }
  void AddInputCalibrationParameter(const LayerCalibrationParameter &calibration_parameter) {
    m_inputCalibrationParameter.push_back(calibration_parameter);
  }
protected:
  LayerCalibrationParameter m_calibrationParameter;
  std::vector<LayerCalibrationParameter> m_inputCalibrationParameter;
};
}

namespace bmnet {
class TensorInst {
public:
  TensorInst();
  virtual ~TensorInst(void);
  virtual std::string inst_name() = 0;
  virtual void dump() = 0;
  virtual void encode() = 0;
protected:
  TensorOp &op_;
};
}

namespace bmnet {
class CustomizedTensorFixedInst : public TensorFixedInst {
public:
  CustomizedTensorFixedInst();
  ~CustomizedTensorFixedInst();
protected:
  u64 get_global_neuron_base();
  u64 get_global_weight_base();
};
}
std::string CustomizedTensorFixedInst::inst_name()
Pure virtual function; returns the type name of the customized BMNET IR.
void CustomizedTensorFixedInst::dump()
Pure virtual function; used to print information about the BMNET IR.
void CustomizedTensorFixedInst::encode()
Pure virtual function; used to convert the BMNET IR into instructions using BMKernel APIs.
u64 CustomizedTensorFixedInst::get_global_neuron_base()
Protected member method; returns the base address at which neurons are stored in device memory.
u64 CustomizedTensorFixedInst::get_global_weight_base()
Protected member method; returns the base address at which weights are stored in device memory.
TensorOp& CustomizedTensorFixedInst::op_
Protected member variable; a reference to the BMNET IR.
TGCustomizedParameter represents the parameters of a customized BMNET IR. It provides member methods to set parameters on, or get them from, it. Below is the prototype:
namespace bmnet {
class TGCustomizedParameter {
public:
  int i32_param_size();
  int f32_param_size();
  int i32_param(int index);
  float f32_param(int index);
  void add_i32_param(int value);
  void add_f32_param(float value);
};
}
int TGCustomizedParameter::i32_param_size()
Return the number of int parameters stored in TGCustomizedParameter.
int TGCustomizedParameter::f32_param_size()
Return the number of float parameters stored in TGCustomizedParameter.
int TGCustomizedParameter::i32_param(int index)
Return the int parameter at the given index.
Parameter | Type | Description |
index | int | [Required] index of the int parameter to be returned. |
float TGCustomizedParameter::f32_param(int index)
Return the float parameter at the given index.
Parameter | Type | Description |
index | int | [Required] index of the float parameter to be returned. |
void TGCustomizedParameter::add_i32_param(int value)
Append a new int parameter to TGCustomizedParameter.
Parameter | Type | Description |
value | int | [Required] int parameter. |
void TGCustomizedParameter::add_f32_param(float value)
Append a new float parameter to TGCustomizedParameter.
Parameter | Type | Description |
value | float | [Required] float parameter. |
TensorShape represents the shape of a tensor. Below is the prototype:
namespace bmnet {
class TensorShape {
public:
  void CopyFrom(const TensorShape& from);
  int dim_size() const;
  int dim(int index);
  void add_dim(int value);
};
}
int TensorShape::dim_size()
Return the number of dims.
int TensorShape::dim(int index)
Return one dim by index.
Parameter | Type | Description |
index | int | [Required] index of the dim to be returned. |
void TensorShape::add_dim(int value)
Append a dim to TensorShape.
Parameter | Type | Description |
value | int | [Required] new dim to be appended. |
void TensorShape::CopyFrom(const TensorShape& from)
Copy from another TensorShape instance.
Parameter | Type | Description |
from | const TensorShape& | [Required] source TensorShape instance. |
CaffeBuilder is a class that provides a uniform interface combining the front end, optimizer, and back end core code to compile a CAFFE neural network graph into a bmodel file. CaffeBuilder inherits from the Builder class, which is the base compiler class. Below are their prototypes:
namespace bmnet {
class Builder {
public:
  Builder(CHIP_VER ver);
  virtual ~Builder();
  void addCustomizedTensorInst(TensorInst *inst);
  void build(int n, int c, int h, int w, int opt);
  void store_prototxt(const char *dst);
  void store_model(const char *net_name, const char *dst, const char *plugin_path = nullptr);
};
}

namespace bmnet {
class CaffeBuilder : public Builder {
public:
  CaffeBuilder(CHIP_VER ver,
               const char *modified_proto,
               const char *caffemodel,
               const char *weight_bin,
               const char *in_ctable,
               const char *out_ctable);
  ~CaffeBuilder();
  void addCustomizedLayer(Layer *layer);
};
}
CaffeBuilder::CaffeBuilder(CHIP_VER ver, const char *modified_proto, const char *caffemodel, const char *weight_bin, const char *in_ctable, const char *out_ctable)
Constructor of the CaffeBuilder class.
Parameter | Type | Description |
ver | CHIP_VER | [Required] The target chip version. Currently only BM_CHIP_BM1880 is available. |
modified_proto | const char* | [Optional] The modified prototxt file; please refer to Chapter 4 for more detail. |
caffemodel | const char* | [Required] The specified caffemodel file of the network |
weight_bin | const char* | [Optional] The specified weight file of the network |
in_ctable | const char* | [Required] The specified input calibration table file of the network |
out_ctable | const char* | [Required] The specified output calibration table file of the network |
modified_proto and weight_bin are optional parameters, so you do not need to fill in all of the parameters. The following combinations are valid: 1) caffemodel only; 2) caffemodel together with modified_proto.
void CaffeBuilder::build(int n, int c, int h, int w, int opt)
Core member function of the CaffeBuilder class; compiles the network with the specified input shape and optimization level.
Parameter | Type | Description |
n,c,h,w | int | [Required] The input shape |
opt | int | [Optional] The optimization level. The default value is BM_OPT_LAYER_GROUP_WITH_WEIG |
Below are the values for opt.
value | Description |
OPT_NONE | No optimization |
BM_OPT_LAYER_GROUP | Divides layers into clusters to optimize the bandwidth overhead. |
BM_OPT_LAYER_GROUP_WITH_WEIG | Adds additional optimization to reduce the device memory footprint and reshape weights. |
void CaffeBuilder::store_prototxt(const char* dst)
Store the optimized network graph as a file.
Parameter | Type | Description |
dst | const char* | [Required] File in which to store the graph |
void CaffeBuilder::store_model(const char* net_name, const char* dst, const char* plugin_path = nullptr)
Store compiled instructions, weight and other information of the network as a bmodel file.
Parameter | Type | Description |
net_name | const char* | [Required] the network name. |
dst | const char* | [Required] File to store bmodel. |
plugin_path | const char* | [Optional] path to CPU op plugins. |
void CaffeBuilder::addCustomizedLayer(Layer* layer)
Register a newly added customized layer, which is used to convert a CAFFE layer into the BMNET IR (intermediate representation).
Parameter | Type | Description |
layer | Layer* | [Required] pointer to an instance of class Layer |
void CaffeBuilder::addCustomizedTensorInst(TensorInst* inst)
Register a newly added customized TensorInst (tensor instruction), which is used to convert the BMNET IR into instructions.
Parameter | Type | Description |
inst | TensorInst* | [Required] pointer to an instance of class TensorInst |