# Caffe

## Network Configuration

## TensorOp

TensorOp represents a BMNET IR, which serves as the bridge between the front end and the back end. It provides member methods to set information on, or get information from, the IR. Below is the prototype:

```cpp
namespace bmnet { 
class TensorOp {
public:
  int input_shape_size();
  int output_shape_size();
  const TensorShape& input_shape(int index);
  const TensorShape& output_shape(int index);
  TensorShape* add_output_shape();
  u64 global_input(int index);
  u64 global_output(int index);
  TGCustomizedParameter* mutable_tg_customized_param(); 
  const TGCustomizedParameter& tg_customized_param();
};
}
```

### TensorOp::input\_shape\_size

```cpp
int TensorOp::input_shape_size()
```

Return the number of inputs.

### TensorOp::output\_shape\_size

```cpp
int TensorOp::output_shape_size()
```

Return the number of outputs.

### TensorOp::input\_shape

```cpp
const TensorShape& TensorOp::input_shape(int index)
```

Return the shape of the input at the given index.

| Parameter | Type | Description                                    |
| --------- | ---- | ---------------------------------------------- |
| index     | int  | \[Required] Index of the input to be returned. |

### TensorOp::output\_shape

```cpp
const TensorShape& TensorOp::output_shape(int index)
```

Return the shape of the output at the given index.

| Parameter | Type | Description                                     |
| --------- | ---- | ----------------------------------------------- |
| index     | int  | \[Required] Index of the output to be returned. |

### TensorOp::add\_output\_shape

```cpp
TensorShape* TensorOp::add_output_shape()
```

Return a mutable pointer to a newly added output TensorShape. The returned TensorShape can be modified later.

### TensorOp::global\_input

```cpp
u64 TensorOp::global_input(int index) 
```

Return the offset of the input tensor at the given index, as stored in device memory.

| Parameter | Type | Description                                    |
| --------- | ---- | ---------------------------------------------- |
| index     | int  | \[Required] Index of the input to be returned. |

### TensorOp::global\_output

```cpp
u64 TensorOp::global_output(int index) 
```

Return the offset of the output tensor at the given index, as stored in device memory.

| Parameter | Type | Description                                     |
| --------- | ---- | ----------------------------------------------- |
| index     | int  | \[Required] Index of the output to be returned. |

### TensorOp::mutable\_tg\_customized\_param

```cpp
TGCustomizedParameter* TensorOp::mutable_tg_customized_param() 
```

Return a mutable pointer to the parameters of the customized BMNET IR.

### TensorOp::tg\_customized\_param

```cpp
const TGCustomizedParameter& TensorOp::tg_customized_param() 
```

Return a reference to the customized BMNET IR's parameters.

## CustomizedCaffeLayer

CustomizedCaffeLayer is an abstract class used to implement a Layer that converts a CAFFE layer into BMNET IR (please refer to Chapter 5 for details about BMNET IR). To introduce a customized CAFFE layer into BMNet, inherit from this class and implement all of its pure virtual functions. CustomizedCaffeLayer inherits from the CaffeLayer/Layer classes. Below are their prototypes:

```cpp
namespace bmnet {
class Layer {
public:
   Layer();
   virtual ~Layer(void);
   virtual std::string layer_name() = 0;
   virtual void dump() = 0;
   virtual void codegen(TensorOp *op) = 0;

protected:
   void add_output_offset(int offset);
};
}

namespace bmnet {

class CaffeLayer : public Layer {
public:
   CaffeLayer() {}
   virtual ~CaffeLayer(void);
protected:
   caffe::LayerParameter &layer_;
};
}

namespace bmnet {

class CustomizedCaffeLayer : public CaffeLayer {
public:
   CustomizedCaffeLayer();
   ~CustomizedCaffeLayer();
   void setup(TensorOp* op) override {
     ...
     ...
     TGCustomizedParameter* param = op->mutable_tg_customized_param();
     param->set_sub_type(layer_name());
   }
};
}
```

### CustomizedCaffeLayer::layer\_name

```cpp
std::string CustomizedCaffeLayer::layer_name()
```

Pure virtual function; returns the type string of the newly added CAFFE layer.

### CustomizedCaffeLayer::dump

```cpp
void CustomizedCaffeLayer::dump()
```

Pure virtual function; used to print information about the CAFFE layer.

### CustomizedCaffeLayer::setup

```cpp
void CustomizedCaffeLayer::setup(TensorOp* op)
```

Optional. It only sets the sub type of the customized layer, and a default implementation is provided. If a child class overrides it, the parent's setup function must be called first.

### CustomizedCaffeLayer::codegen

```cpp
void CustomizedCaffeLayer::codegen(TensorOp *op)
```

Pure virtual function; used to set up the BMNET IR according to the LayerParameter of the CAFFE layer. In this function, you should set the output shape and fill the parameters of the TensorOp.

| Parameter | Type       | Description                                   |
| --------- | ---------- | --------------------------------------------- |
| op        | TensorOp\* | \[Required] Pointer to an instance of BMNET IR. |

### CustomizedCaffeLayer::add\_output\_offset

```cpp
void CustomizedCaffeLayer::add_output_offset(int offset)
```

Protected member method; should be called to set the output offset of the layer's top blob.

| Parameter | Type | Description                                |
| --------- | ---- | ------------------------------------------ |
| offset    | int  | \[Required] offset of output, should be 0. |

### CustomizedCaffeLayer::layer\_

```cpp
caffe::LayerParameter &CustomizedCaffeLayer::layer_
```

Protected member variable; a reference to the customized CAFFE layer's LayerParameter.

## CustomizedTensorFixedInst

CustomizedTensorFixedInst is an abstract class used to implement a Layer that converts BMNET IR into instructions via the BMKernel APIs. Inherit from this class and implement all of its pure virtual functions. CustomizedTensorFixedInst inherits from the TensorFixedInst/TensorInst classes. Below are their prototypes:

```cpp
namespace bmnet {
class TensorFixedInst : public TensorInst {
public:
  TensorFixedInst() : TensorInst() {}
  TensorFixedInst(TensorOp &op) : TensorInst(op) {}
  virtual ~TensorFixedInst(void);
  void SetCalibrationParameter(
      const LayerCalibrationParameter &calibration_parameter) {
    m_calibrationParameter = calibration_parameter;
  }
  void AddInputCalibrationParameter(
      const LayerCalibrationParameter &calibration_parameter) {
    m_inputCalibrationParameter.push_back(calibration_parameter);
  }
protected:
  LayerCalibrationParameter m_calibrationParameter;
  std::vector<LayerCalibrationParameter>
      m_inputCalibrationParameter;
};
}

namespace bmnet {
class TensorInst {
public:
  TensorInst();
  virtual ~TensorInst(void);
  virtual std::string inst_name() = 0;
  virtual void dump() = 0;
  virtual void encode() = 0;

protected:
  TensorOp &op_;
};
}

namespace bmnet {

class CustomizedTensorFixedInst : public TensorFixedInst {
public:
  CustomizedTensorFixedInst();
  ~CustomizedTensorFixedInst();
protected:
  u64 get_global_neuron_base();
  u64 get_global_weight_base();
};
}
```

### CustomizedTensorFixedInst::inst\_name

```cpp
std::string CustomizedTensorFixedInst::inst_name() 
```

Pure virtual function; returns the type of the customized BMNET IR.

### CustomizedTensorFixedInst::dump

```cpp
void CustomizedTensorFixedInst::dump() 
```

Pure virtual function; used to print information about the BMNET IR.

### CustomizedTensorFixedInst::encode

```cpp
void CustomizedTensorFixedInst::encode()
```

Pure virtual function; used to convert the BMNET IR into instructions using the BMKernel APIs.

### CustomizedTensorFixedInst::get\_global\_neuron\_base

```cpp
u64 CustomizedTensorFixedInst::get_global_neuron_base() 
```

Protected member method; returns the base address at which neurons are stored in device memory.

### CustomizedTensorFixedInst::get\_global\_weight\_base

```cpp
u64 CustomizedTensorFixedInst::get_global_weight_base() 
```

Protected member method; returns the base address at which weights are stored in device memory.

### CustomizedTensorFixedInst::op\_

```cpp
TensorOp &CustomizedTensorFixedInst::op_
```

Protected member variable; a reference to the BMNET IR.

## TGCustomizedParameter

TGCustomizedParameter represents the parameters of a customized BMNET IR. It provides member methods to set parameters on, or get parameters from, it. Below is the prototype:

```cpp
namespace bmnet {

class TGCustomizedParameter {
public:
  int i32_param_size();
  int f32_param_size();
  int i32_param(int index);
  float f32_param(int index);
  void add_i32_param(int value);
  void add_f32_param(float value);
};
}
```

### TGCustomizedParameter::i32\_param\_size

```cpp
int TGCustomizedParameter::i32_param_size()
```

Return the number of int parameters stored in TGCustomizedParameter.

### TGCustomizedParameter::f32\_param\_size

```cpp
int TGCustomizedParameter::f32_param_size()
```

Return the number of float parameters stored in TGCustomizedParameter.

### TGCustomizedParameter::i32\_param

```cpp
int TGCustomizedParameter::i32_param(int index)
```

Return the int parameter at the given index.

| Parameter | Type | Description                                            |
| --------- | ---- | ------------------------------------------------------ |
| index     | int  | \[Required] Index of the int parameter to be returned. |

### TGCustomizedParameter::f32\_param

```cpp
float TGCustomizedParameter::f32_param(int index)
```

Return the float parameter at the given index.

| Parameter | Type | Description                                              |
| --------- | ---- | -------------------------------------------------------- |
| index     | int  | \[Required] Index of the float parameter to be returned. |

### TGCustomizedParameter::add\_i32\_param

```cpp
void TGCustomizedParameter::add_i32_param(int value)
```

Append a new int parameter to TGCustomizedParameter.

| Parameter | Type | Description                               |
| --------- | ---- | ----------------------------------------- |
| value     | int  | \[Required] Int parameter to be appended. |

### TGCustomizedParameter::add\_f32\_param

```cpp
void TGCustomizedParameter::add_f32_param(float value)
```

Append a new float parameter to TGCustomizedParameter.

| Parameter | Type  | Description                                 |
| --------- | ----- | ------------------------------------------- |
| value     | float | \[Required] Float parameter to be appended. |

## TensorShape

TensorShape represents a shape of tensor. Below is the prototype:

```cpp
namespace bmnet {

class TensorShape { 
public:
  void CopyFrom(const TensorShape& from); 
  int dim_size() const;
  int dim(int index);
  void add_dim(int value);
};
}
```

### TensorShape::dim\_size

```cpp
int TensorShape::dim_size()
```

Return the number of dims.

### TensorShape::dim

```cpp
int TensorShape::dim(int index) 
```

Return the dim at the given index.

| Parameter | Type | Description                                  |
| --------- | ---- | -------------------------------------------- |
| index     | int  | \[Required] Index of the dim to be returned. |

### TensorShape::add\_dim

```cpp
void TensorShape::add_dim(int value)
```

Append a dim to TensorShape.

| Parameter | Type | Description                          |
| --------- | ---- | ------------------------------------ |
| value     | int  | \[Required]  new dim to be appended. |

### TensorShape::CopyFrom

```cpp
void TensorShape::CopyFrom(const TensorShape& from)
```

Copy from another TensorShape instance.

| Parameter | Type               | Description                              |
| --------- | ------------------ | ---------------------------------------- |
| from      | const TensorShape& | \[Required] Source TensorShape instance. |

## CaffeBuilder

CaffeBuilder is a class that provides a uniform interface combining the front end, optimizer, and back end core code, to compile a CAFFE neural network graph into a bmodel file. CaffeBuilder inherits from the Builder class, the base compiler class. Below are their prototypes:

```cpp
namespace bmnet { 

class Builder {
public:
  Builder(CHIP_VER ver);
  virtual ~Builder();
  void addCustomizedTensorInst(TensorInst *inst);
  void build(int n, int c, int h, int w, int opt);
  void store_prototxt(const char *dst);
  void store_model(const char *net_name, const char *dst);
};
}
```

```cpp
namespace bmnet {

class CaffeBuilder : public Builder {
public:
  CaffeBuilder(CHIP_VER ver, const char *modified_proto,
               const char *caffemodel, const char *weight_bin,
               const char *in_ctable, const char *out_ctable);
  ~CaffeBuilder();
  void addCustomizedLayer(Layer *layer);
};
}
```

### CaffeBuilder::CaffeBuilder

```cpp
CaffeBuilder::CaffeBuilder(
    CHIP_VER ver,
    const char *modified_proto,
    const char *caffemodel,
    const char *weight_bin,
    const char *in_ctable,
    const char *out_ctable)

Constructor of the CaffeBuilder class.

| Parameter       | Type         | Description                                                                        |
| --------------- | ------------ | ---------------------------------------------------------------------------------- |
| ver             | CHIP\_VER    | \[Required] The target chip version. Currently only BM\_CHIP\_BM1880 is available. |
| modified\_proto | const char\* | \[Optional] The modified prototxt file; refer to Chapter 4 for more detail.        |
| caffemodel      | const char\* | \[Required] The specified caffemodel file of the network.                          |
| weight\_bin     | const char\* | \[Optional] The specified weight file of the network.                              |
| in\_ctable      | const char\* | \[Required] The specified input calibration table file of the network.             |
| out\_ctable     | const char\* | \[Required] The specified output calibration table file of the network.            |

modified\_proto is an optional parameter, so you do not need to supply all of these parameters. The following combinations are valid: 1) caffemodel only; 2) caffemodel together with modified\_proto.

### CaffeBuilder::build

Core member function of the CaffeBuilder class; compiles the network with the specified input shape and optimization level.

| Parameter | Type | Description                                                                                        |
| --------- | ---- | -------------------------------------------------------------------------------------------------- |
| n,c,h,w   | int  | \[Required]  The input shape                                                                       |
| opt       | int  | \[Optional] The optimization level. The default value is BM\_OPT\_LAYER\_GROUP\_WITH\_WEIG.        |

Below are the values for opt.

| value                             | Description                                                                           |
| --------------------------------- | ------------------------------------------------------------------------------------- |
| OPT\_NONE                         | No optimization                                                                       |
| BM\_OPT\_LAYER\_GROUP             | Divides layers into clusters to optimize the bandwidth overhead.                      |
| BM\_OPT\_LAYER\_GROUP\_WITH\_WEIG | Adds additional optimizations to reduce the device memory footprint and reshapes weights. |

### CaffeBuilder::store\_prototxt

Store the optimized network graph to a file.

| Parameter | Type         | Description                    |
| --------- | ------------ | ------------------------------ |
| dst       | const char\* | \[Required]  File to be stored |

### CaffeBuilder::store\_model

```cpp
void CaffeBuilder::store_model(
    const char *net_name,
    const char *dst,
    const char *plugin_path = nullptr)
```

Store the compiled instructions, weights, and other information of the network as a bmodel file.

| Parameter    | Type         | Description                        |
| ------------ | ------------ | ---------------------------------- |
| net\_name    | const char\* | \[Required] The network name.      |
| dst          | const char\* | \[Required] File to store bmodel.  |
| plugin\_path | const char\* | \[Optional] Path to CPU op plugins.|

### CaffeBuilder::addCustomizedLayer

```cpp
void CaffeBuilder::addCustomizedLayer(Layer *layer)
```

Register a newly added customized layer, which is used to convert a CAFFE layer into BMNET IR (intermediate representation).

| Parameter | Type    | Description                                       |
| --------- | ------- | ------------------------------------------------- |
| layer     | Layer\* | \[Required] Pointer to an instance of class Layer.|

### CaffeBuilder::addCustomizedTensorInst

```cpp
void CaffeBuilder::addCustomizedTensorInst(TensorInst* inst)
```

Register a newly added customized TensorInst (tensor instruction), which is used to convert BMNET IR into instructions.

| Parameter | Type         | Description                                             |
| --------- | ------------ | ------------------------------------------------------- |
| inst      | TensorInst\* | \[Required] Pointer to an instance of class TensorInst. |

