This is the release note for HHB 2.6, which applies to chip development boards equipped with Xuantie CPUs.
This release includes functional enhancements and bug fixes.
Features and enhancements
This release adds the following features and enhancements:
- Added the c920v2 target
- Added the optional third-party quantization tool ppq
Limitations
This release has the following limitations:
- On the NPU of the th1520 platform, softmax cannot be used as the first layer. For more details, refer to the list of OPs supported by the platform.
- Quantizing the weights of the BERT, mobileVit, swin-transformer, and facedetect models with the 8-bit quantization algorithm may cause accuracy problems.
Bug fixes
This release fixes the following issues:
Known issues
This release has the following known issues:
- On the NPU of the th1520 platform, some combinations of leaky relu + add, split + concat, and concat + concat cause abnormal precision.
Deprecated features
Starting from this release, the following features are no longer supported or recommended:
- --channel-quantization: no longer available; the corresponding quantization types int4_asym_w_sym, int8_asym_w_sym, and float16_w_int8 use per-channel quantization by default.
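As background on the deprecation above: per-channel quantization computes a separate scale for each output channel of a weight tensor instead of one scale for the whole tensor, which preserves accuracy better when channel magnitudes differ widely. A minimal plain-Python sketch of symmetric 8-bit weight quantization in both modes (illustrative only, not HHB's actual implementation):

```python
def quantize_sym_int8(values, scale):
    """Symmetric int8 quantization: round(v / scale), clamped to [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def per_tensor_scale(weight):
    """One scale for the whole tensor: the global max |w| is mapped to 127."""
    return max(abs(v) for row in weight for v in row) / 127.0

def per_channel_scales(weight):
    """One scale per output channel (row): each row's max |w| is mapped to 127."""
    return [max(abs(v) for v in row) / 127.0 for row in weight]

# Toy weight matrix: two output channels with very different value ranges.
weight = [
    [0.5, -0.3, 0.1],    # channel 0: small magnitudes
    [10.0, -8.0, 4.0],   # channel 1: large magnitudes
]

# Per-tensor: channel 0 is crushed into just a few integer levels,
# because the single scale is dominated by channel 1.
s_tensor = per_tensor_scale(weight)
q_tensor = [quantize_sym_int8(row, s_tensor) for row in weight]

# Per-channel: each channel uses the full int8 range independently.
q_channel = [quantize_sym_int8(row, s)
             for row, s in zip(weight, per_channel_scales(weight))]
```

With the per-tensor scale, channel 0 collapses to small integers near zero, while the per-channel scales spread both channels across the full int8 range; this is why per-channel quantization is generally the better default for weights.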