
Commit 76ddad7

[Version] v1.7.0. (#433)
1 parent a2d33c6 commit 76ddad7


2 files changed: +23 -1 lines changed


CHANGELOG.md

+22
@@ -1,4 +1,26 @@
 # CHANGELOG
+# [Version v1.7.0](https://github.com/intel/xFasterTransformer/releases/tag/v1.7.0)
+v1.7.0 - Continuous batching feature supported.
+
+## Functionality
+- Refactor framework to support continuous batching feature. `vllm-xft`, a fork of vLLM, integrates the xFasterTransformer backend and maintains compatibility with most of the official vLLM features.
+- Remove FP32 data type option of KV Cache.
+- Add `get_env()` Python API to get the recommended `LD_PRELOAD` set.
+- Add GPU build option for Intel Arc GPU series.
+- Expose the interface of the LLaMA model, including Attention and decoder.
+
+## Performance
+- Update xDNN to release `v1.5.1`.
+- Baichuan series models support a full FP16 pipeline to improve performance.
+- More FP16 data type kernels added, including MHA, MLP, YARN rotary_embedding, rmsnorm and rope.
+- Kernel implementation of crossAttnByHead.
+
+## Dependency
+- Bump `torch` to `2.3.0`.
+
+## BUG fix
+- Fixed a segmentation fault when running with more than 4 ranks.
+- Fixed core dump and hang bugs when running across nodes.
 
 # [Version v1.6.0](https://github.com/intel/xFasterTransformer/releases/tag/v1.6.0)
 v1.6.0 - Llama3 and Qwen2 series models supported.
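Since `vllm-xft` keeps most of the official vLLM features, the continuous-batching path added in this release can be exercised through the regular vLLM Python API. A minimal sketch, assuming `vllm-xft` installs the standard `vllm` package and that `/path/to/model` is a checkpoint prepared for the xFasterTransformer backend (both are assumptions, not details recorded in this commit):

```python
# Hedged sketch: drive the xFasterTransformer backend through vllm-xft's
# vLLM-compatible Python API. The package layout, model path, and sampling
# settings are assumptions, not taken from this commit.
from vllm import LLM, SamplingParams

prompts = [
    "What is continuous batching?",
    "Summarize xFasterTransformer in one sentence.",
]
sampling_params = SamplingParams(temperature=0.7, max_tokens=64)

# Requests are batched continuously by the engine; vllm-xft is expected to
# route execution through the xFasterTransformer backend.
llm = LLM(model="/path/to/model")
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```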
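The new `get_env()` helper returns the recommended `LD_PRELOAD` settings, which must be in place before the process loads its shared libraries, so it is most useful from a small bootstrap step. A minimal sketch, assuming the helper lives in the `xfastertransformer` package and returns a printable value (the exact module path and return format are assumptions):

```python
# Hedged sketch: print the recommended LD_PRELOAD value so a launcher script
# can export it before starting the actual workload. The module path and the
# return format of get_env() are assumptions based on the changelog entry.
import xfastertransformer

print(xfastertransformer.get_env())
```

A wrapper script could then export the printed value before launching the workload, e.g. `LD_PRELOAD=$(python -c 'import xfastertransformer; print(xfastertransformer.get_env())') python your_script.py`, though the exact contents of that string are not specified here.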

VERSION

+1 -1
@@ -1 +1 @@
-1.6.0
+1.7.0
