This repository was archived by the owner on Oct 25, 2024. It is now read-only.

update ci paths #164

Merged
merged 7 commits into from
Aug 25, 2023

Conversation

XuehaoSun
Contributor

Type of Change

Update the pre-CI format-scan paths.

Description

Add the following paths:
- neural_chat/**
- workflows/**
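The two added globs would typically appear as a path filter in a GitHub Actions trigger. A minimal sketch, assuming a pull-request-triggered format-scan workflow; the workflow name, job, and steps are illustrative assumptions, not the repository's actual CI config — only the two path globs come from this PR:

```yaml
# Hypothetical pre-CI format-scan workflow; only the two path
# globs below are taken from this PR, the rest is illustrative.
name: format-scan
on:
  pull_request:
    paths:
      - "neural_chat/**"
      - "workflows/**"
jobs:
  format-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "run format scan here"
```

With a filter like this, the scan is triggered only when a pull request touches files under either directory.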

Expected Behavior & Potential Risk

None; this is a CI-only change.

How has this PR been tested?

Pre-CI checks.

Dependency Change?

No.

Signed-off-by: Sun, Xuehao <xuehao.sun@intel.com>
@XuehaoSun
Contributor Author

@lvliang-intel please fix the Bandit issue

lkk12014402 and others added 5 commits August 24, 2023 01:34
Signed-off-by: Lv, Kaokao <kaokao.lv@intel.com>
* add main_chat and quant_model

* move model_tokenize and model_init_from_gpt_params into utils

* refactor cmake

* rm old application folders

* clean code

* unify applications CMakeLists

---------

Signed-off-by: Yu, Zhentao <zhentao.yu@intel.com>
Signed-off-by: changwangss <chang1.wang@intel.com>
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
add fusion support for llama

add fp8 ffn_silu fusion

fix hasISA issue

fix gcc9 compile

fix bug of fp16 weight's quant

fix 4bit size

add fusion support for gemm_add

enable ffn_gelu_add

sync jblas, pass compilation

fix gcc error

fix bug. remove lm_head from non_quant

fix mha

sync with QBits updates.

fix f4 scale

Synchronize jblas code.

Remove the high gcc version requirement.

auto-fusion: depends on weight type and runtime ISA support.

---------

Signed-off-by: luoyu-intel <yu.luo@intel.com>
Co-authored-by: Ding, Yi1 <yi1.ding@intel.com>
Co-authored-by: Wang, Zhe1 <zhe1.wang@intel.com>
Signed-off-by: lvliang-intel <liang1.lv@intel.com>
@XuehaoSun XuehaoSun merged commit 925051c into main Aug 25, 2023
@XuehaoSun XuehaoSun deleted the xuehao/update_ci branch August 25, 2023 02:12
6 participants