
Use get_max_new_tokens() instead of max_new_tokens field when stopping generation #1417

Merged

Conversation

@michalkulakowski (Contributor)

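For context, a minimal sketch of what the change amounts to, using simplified, hypothetical types rather than the actual ov::genai sources: the stopping check should call get_max_new_tokens(), which also takes max_length and the prompt length into account, instead of reading the raw max_new_tokens field, which may simply be left unset.

```cpp
#include <cstddef>
#include <limits>

// Simplified stand-in for the generation config; the real
// ov::genai::GenerationConfig has more fields and validation.
struct GenerationConfigSketch {
    size_t max_new_tokens = std::numeric_limits<size_t>::max(); // often left unset
    size_t max_length = std::numeric_limits<size_t>::max();     // prompt + generated tokens

    // Effective budget of newly generated tokens: prefer an explicitly set
    // max_new_tokens, otherwise derive it from max_length and the prompt size.
    size_t get_max_new_tokens(size_t prompt_len) const {
        if (max_new_tokens != std::numeric_limits<size_t>::max())
            return max_new_tokens;
        return max_length > prompt_len ? max_length - prompt_len : 0;
    }
};

// Before this PR: the stop condition read the raw field and ignored max_length.
bool should_stop_old(const GenerationConfigSketch& cfg, size_t generated) {
    return generated >= cfg.max_new_tokens;
}

// After this PR: the stop condition goes through the helper, so max_length is respected.
bool should_stop_new(const GenerationConfigSketch& cfg, size_t prompt_len, size_t generated) {
    return generated >= cfg.get_max_new_tokens(prompt_len);
}
```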

@github-actions github-actions bot added the category: sampling Sampling / Decoding algorithms label Dec 20, 2024
@ilya-lavrenov (Contributor) left a comment


Not all the places inside src/cpp have been updated.

@mzegla (Collaborator) commented Dec 20, 2024

One more thing: I believe max_length is now loaded from the generation config. Isn't it a model property rather than a per-generation setting? @michalkulakowski I know you already have logic to read that value in OVMS. Maybe we could move it here and make it a pipeline member, so it could be used both in OVMS and in a standalone GenAI app.
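A rough sketch of that idea, with hypothetical names (PipelineSketch and effective_max_new_tokens are made up for illustration); this is not the actual OVMS or GenAI code, just an outline of keeping max_length as a pipeline member:

```cpp
#include <algorithm>
#include <cstddef>
#include <optional>

// Hypothetical sketch: the pipeline reads max_length once from the model's
// generation_config.json and stores it as a member, so both OVMS and a
// standalone GenAI application can rely on the same value.
class PipelineSketch {
public:
    explicit PipelineSketch(std::optional<size_t> max_length_from_generation_config)
        : m_max_length(max_length_from_generation_config) {}

    // Clamp a per-request token budget by the model-level max_length, if any.
    size_t effective_max_new_tokens(size_t requested, size_t prompt_len) const {
        if (!m_max_length)
            return requested;
        size_t budget = *m_max_length > prompt_len ? *m_max_length - prompt_len : 0;
        return std::min(requested, budget);
    }

private:
    std::optional<size_t> m_max_length; // loaded once from generation_config.json
};
```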

@michalkulakowski (Contributor, Author)

> One more thing: I believe max_length is now loaded from the generation config. Isn't it a model property rather than a per-generation setting? @michalkulakowski I know you already have logic to read that value in OVMS. Maybe we could move it here and make it a pipeline member, so it could be used both in OVMS and in a standalone GenAI app.

That makes sense to me. @ilya-lavrenov what do you think?

@ilya-lavrenov (Contributor) commented Jan 4, 2025

> One more thing: I believe max_length is now loaded from the generation config. Isn't it a model property rather than a per-generation setting? @michalkulakowski I know you already have logic to read that value in OVMS. Maybe we could move it here and make it a pipeline member, so it could be used both in OVMS and in a standalone GenAI app.

I suppose it depends on the model:

It looks like max_model_length (i.e. config.max_position_embeddings, see https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/config.json#L13 for an example) and max_length from generation_config.json are different things, aren't they?

Maybe we can mirror that behavior in GenAI and add some defaults similar to HF's? (See the sketch after this comment.)

@Wovchena @pavel-esir @as-suvorov what is your opinion?
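For illustration only, a minimal sketch of how such HF-like defaulting could combine the two limits; the function name and the exact semantics here are assumptions, not the library's actual behavior:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical defaulting logic: max_length comes from generation_config.json
// (or a library default), while max_position_embeddings from config.json is a
// hard model limit on prompt + generated tokens. The smaller of the two bounds
// the total sequence; whatever remains after the prompt is the generation budget.
size_t default_max_new_tokens(size_t prompt_len,
                              size_t max_length_from_generation_config,
                              size_t max_position_embeddings) {
    size_t total_limit = std::min(max_length_from_generation_config, max_position_embeddings);
    return total_limit > prompt_len ? total_limit - prompt_len : 0;
}
```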

@michalkulakowski (Contributor, Author)

@Wovchena @pavel-esir @as-suvorov please share your opinion

@ilya-lavrenov (Contributor)

> @Wovchena @pavel-esir @as-suvorov please share your opinion

I think that even without a default value for max_new_tokens, we can proceed with the other changes, which will respect max_length.

@github-actions github-actions bot removed the category: sampling Sampling / Decoding algorithms label Mar 4, 2025
@ilya-lavrenov (Contributor)

build_jenkins

@ilya-lavrenov ilya-lavrenov added this to the 2025.1 milestone Mar 4, 2025
@ilya-lavrenov (Contributor)

Please fix the compilation errors.

@ilya-lavrenov (Contributor)

build_jenkins

@ilya-lavrenov ilya-lavrenov enabled auto-merge March 5, 2025 13:54
@ilya-lavrenov ilya-lavrenov added this pull request to the merge queue Mar 5, 2025
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 5, 2025
@ilya-lavrenov ilya-lavrenov merged commit 0214ba8 into openvinotoolkit:master Mar 5, 2025
61 checks passed