
[Executorch][llama] Change runner to decouple prompt length from sequence length #9594

Merged: 2 commits into gh/kimishpatel/162/base on Mar 26, 2025

Conversation

@kimishpatel (Contributor) commented Mar 25, 2025

Stack from ghstack (oldest at bottom):

Following the previous diff, we can now utilize the entire KV cache to generate more tokens than the maximum prompt length allows.

Differential Revision: [D69073908](https://our.internmc.facebook.com/intern/diff/D69073908/)
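In effect, the number of new tokens the runner may emit is now bounded by the remaining KV-cache capacity rather than by a cap tied to the prompt length. A minimal C++ sketch of that budgeting logic, with illustrative names (`GenerationBudget`, `max_seq_len_`) that are assumptions for this example and not the actual ExecuTorch runner API:

```cpp
#include <cstdint>
#include <stdexcept>

// Illustrative sketch only: generation is budgeted against the whole
// KV cache (max_seq_len), not against a limit derived from prompt length.
class GenerationBudget {
 public:
  explicit GenerationBudget(int32_t max_seq_len) : max_seq_len_(max_seq_len) {}

  // Tokens we may still generate after the prompt is consumed: everything
  // left in the KV cache, independent of how long the prompt was.
  int32_t max_new_tokens(int32_t num_prompt_tokens) const {
    if (num_prompt_tokens >= max_seq_len_) {
      throw std::runtime_error("prompt does not fit in the KV cache");
    }
    return max_seq_len_ - num_prompt_tokens;
  }

 private:
  int32_t max_seq_len_;  // total KV-cache capacity in tokens
};
```

Under this scheme, a cache of 2048 slots and a 100-token prompt would leave up to 1948 tokens of generation headroom, regardless of any separate prompt-length limit.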

pytorch-bot bot commented Mar 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/9594

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit f4f089e with merge base 644b7dd:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

kimishpatel added a commit that referenced this pull request Mar 25, 2025
[Executorch][llama] Change runner to decouple prompt length from sequence length

Differential Revision: [D69073908](https://our.internmc.facebook.com/intern/diff/D69073908/)

ghstack-source-id: 273982703
Pull Request resolved: #9594
@facebook-github-bot added the CLA Signed label Mar 25, 2025
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D69073908

@kimishpatel added the release notes: examples label Mar 25, 2025
Update on "[Executorch][llama] Change runner to decouple prompt length from sequence length"

Differential Revision: [D69073908](https://our.internmc.facebook.com/intern/diff/D69073908/)

[ghstack-poisoned]
kimishpatel added a commit that referenced this pull request Mar 25, 2025
[Executorch][llama] Change runner to decouple prompt length from sequence length

Pull Request resolved: #9594

Differential Revision: [D69073908](https://our.internmc.facebook.com/intern/diff/D69073908/)
ghstack-source-id: 274018812
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D69073908

@facebook-github-bot merged commit dcfa538 into gh/kimishpatel/162/base Mar 26, 2025
81 of 83 checks passed
@facebook-github-bot deleted the gh/kimishpatel/162/head branch March 26, 2025 16:29
Labels

CLA Signed · fb-exported · release notes: examples
Projects: None yet
3 participants