
Cache form across ML transform types #678

Open
wants to merge 4 commits into base: 2.x
Conversation

@ohltyler (Member) commented Mar 13, 2025

Description

Adds caches for the input/output forms for ML processors, so that values are cached across different transform types. The idea is that users can easily toggle between types without losing their configuration. Note that this is only cached while the component is loaded on the page; navigating away or refreshing the page will clear the cached state.
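
For illustration, the cached state can be thought of as a small map keyed by transform type. This is a hypothetical sketch; the transform type names below are assumptions, not the plugin's actual enums:

```typescript
// Hypothetical sketch of the cached state; actual transform type names in the
// plugin may differ. The cache lives in component state, so navigating away or
// refreshing the page discards it.
type TransformType = 'string' | 'template' | 'expression';

// Maps each transform type to the form value the user last entered for it.
type TransformCache = Partial<Record<TransformType, string>>;
```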

Implementation-wise, this adds a utility function shared across model_inputs and model_outputs that updates the cache and pre-populates values from it as users change the underlying transform types within the input/output maps.
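
A minimal sketch of what such a shared utility could look like, reusing the hypothetical types above; the function and parameter names are illustrative, not the PR's actual API:

```typescript
// Illustrative only: stash the outgoing type's value, then pre-populate the
// form with any value previously cached for the incoming type.
function onTransformTypeChange(
  cache: TransformCache,
  currentType: TransformType,
  currentValue: string,
  nextType: TransformType
): { cache: TransformCache; prepopulatedValue: string } {
  // Remember the value for the type the user is leaving.
  const updatedCache: TransformCache = { ...cache, [currentType]: currentValue };
  // Restore a previously entered value for the new type, or start empty.
  return { cache: updatedCache, prepopulatedValue: updatedCache[nextType] ?? '' };
}
```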

Demo video:

screen-capture.30.webm

Check List

  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

Signed-off-by: Tyler Ohlsen <ohltyler@amazon.com>
Signed-off-by: Tyler Ohlsen <ohltyler@amazon.com>
Signed-off-by: Tyler Ohlsen <ohltyler@amazon.com>
@ohltyler changed the title from "Persist transform values across transform types" to "Cache form across ML transform types" on Mar 14, 2025
Signed-off-by: Tyler Ohlsen <ohltyler@amazon.com>
@ohltyler marked this pull request as ready for review on March 14, 2025 18:00
Labels
backport main, enhancement