chore: add cli animate arguments (#69)
Useful both for CLI usage and for content creation (e.g. a Colab notebook with examples).
eloy-encord authored Apr 30, 2024
1 parent a7d5de7 commit 2390b5f
Showing 3 changed files with 36 additions and 10 deletions.
16 changes: 12 additions & 4 deletions README.md
@@ -63,12 +63,12 @@ You can easily benchmark different models and datasets against each other. Here
### Embeddings Generation

To build embeddings, run the CLI command `tti-eval build`.
This commands allows you to interactively select the model and dataset combinations on which to build the embeddings.
This command allows you to interactively select the model and dataset combinations on which to build the embeddings.

Alternatively, you can choose known (model, dataset) pairs using the `--model-dataset` option. For example:

```
tti-eval build --model-dataset clip/plants
tti-eval build --model-dataset clip/Alzheimer-MRI --model-dataset bioclip/Alzheimer-MRI
```

### Model Evaluation
@@ -79,16 +79,24 @@ This command enables interactive selection of model and dataset combinations for
Alternatively, you can specify known (model, dataset) pairs using the `--model-dataset` option. For example:

```
tti-eval evaluate --model-dataset clip/plants
tti-eval evaluate --model-dataset clip/Alzheimer-MRI --model-dataset bioclip/Alzheimer-MRI
```

### Embeddings Animation

To create 2D animations of the embeddings, use the CLI command `tti-eval animate`.
This command allows you to visualise the reduction of embeddings from two models on the same dataset.

You have the option to interactively select two models and a dataset for visualization.
Alternatively, you can specify the models and dataset as arguments. For example:

```
tti-eval animate clip bioclip Alzheimer-MRI
```

The animations will be saved at the location specified by the environment variable `TTI_EVAL_OUTPUT_PATH`.
By default, this path corresponds to the repository directory.
To interactively explore the animation in a temporary session, use the `--interactive` flag.
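
For example, assuming a Unix shell and the models and dataset from the command above, the flag and the environment variable can be combined as follows (`./animations` is just an illustrative output path):

```
# Explore the reduced embeddings interactively instead of saving an animation
tti-eval animate clip bioclip Alzheimer-MRI --interactive

# Save the animation to a custom output directory
TTI_EVAL_OUTPUT_PATH=./animations tti-eval animate clip bioclip Alzheimer-MRI
```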

<div align="center">
<img width="600" src="https://storage.googleapis.com/docs-media.encord.com/static/img/text-to-image-eval/embeddings.gif">
@@ -377,7 +385,7 @@ To contribute by adding model sources, follow these steps:

## Known Issues

1. `autofaiss`: The project depends on the [autofaiss][autofaiss] library which can give some trouble on windows. Please reach out or raise an issue with as many system and version details as possible if you encounter it.
1. `autofaiss`: The project depends on the [autofaiss][autofaiss] library which can give some trouble on Windows. Please reach out or raise an issue with as many system and version details as possible if you encounter it.

[Falah/Alzheimer_MRI]: https://huggingface.co/datasets/Falah/Alzheimer_MRI
[trpakov/chest-xray-classification]: https://huggingface.co/datasets/trpakov/chest-xray-classification
26 changes: 21 additions & 5 deletions tti_eval/cli/main.py
@@ -1,7 +1,8 @@
from typing import Annotated, Optional

import matplotlib.pyplot as plt
from typer import Option, Typer
import typer
from typer import Argument, Option, Typer

from tti_eval.common import Split
from tti_eval.compute import compute_embeddings_from_definition
@@ -114,18 +115,33 @@ def evaluate_embeddings(
""",
)
def animate_embeddings(
    from_model: Annotated[Optional[str], Argument(help="Title of the model in the left side of the animation.")] = None,
    to_model: Annotated[Optional[str], Argument(help="Title of the model in the right side of the animation.")] = None,
    dataset: Annotated[Optional[str], Argument(help="Title of the dataset where the embeddings were computed.")] = None,
    interactive: Annotated[bool, Option(help="Interactive plot instead of animation.")] = False,
    reduction: Annotated[str, Option(help="Reduction type [pca, tsne, umap (default)].")] = "umap",
):
    from tti_eval.plotting.animation import build_animation, save_animation_to_file
    from tti_eval.plotting.animation import EmbeddingDefinition, build_animation, save_animation_to_file

    defs = select_existing_embedding_definitions(by_dataset=True, count=2)
    res = build_animation(defs[0], defs[1], interactive=interactive, reduction=reduction)
    all_none_input_args = from_model is None and to_model is None and dataset is None
    all_str_input_args = from_model is not None and to_model is not None and dataset is not None

    if not all_str_input_args and not all_none_input_args:
        typer.echo("Some arguments were provided. Please either provide all arguments or ignore them entirely.")
        raise typer.Abort()

    if all_none_input_args:
        defs = select_existing_embedding_definitions(by_dataset=True, count=2)
        from_def, to_def = defs[0], defs[1]
    else:
        from_def = EmbeddingDefinition(model=from_model, dataset=dataset)
        to_def = EmbeddingDefinition(model=to_model, dataset=dataset)

    res = build_animation(from_def, to_def, interactive=interactive, reduction=reduction)
    if res is None:
        plt.show()
    else:
        save_animation_to_file(res, *defs)
        save_animation_to_file(res, from_def, to_def)


@cli.command("list", help="List models and datasets. By default, only cached pairs are listed.")
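
The new `animate_embeddings` signature accepts either all three positional arguments or none at all. As a rough sketch of the resulting CLI behaviour (hypothetical invocations, assuming the `tti-eval` entry point and the embeddings from the README examples):

```
# All three arguments: animate clip vs. bioclip embeddings on Alzheimer-MRI
tti-eval animate clip bioclip Alzheimer-MRI

# No arguments: fall back to the interactive selection prompt
tti-eval animate

# A partial set of arguments aborts with a message asking for all or none
tti-eval animate clip
```
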
4 changes: 3 additions & 1 deletion tti_eval/plotting/reduction.py
@@ -35,7 +35,9 @@ def get_reduction(
            return reduction

        elif not embedding_def.embedding_path(split).is_file():
            raise ValueError(f"{embedding_def} does not have embeddings stored ({embedding_def.embedding_path(split)})")
            raise ValueError(
                f"{repr(embedding_def)} does not have embeddings stored ({embedding_def.embedding_path(split)})"
            )

        image_embeddings: EmbeddingArray = np.load(embedding_def.embedding_path(split))["image_embeddings"]
        reduction = cls.reduce(image_embeddings)
