
maira2 support #1145

Merged · 2 commits · Feb 13, 2025

Conversation

@eaidova (Collaborator) commented Feb 5, 2025

What does this PR do?

Adds MAIRA-2 support to `OVModelForVisualCausalLM`. Example usage:
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor

from optimum.intel.openvino import OVModelForVisualCausalLM


def get_sample_data() -> dict[str, Image.Image | str]:
    """
    Download chest X-rays from IU-Xray, which we didn't train MAIRA-2 on. License is CC.
    We modified this function from the Rad-DINO repository on Huggingface.
    """
    frontal_image_url = "https://openi.nlm.nih.gov/imgs/512/145/145/CXR145_IM-0290-1001.png"
    lateral_image_url = "https://openi.nlm.nih.gov/imgs/512/145/145/CXR145_IM-0290-2001.png"

    def download_and_open(url: str) -> Image.Image:
        response = requests.get(url, headers={"User-Agent": "MAIRA-2"}, stream=True)
        return Image.open(response.raw)

    frontal_image = download_and_open(frontal_image_url)
    lateral_image = download_and_open(lateral_image_url)

    sample_data = {
        "frontal": frontal_image,
        "lateral": lateral_image,
        "indication": "Dyspnea.",
        "comparison": "None.",
        "technique": "PA and lateral views of the chest.",
        "phrase": "Pleural effusion.",  # For the phrase grounding example. This patient has pleural effusion.
    }
    return sample_data


sample_data = get_sample_data()

model_id = "microsoft/maira-2"
model = OVModelForVisualCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

processed_inputs = processor.format_and_preprocess_reporting_input(
    current_frontal=sample_data["frontal"],
    current_lateral=sample_data["lateral"],
    prior_frontal=None,  # Our example has no prior
    indication=sample_data["indication"],
    technique=sample_data["technique"],
    comparison=sample_data["comparison"],
    prior_report=None,  # Our example has no prior
    return_tensors="pt",
    get_grounding=False,  # For this example we generate a non-grounded report
)

with torch.no_grad():
    output_decoding = model.generate(
        **processed_inputs,
        max_new_tokens=300,  # Set to 450 for grounded reporting
        use_cache=True,
    )
prompt_length = processed_inputs["input_ids"].shape[-1]
decoded_text = processor.decode(output_decoding[0][prompt_length:], skip_special_tokens=True)
decoded_text = decoded_text.lstrip()  # Findings generation completions have a single leading space
prediction = processor.convert_output_to_plaintext_or_grounded_sequence(decoded_text)
print("Parsed prediction:", prediction)
```

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev commented

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@echarlaix (Collaborator) left a comment

LGTM, thanks @eaidova! Could you add a test before we can merge?

@eaidova (Collaborator, Author) commented Feb 11, 2025

> LGTM, thanks @eaidova! Could you add a test before we can merge?

@echarlaix I met some issues related to the model configuration and compatibility with the new transformers release. Let me resolve them (at least for a small testing model), and I hope to add tests and docs.

@eaidova (Collaborator, Author) commented Feb 11, 2025

@echarlaix I added a test and added the model to the supported list. Could you please take a look one more time?
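For readers unfamiliar with the test suite, registering a new architecture usually amounts to mapping the architecture name to a tiny checkpoint and listing it among the tested architectures. A hypothetical illustration only; the actual optimum-intel file layout, class and variable names may differ:

```python
import unittest

# Hypothetical architecture -> tiny-checkpoint mapping; the tiny model id is the
# one referenced later in this thread, everything else is illustrative.
MODEL_NAMES = {
    "maira2": "katuni4ka/tiny-random-maira2",
}


class OVModelForVisualCausalLMIntegrationTest(unittest.TestCase):
    # Illustrative list of architectures exercised by the visual causal LM tests.
    SUPPORTED_ARCHITECTURES = ["llava", "maira2"]

    def test_maira2_is_registered(self):
        self.assertIn("maira2", self.SUPPORTED_ARCHITECTURES)
        self.assertIn("maira2", MODEL_NAMES)
```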

@eaidova (Collaborator, Author) commented Feb 12, 2025

@echarlaix could you please help rerun the pre-commit checks? I updated the model remote code for compatibility with Python 3.9.
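For context, the usual Python 3.9 incompatibility in annotations like `dict[str, Image.Image | str]` above is PEP 604 union syntax (`X | Y`), which is only evaluable at runtime from Python 3.10. A minimal sketch of the kind of rewrite involved (the actual remote-code diff is not shown in this thread):

```python
from typing import Dict, Union

from PIL import Image

# Python 3.10+ only: evaluating `Image.Image | str` raises TypeError on
# Python 3.9, where PEP 604 unions are unavailable.
# def get_sample_data() -> dict[str, Image.Image | str]: ...

# Python 3.9-compatible spelling using typing generics:
def get_sample_data() -> Dict[str, Union[Image.Image, str]]:
    ...
```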

@echarlaix (Collaborator) commented

> @echarlaix could you please help rerun the pre-commit checks? I updated the model remote code for compatibility with Python 3.9.

Done. Looks like one last thing needs to be updated, https://huggingface.co/katuni4ka/tiny-random-maira2/blob/main/processing_maira2.py#L347, to fix:

SyntaxError: non-default argument follows default argument
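For reference, this error comes from Python's rule that a parameter without a default cannot follow one with a default. A minimal reproduction and the usual fix (illustrative only, not the actual `processing_maira2.py` code):

```python
# Raises at parse time:
#   SyntaxError: non-default argument follows default argument
# def format_input(image=None, phrase):
#     ...

# Usual fixes: reorder the parameters, or give the trailing one a default too.
def format_input(phrase, image=None):
    return {"phrase": phrase, "image": image}
```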

@eaidova (Collaborator, Author) commented Feb 13, 2025

> > @echarlaix could you please help rerun the pre-commit checks? I updated the model remote code for compatibility with Python 3.9.
>
> Done. Looks like one last thing needs to be updated, https://huggingface.co/katuni4ka/tiny-random-maira2/blob/main/processing_maira2.py#L347, to fix:
>
> SyntaxError: non-default argument follows default argument

fixed

@echarlaix merged commit 9898189 into huggingface:main on Feb 13, 2025
19 of 22 checks passed