WSI Process CLI Not Working #18
Closed
bnapora opened this issue Oct 9, 2020 · 10 comments

bnapora commented Oct 9, 2020

Hi... I'm attempting to use wsiprocess from the command line but can't get it working. It works (without OpenSlide) in a script, but from the CLI I get an error. Below is an outline of the issue. Any help you can provide would be appreciated.

Instructions:
wsiprocess [your method] xxx.tiff xxx.xml

My CLI command:
wsiprocess detection ./sample/CMU-1.tif ./sample/CMU-1_detection.xml -pw 256 -ph 256 -ow 1 -oh 1

Error:
Traceback (most recent call last):
  File "/home/bnapora/miniconda3/envs/wsiprocess/bin/wsiprocess", line 8, in <module>
    sys.exit(main())
  File "/home/bnapora/miniconda3/envs/wsiprocess/lib/python3.6/site-packages/wsiprocess/cli.py", line 186, in main
    rule = wp.rule(args.rule) if hasattr(args, "rule") else False
  File "/home/bnapora/miniconda3/envs/wsiprocess/lib/python3.6/site-packages/wsiprocess/rule.py", line 53, in __init__
    with open(path, "r") as f:
TypeError: expected str, bytes or os.PathLike object, not NoneType
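
Looking at the traceback, it seems args.rule is None when no rule file is passed, so rule.py ends up calling open(None). I'm guessing a check like the one below would avoid it (just a sketch on my side, not tested against the actual code):

# Hypothetical guard: only build a rule when a rule file path was actually
# supplied, instead of only checking that the "rule" attribute exists on args.
rule = wp.rule(args.rule) if getattr(args, "rule", None) else False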

bnapora commented Oct 10, 2020

If you are able to provide any help with my use of wsiprocess, it would be appreciated. I haven't been able to get any classified patches to render. I'm using the generic TIFF image from OpenSlide (CMU-1) and have generated annotations with ASAP (attached). Below are the config and methods from wsiprocess I'm using. I do get "foreground" patches (although I've tried to block them).

http://openslide.cs.cmu.edu/download/openslide-testdata/Generic-TIFF/CMU-1.tiff

slide = wp.slide(slide_path)
annotation = wp.annotation(annot)
annotation.read_annotation('ASAP')
rule = wp.rule(rule_path)
classes = annotation.classes

annotation.make_masks(slide, rule)
patcher = wp.patcher(slide, "detection", annotation, save_to=path_output,
                     patch_height=512, patch_width=512, finished_sample=False,
                     on_foreground=0.02, on_annotation=0.8,
                     start_sample=False, no_patches=False)
patcher.get_patch_parallel(classes)

[screenshots attached]

Your project is very well structured and I'd like to figure out how to use it.

Thanks,
Brian

tand826 commented Oct 15, 2020

@bnapora
Thank you for the question Brian, and I'm sorry to have kept you waiting.

wsiprocess detection ./sample/CMU-1.tif ./sample/CMU-1_detection.xml -pw 256 -ph 256 -ow 1 -oh 1

This is definitely a bug, and I have fixed it. For now, please pull the updated source code and install wsiprocess with pip install -e .
Without a rule file, wsiprocess will extract patches for all the classes you have annotated. If you want to exclude some classes from extraction, make a rule file like sample/rule.json.
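
Something like this, for example (please see sample/rule.json for the exact format; the key names below are only a sketch, and the class names are just the ones from your annotation):

# Sketch only: map each class to extract to the classes to include or exclude.
# Check sample/rule.json in the repository for the authoritative keys.
import json

rule = {
    "benign": {"includes": ["benign"], "excludes": ["malignant"]},
    "malignant": {"includes": ["malignant"], "excludes": ["benign"]},
}
with open("rule.json", "w") as f:
    json.dump(rule, f, indent=4)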

I do get "foreground" patches (although I've tried to block them.)

Actually, I changed the behavior of foreground-class extraction when used from the command line. For my research, I really needed to get patches from the foreground excluding the benign area.
If you want to block the foreground area, run from a Python script and do it like below.

patcher.get_patch_parallel(["benign", "malignant"])  # exclude foreground here.

I haven't been able to get any classified patches to render.

I think the benign or malignant area of your annotation is very small and does not meet the conditions of on_annotation or on_foreground. "-et" or "--export_thumbs" from the command line would be very useful for visualizing the area targeted for extraction. Can you try with on_foreground=0.000001, on_annotation=0.000001?
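
For example, with the same patcher call as in your script but with the thresholds lowered (the values are only for debugging, so that even tiny annotated regions qualify):

# Same call as the script above, but with very permissive thresholds.
patcher = wp.patcher(slide, "detection", annotation, save_to=path_output,
                     patch_height=512, patch_width=512,
                     on_foreground=0.000001, on_annotation=0.000001)
patcher.get_patch_parallel(classes)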

P.S.
I'm assuming you are using Miniconda as your Python environment manager and installed wsiprocess with pip.
If you have trouble around conda and pip, feel free to open an issue.

Sincerely,
Takumi

bnapora commented Oct 15, 2020

Takumi,

Thanks very much for the help. The fix worked for running wsiprocess from the CLI. I was able to extract patches and also convert to COCO style (which is my goal).

I am able to block "foreground" patches by not including it in the patcher.get_patch_parallel(classes) call.

Also, I was able to get my class annotations ('benign') to extract by setting on_foreground & on_annotation to a very low value.

I do seem to be getting a new issue now (not sure if it's related to the latest change). When I attempt to add a rule, the process hangs and I have to shut it down. There is no error message to go on. I am using a conda environment, but I install everything in the environment with pip (this has worked OK for me).

The hang occurs when running annotation.make_masks(slide, rule) - there is no error output and I have to kill the process.

I will experiment more with a rule file and see if I can get it to load via CLI.

Thanks again,

Brian

bnapora commented Oct 15, 2020

Takumi,

I also wanted to find out whether there is a trick to getting COCO output for "dot" style annotations from ASAP. I attempted a COCO conversion on an annotation file with only "dots" and didn't get any "benign" patches.

Also, is there a way to block "foreground" patches and their JSON entries from a COCO style output using the CLI?

Brian

tand826 commented Oct 16, 2020

@bnapora
Brian,

I'm glad that you could get what you wanted!

Error occurs when run: annotation.make_masks(slide, rule)

It might be a problem around pyvips. Can you give me more information about your environment? (A quick way to check the versions is sketched after the list.)

  • OS (important)
  • libvips version (important)
  • RAM (not so important, but useful)
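
Not required, but if it is easier, a small snippet like this prints the pyvips and libvips versions (assuming pyvips is installed in the environment):

# Report pyvips and libvips versions; pyvips.version(i) wraps vips_version().
import pyvips
print("pyvips:", pyvips.__version__)
print("libvips:", ".".join(str(pyvips.version(i)) for i in range(3)))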

COCO output for "dot" style annotations in ASAP

I did not implement an extraction function for dot annotations because I think there is no dot-style annotation in the COCO format. For example, what kind of output do you want? Dot annotations are popular in digital pathology (like Mitos-Atypia), so I'd like to know what the style should be.

Also, is there a way to block "foreground" patches and json from a COCO style output using CLI?

I hadn't noticed that COCO style outputs include the foreground class. I added "-ef" / "--extract_foreground" to the CLI. Can you try with that argument?

Thanks,

Takumi

bnapora commented Oct 16, 2020

Regarding dot annotations, we were thinking of an "artificial" bbox automatically generated around the dot. For example, a 60 x 60 bbox around the location of the dot would be the "anchor", and the patch would be "padded" 10-20 pixels around the anchor/bbox. Does that make sense?
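
A rough sketch of what I mean (the helper name is made up, and the numbers are just the example values above):

# Turn a dot annotation into a padded "artificial" bbox centered on the dot.
def dot_to_bbox(x, y, box=60, pad=20):
    half = box // 2 + pad
    side = box + 2 * pad
    return (x - half, y - half, side, side)  # left, top, width, height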

I haven't tried the "-ef" flag, but I will test it.

Brian

@tand826
Copy link
Owner

tand826 commented Oct 16, 2020

@bnapora
Brian,

Do you mean patches like sample 1? If you want to classify patches that show only a single cell (or something similar), I think this is enough. (But you would need additional cell localization methods to get the coordinates...)

import pyvips

# Dot coordinates (x, y) and the bbox size to cut around each dot.
coordinates = [[100, 200], [200, 300]]
bbox_width = 100
bbox_height = 100

slide = pyvips.Image.new_from_file(slidepath)  # slidepath: path to your WSI
for (x, y) in coordinates:
    patch = slide.crop(x - bbox_width // 2, y - bbox_height // 2, bbox_width, bbox_height)
    patch.jpegsave(f"{x}_{y}.jpg")

[sample 1]

How about patches like sample 2? If you want to detect cells in patches, I think this is okay, and I have implemented it. Can you try with the latest code and the command below?

wsiprocess classification xxx.tiff xxx.xml -dw 100 -dh 150
# or
wsiprocess detection xxx.tiff xxx.xml -dw 100 -dh 150

[sample 2]

If you want to do segmentation tasks (sample 3), I need to think more about contour detection.
[sample 3]

Are these the kinds of answers you were hoping for?
Thank you very much for the idea!

Takumi

bnapora commented Oct 16, 2020

Takumi,

This is fantastic. Latest testing and comments are:

1.) Extract foreground switch in COCO output (-ef) - tested, and this works.
2.) Extract dot annotations (wsiprocess detection xxx.tiff xxx.xml -dw 100 -dh 150) - tested, and this worked. I'm now able to generate COCO output with the dot annotations landing somewhere on the patch. Cool!

Comments on the types of dot-annotation patch generation and annotation:
a.) Sample 1 - this is exactly the output we had in mind: centering the patch itself around the dot annotation and, if multiple dots/points exist in a patch region, generating a patch for each point. There is the challenge of cell localization (as you mentioned). I attempted to accomplish this with libvips alone but was unsuccessful. If there is a way to implement it that you can see, it would be very helpful - possibly with an additional switch that lets the user indicate whether the desired output is a single patch for each annotation or a single patch covering multiple annotations.

b.) Sample 2 - this is the model you just implemented. The classified-patch generation routine accurately identifies patches with a dot annotation somewhere in the region. Below is an example of a tiny dataset imported into a PyTorch/fastai DataBlock:
[screenshot attached]

c.) Sample 3 (segmentation) - I hadn't even gotten this far in my thinking, but it would be a fantastic addition. An option to generate a patch for each identified structure surrounding a dot annotation (or at the center of a circle/square annotation) would be incredibly powerful. Below is an example I mocked up on a PD-L1 IHC:
[screenshot attached]

Your work on this tool is very good. I noticed you had started work on an alternative to ASAP...called WSIDissector. Have you ever taken a look at SlideRunner (https://github.com/DeepPathology/SlideRunner)? It's a great tool made by some folks at a lab in Germany. They even have a collaboration tool for enabling multiple expert annotators to contribute to a project (called Exact). SlideRunner's annotation output is stored in a SQLite database and is very easy to access and manage. How difficult do you think it would be to import annotations from SlideRunner?

Brian

tand826 commented Oct 17, 2020

@bnapora
Brian,

Thank you for the comments!

a.) I opened an issue for cropping the bounding boxes.

c.) I opened an issue for the segmentation mask. If you have some time to give me suggestions, please have a look at it.

SlideRunner )
I opened an issue for a SlideRunner parser.

WSIDissector )
WSIDissector is a slide viewer and experimental deep learning application based on Leaflet.js and PyTorch, and it is currently available only to myself. I stopped developing it because I thought it would be too time-consuming to support this kind of application. It may take some time to publish the repository, or it may never be published...

Takumi

tand826 commented Oct 19, 2020

CLI works now.

tand826 closed this as completed Oct 19, 2020