WSI Process CLI Not Working #18
If you are able to provide any help with my use of wsiprocess it would be appreciated. I haven't been able to get any classified patches to render. I'm using the generic TIFF image from OpenSlide (CMU-1) and have generated annotations with ASAP (attached). Below are the config and methods from wsiprocess I'm using. I do get "foreground" patches (although I've tried to block them).
http://openslide.cs.cmu.edu/download/openslide-testdata/Generic-TIFF/CMU-1.tiff
slide = wp.slide(slide_path)
annotation.make_masks(slide, rule)
Your project is very well structured and I'd like to figure out how to use it. Thanks, |
@bnapora
This is absolutely a bug, and I fixed it. For now, please update the source code and install wsiprocess by
Actually, I changed the behavior of foreground-class extraction when used from the command line. For my research, I really needed to get patches from the foreground that exclude the benign area.
I think the benign or malignant area of your annotation is very small and does not meet the on_annotation or on_foreground conditions. "-et" or "--export_thumbs" from the command line is very useful for visualizing the area targeted for extraction. Can you try with on_foreground=0.000001, on_annotation=0.000001? Sincerely, |
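[Editor's note] For reference, a minimal Python sketch of the flow discussed in this thread, built from the calls already quoted here (wp.slide, wp.rule, annotation.make_masks, patcher.get_patch_parallel). The wp.annotation constructor, the wp.patcher keyword arguments, and the annotation.classes attribute are assumptions on my part; verify them against the wsiprocess documentation.

import wsiprocess as wp

# Placeholder paths for the CMU-1 slide and its ASAP annotation file.
slide = wp.slide("CMU-1.tiff")
annotation = wp.annotation("CMU-1_annotation.xml")
rule = wp.rule("rule.json")  # optional; omit if you have no rule file
annotation.make_masks(slide, rule)
# Assumption: on_foreground / on_annotation are patcher arguments; the tiny
# thresholds let even very small annotated regions qualify for extraction.
patcher = wp.patcher(slide, "classification", annotation,
                     on_foreground=0.000001, on_annotation=0.000001)
patcher.get_patch_parallel(annotation.classes)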
Takumi,
Thanks very much for the help. The fix worked for running wsiprocess from the CLI. I was able to extract patches and also convert to COCO style (which is my goal). I am able to block "foreground" patches by not including that class in the patcher.get_patch_parallel(classes) call. Also, I was able to get my class annotations ('benign') to extract by setting on_foreground and on_annotation to very low values.
I do seem to be getting a new issue now (not sure if it's related to the latest change). When I attempt to add a rule, the process hangs and I have to shut it down, and I can find no error message to indicate the cause. I am using a conda environment, but install everything in the environment with pip (this has worked fine for me). The hang occurs when I run:
annotation.make_masks(slide, rule)
I will experiment more with a rule file and see if I can get it to load via the CLI. Thanks again, Brian |
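[Editor's note] For anyone else stuck at the rule step: the traceback at the bottom of this issue shows that wp.rule() reads the rule from a JSON file. Below is a sketch that writes one; the includes/excludes schema and the class names are assumptions here, so check them against the wsiprocess README before relying on them.

import json

# Illustrative rule: extract "benign" patches while excluding regions that
# overlap "malignant" annotations. The schema is assumed, not confirmed.
rule = {
    "benign": {
        "includes": ["benign"],
        "excludes": ["malignant"]
    }
}

with open("rule.json", "w") as f:
    json.dump(rule, f, indent=4)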
Takumi,
I also wanted to find out if there is a trick to getting COCO output for "dot" style annotations in ASAP. I attempted a COCO conversion on an annotation file with only "dots" and didn't get any "benign" patches. Also, is there a way to block "foreground" patches and JSON from a COCO style output using the CLI? Brian |
@bnapora I'm glad that you could get what you wanted!
It might be a problem around pyvips. Can you give me more information about your environment?
I did not implement an extraction function for dot annotations because I think there is no dot-style annotation in the COCO format. What kind of output do you want, for example? Dot annotations are popular in digital pathology (like Mitos-Atypia), so I'd like to know what the style should be.
I hadn't noticed that COCO style outputs include the foreground class. I added "-ef" or "--extract_foreground" to the CLI. Can you try with that argument? Thanks, Takumi |
Regarding dot annotations, we were thinking of an "artificial" BBox automatically generated around the dot. For example, a 60 x 60 BBox around the location of the dot would be the "anchor", and the patch would be "padded" 10-20 pixels around the anchor/BBox. Does that make sense? I haven't tried the "-ef" flag, but will test it. Brian |
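[Editor's note] As an illustration of the idea above (not part of wsiprocess), here is a rough sketch that turns dot coordinates into padded bounding boxes and emits them as standard COCO-style annotation entries. The dot coordinates, box size, padding, and slide dimensions are placeholders.

import json

# Placeholder dot annotations (x, y) in level-0 slide coordinates.
dots = [(1050, 2200), (3400, 1800)]
bbox_size = 60   # the "anchor" box around each dot
padding = 20     # extra pixels of context around the anchor

annotations = []
for i, (x, y) in enumerate(dots, start=1):
    half = bbox_size // 2 + padding
    side = 2 * half
    annotations.append({
        "id": i,
        "image_id": 1,
        "category_id": 1,
        "bbox": [x - half, y - half, side, side],  # COCO bbox: [x, y, width, height]
        "area": side * side,
        "iscrowd": 0,
    })

coco = {
    "images": [{"id": 1, "file_name": "CMU-1.tiff", "width": 0, "height": 0}],  # fill in real dimensions
    "annotations": annotations,
    "categories": [{"id": 1, "name": "benign"}],
}
with open("dots_as_coco.json", "w") as f:
    json.dump(coco, f, indent=2)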
@bnapora Do you mean patches like sample 1? If you want to classify patches which only show a single cell (or something similar), I think this is enough. (But you need additional cell localization methods to get the coordinates...)

import pyvips

coordinates = [[100, 200], [200, 300]]
bbox_width = 100
bbox_height = 100
slide = pyvips.Image.new_from_file(slidepath)  # slidepath: path to your WSI file
for (x, y) in coordinates:
    # Crop a bbox_width x bbox_height region centered on each coordinate and save it.
    left = int(x - bbox_width / 2)
    top = int(y - bbox_height / 2)
    slide.crop(left, top, bbox_width, bbox_height).jpegsave(f"{x}_{y}.jpg")

How about patches like sample 2? If you want to detect cells in patches, I think this is okay, and I implemented it. Can you try the latest code with the command below?

wsiprocess classification xxx.tiff xxx.xml -dw 100 -dh 150
# or
wsiprocess detection xxx.tiff xxx.xml -dw 100 -dh 150

If you want to do some segmentation tasks (sample 3), I need to think more about contour detection. Are these the answers you were hoping for? Takumi |
Takumi, This is fantastic. Latest testing and comments are:
1.) Extract Foreground Switch in COCO Output (-ef) - tested, and this works.
Comments on the types of dot-annotation patch generation and annotation:
b.) Sample 2 - this is the model you just implemented. The classified-patch generating routine works accurately to identify patches with a dot annotation somewhere in the region. Below is an example of a tiny dataset imported into a PyTorch/fastai DataBlock:
c.) Sample 3 (Segmentation) - I hadn't even gotten this far in my thinking, but it would be a fantastic addition. An option to generate a patch for each identified structure surrounding a dot annotation (or at the center of a circle/square annotation) would be incredibly powerful. Below is an example I mocked up on a PD-L1 IHC:
Your work on this tool is very good. I noticed you had started work on an alternative to ASAP called WSIDissector. Have you ever taken a look at SlideRunner (https://github.com/DeepPathology/SlideRunner)? It's a great tool made by some folks at a lab in Germany. They even have a collaboration tool for enabling multiple expert annotators to contribute to a project (called Exact). Its annotation output is stored in a SQLite db and is very easy to access and manage. How difficult do you think it would be to import annotations from SlideRunner? Brian |
@bnapora Thank you for the comments!
a.) I opened an issue for cropping the bounding boxes.
c.) I opened an issue for the segmentation mask. If you have some time to give me suggestions, please have a look at it.
SlideRunner)
WSIDissector)
Takumi |
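[Editor's note] To make the SlideRunner question concrete: importing its annotations would essentially mean reading coordinates out of the SQLite database and re-emitting them in a format wsiprocess already understands (for example ASAP-style XML, or the dot-to-bbox conversion sketched earlier). A rough sketch of the reading half follows; the table and column names are hypothetical and must be checked against the actual SlideRunner schema.

import sqlite3

# NOTE: the table and column names below are hypothetical; inspect the real
# SlideRunner .sqlite file (or its docs) and adjust the query accordingly.
conn = sqlite3.connect("slides_annotations.sqlite")
cursor = conn.execute(
    "SELECT annotation_id, class_name, coord_x, coord_y "
    "FROM annotation_coordinates"
)

annotations = {}
for annotation_id, class_name, x, y in cursor:
    # Group coordinates per annotation; a single (x, y) pair is a dot annotation.
    annotations.setdefault((annotation_id, class_name), []).append((x, y))
conn.close()

for (annotation_id, class_name), coords in annotations.items():
    print(annotation_id, class_name, coords)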
CLI works now.
Hi... I am attempting to use wsiprocess from the command line but can't get it working. It works (without OpenSlide) in a script, but from the CLI I get an error. Below is an outline of the issue. Any help you can provide would be appreciated.
Instructions:
wsiprocess [your method] xxx.tiff xxx.xml
My CLI command:
wsiprocess detection ./sample/CMU-1.tif ./sample/CMU-1_detection.xml -pw 256 -ph 256 -ow 1 -oh 1
Error:
Traceback (most recent call last):
File "/home/bnapora/miniconda3/envs/wsiprocess/bin/wsiprocess", line 8, in
sys.exit(main())
File "/home/bnapora/miniconda3/envs/wsiprocess/lib/python3.6/site-packages/wsiprocess/cli.py", line 186, in main
rule = wp.rule(args.rule) if hasattr(args, "rule") else False
File "/home/bnapora/miniconda3/envs/wsiprocess/lib/python3.6/site-packages/wsiprocess/rule.py", line 53, in init
with open(path, "r") as f:
TypeError: expected str, bytes or os.PathLike object, not NoneType
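[Editor's note] For context on the traceback: cli.py guards on hasattr(args, "rule"), which is True even when no rule file was given and args.rule is None, so wp.rule(None) ends up calling open(None). The maintainer fixed this upstream; the sketch below only illustrates the kind of guard that avoids it and is not the actual patch.

import wsiprocess as wp

def load_rule(args):
    # Only build a rule when an actual path was supplied; otherwise fall back
    # to False, which is what cli.py intends for the no-rule case.
    rule_path = getattr(args, "rule", None)
    return wp.rule(rule_path) if rule_path else False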