`docs/source/user-guide.md` (+86 −10)
```diff
@@ -224,6 +224,7 @@ explainer = xai.Explainer(
     model,
     task=xai.Task.CLASSIFICATION,
     preprocess_fn=preprocess_fn,
+    explain_mode=ExplainMode.WHITEBOX,
 )

 # Generate and process saliency maps (as many as required, sequentially)
```
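These snippets rely on a `preprocess_fn` defined earlier in the guide. As a rough illustration only (the input size and layout below are assumptions, not the guide's actual code), such a callable might look like:

```python
import cv2
import numpy as np


def preprocess_fn(image: np.ndarray) -> np.ndarray:
    """Resize to an assumed 224x224 model input and add a batch dimension."""
    resized = cv2.resize(image, (224, 224))
    return np.expand_dims(resized, 0).astype(np.float32)
```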
```diff
@@ -237,7 +238,6 @@ voc_labels = [
 # Run explanation
 explanation = explainer(
     image,
-    explain_mode=ExplainMode.WHITEBOX,
     # target_layer="last_conv_node_name",  # target_layer - node after which the XAI branch will be inserted, usually the last convolutional layer in the backbone
     embed_scaling=True,  # True by default. If set to True, the saliency map scale (0 ~ 255) operation is embedded in the model
     explain_method=xai.Method.RECIPROCAM,  # ReciproCAM is the default XAI method for CNNs
```
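After the call returns, the explanation object holds one saliency map per requested target. A minimal sketch of inspecting and saving the result (assuming the `saliency_map` container is a dict keyed by target class index; check the API reference for the exact interface):

```python
# Each requested target maps to one saliency map array.
for target_idx, sal_map in explanation.saliency_map.items():
    print(target_idx, sal_map.shape)

explanation.save("saliency_maps/")  # writes one image file per saliency map
```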
```diff
@@ -288,6 +288,7 @@ explainer = xai.Explainer(
     model,
     task=xai.Task.CLASSIFICATION,
     preprocess_fn=preprocess_fn,
+    explain_mode=ExplainMode.BLACKBOX,
 )

 # Generate and process saliency maps (as many as required, sequentially)
```
```diff
     targets=[7, 11],  # ['cat', 'dog'] also possible as target classes to explain
 )
```
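In black-box mode, the model is treated as a closed function and saliency is estimated by perturbing the input and watching how the prediction scores change. As a toy illustration of the idea only (a simplified RISE-style estimator, not this library's implementation):

```python
import numpy as np


def toy_rise(model_fn, image: np.ndarray, class_idx: int,
             num_masks: int = 1000, keep_prob: float = 0.5) -> np.ndarray:
    """Average random binary masks, weighted by the model score on each masked input."""
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    for _ in range(num_masks):
        mask = (np.random.rand(h, w) < keep_prob).astype(np.float32)
        score = float(model_fn(image * mask[..., None])[class_idx])
        saliency += score * mask
    return saliency / num_masks
```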
```diff
@@ -616,6 +621,7 @@ explainer = xai.Explainer(
     model,
     task=xai.Task.CLASSIFICATION,
     preprocess_fn=preprocess_fn,
+    explain_mode=ExplainMode.WHITEBOX,
 )

 voc_labels = [
```
```diff
@@ -638,7 +644,6 @@ scores_dict = {i: score for i, score in zip(result_idxs, result_scores)}
 # Run explanation
 explanation = explainer(
     image,
-    explain_mode=ExplainMode.WHITEBOX,
     label_names=voc_labels,
     targets=result_idxs,  # target classes to explain
 )
```
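The `result_idxs` and `result_scores` used here come from an inference step earlier in the guide. A plausible sketch of producing them (the sigmoid activation and 0.5 threshold are assumptions, not the guide's exact code):

```python
import numpy as np

# Placeholder logits from a classification inference over 20 VOC classes.
logits = np.random.randn(20)
probs = 1 / (1 + np.exp(-logits))  # sigmoid, assuming multi-label VOC-style output
result_idxs = np.where(probs > 0.5)[0].tolist()  # assumed 0.5 confidence threshold
result_scores = probs[result_idxs].tolist()
scores_dict = {i: score for i, score in zip(result_idxs, result_scores)}
```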
```diff
@@ -657,6 +662,77 @@ explanation.save(
 )  # image_name_aeroplane_conf_0.85.jpg
```
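Because a constructed explainer is reusable, the pattern extends to several images processed sequentially, as the comments above suggest. A minimal sketch (file paths are placeholders; the other arguments reuse values shown earlier):

```python
import cv2

for path in ["image_1.jpg", "image_2.jpg"]:  # placeholder paths
    image = cv2.imread(path)
    explanation = explainer(image, label_names=voc_labels, targets=[7, 11])
    explanation.save("saliency_maps/")
```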
## Measure quality metrics of saliency maps

To compare different saliency maps, you can use the implemented quality metrics: Pointing Game, Insertion-Deletion AUC, and ADCC.

- **ADCC (Average Drop-Coherence-Complexity)** ([paper](https://arxiv.org/abs/2104.10252)/[impl](https://github.com/aimagelab/ADCC/)) - averages three submetrics:
  - **Average Drop** - the percentage drop in confidence when the model sees only the explanation map (image masked with the saliency map) instead of the full image.
  - **Coherence** - the coherency between the saliency map on the input image and the saliency map on the explanation map (image masked with the saliency map); the two maps should be similar. Requires generating an extra explanation (can be time-consuming for black-box methods).
  - **Complexity** - the L1 norm of the saliency map (average value per pixel). Fewer important pixels -> lower complexity -> a better saliency map.
- **Insertion-Deletion AUC** ([paper](https://arxiv.org/abs/1806.07421)) - measures the AUC of the model-confidence curve as the most important pixels are sequentially inserted or deleted. Time-consuming: requires 60 model inferences (30 steps each for the insertion and the deletion process).
- **Pointing Game** ([paper](https://arxiv.org/abs/1608.00507)/[impl](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/quantus/metrics/localisation/pointing_game.py)) - returns True if the most important pixel of the saliency map falls inside the object's ground-truth bounding box. Requires ground-truth annotation, so it is most convenient on public datasets (COCO, VOC, ILSVRC) rather than on individual images (see [accuracy_tests](../../tests/perf/test_accuracy.py) for examples).

```python
import cv2
import numpy as np
import openvino.runtime as ov
from typing import Mapping

import openvino_xai as xai
from openvino_xai.explainer import ExplainMode
from openvino_xai.metrics import ADCC, InsertionDeletionAUC
```
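Continuing from the imports above, a sketch of wiring the metrics up might look as follows. The constructor and `evaluate` signatures shown are assumptions inferred from the metric descriptions (the metrics rerun inference, so they take the model plus pre/post-processing callables, and they consume explanations together with their source images); check the API reference for the exact interface.

```python
def preprocess_fn(x: np.ndarray) -> np.ndarray:
    """Assumed helper: resize to the model input and add a batch axis."""
    return np.expand_dims(cv2.resize(x, (224, 224)), 0).astype(np.float32)


def postprocess_fn(output: Mapping) -> np.ndarray:
    """Assumed helper: map raw model output to per-class scores."""
    return output["logits"]  # placeholder output key


model = ov.Core().read_model("model.xml")  # placeholder model path

explainer = xai.Explainer(
    model,
    task=xai.Task.CLASSIFICATION,
    preprocess_fn=preprocess_fn,
    explain_mode=ExplainMode.WHITEBOX,
)

image = cv2.imread("image.jpg")  # placeholder image path
explanation = explainer(image, targets=[7])

# Assumed constructors: the metrics rerun inference, so they take the model
# and the pre/post-processing callables; ADCC also needs the explainer to
# generate the extra explanation for its Coherence term.
auc = InsertionDeletionAUC(model, preprocess_fn, postprocess_fn)
adcc = ADCC(model, preprocess_fn, postprocess_fn, explainer)

# Assumed evaluate() interface: lists of explanations and matching images.
print(auc.evaluate([explanation], [image], steps=30))
print(adcc.evaluate([explanation], [image]))
```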