---
title: Local Interpretability
keywords: fastai
sidebar: home_sidebar
nb_path: "01_local_interpret.ipynb"
---
from misas.fastai_model import Fastai2_model
from PIL import Image
import numpy as np
# The series/eval/plot helpers used below (get_rotation_series, plot_series, ...) come from misas itself

# These stubs only need to exist so that the pickled fastai learner,
# which references them by name, can be loaded.
def label_func(x):
    pass
def acc_seg(input, target):
    pass
def diceComb(input, targs):
    pass
def diceLV(input, targs):
    pass
def diceMY(input, targs):
    pass
def img():
    """
    Opens the sample image as a PIL image
    """
    return Image.open("example/kaggle/images/1-frame014-slice005.png").convert("RGB")
def trueMask():
    """
    Opens the true mask as a PIL image
    """
    return Image.open("example/kaggle/masks/1-frame014-slice005.png").convert("I")
trainedModel = Fastai2_model("chfc-cmi/cmr-seg-tl", "cmr_seg_base")
Define a default color map for the sample image, derived from viridis, and a default color map for the true mask, derived from plasma, with the color for class "0" set to completely transparent. This makes sense if class "0" is the background.
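A minimal sketch of how such colormaps can be built with matplotlib (the helper name `transparent_cmap` is illustrative, not part of misas): sample the base colormap at one color per class and set the alpha of the first entry to zero.

```python
import numpy as np
import matplotlib
from matplotlib.colors import ListedColormap

def transparent_cmap(base_name, n_classes=3):
    """Derive an n-class colormap from a named base colormap,
    making class 0 (the background) fully transparent."""
    base = matplotlib.colormaps[base_name]
    colors = base(np.linspace(0, 1, n_classes))  # n_classes RGBA rows
    colors[0, 3] = 0.0  # alpha of class 0 -> transparent
    return ListedColormap(colors)

image_cmap = transparent_cmap("viridis")  # for the sample image overlay
truth_cmap = transparent_cmap("plasma")   # for the true mask overlay
```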
series = get_rotation_series(img(), trainedModel, truth=trueMask())
plot_series(series, overlay_truth = True)
results = eval_rotation_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'])
plot_eval_series(results)
You can easily generate GIFs by plotting multiple frames:
gif_series(
get_rotation_series(img(),trainedModel,start=0,end=360,step=5),
"example/kaggle/rotation.gif",
duration=500,
param_name="degrees"
)
series = get_crop_series(img(), trainedModel, truth=trueMask(), step=10)
plot_series(series, nrow=3, figsize=(16,15), overlay_truth=True)
results = eval_crop_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'])
plot_eval_series(results)
Cropping the image while comparing to the full original mask might not be desired. In that case it is possible to crop the mask as well. All pixels in the cropped area are set to 0 (commonly the background class). As soon as a class is completely missing from both masks, the Dice score jumps to 1, because not predicting the class is correct in that case.
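This jump is easy to reproduce with a toy example. The following per-component Dice sketch (a hypothetical helper, not the misas implementation) shows the convention of scoring 1 when the class is absent from both masks:

```python
import numpy as np

def dice_by_component(pred, true, component=1):
    """Dice coefficient for one class; 1.0 if the class is
    absent from both prediction and ground truth."""
    p = np.asarray(pred) == component
    t = np.asarray(true) == component
    total = p.sum() + t.sum()
    if total == 0:
        return 1.0  # class missing everywhere -> trivially correct
    return 2.0 * np.sum(p & t) / total

pred = np.array([[0, 1], [0, 0]])
true = np.array([[0, 1], [1, 0]])
dice_by_component(pred, true, component=1)  # 2*1/(1+2) = 0.67

# After cropping both masks so that class 1 disappears entirely,
# the score jumps to 1:
dice_by_component(np.zeros((2, 2)), np.zeros((2, 2)), component=1)  # 1.0
```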
gif_series(
get_crop_series(img(),trainedModel,start=0,end=256,step=5),
"example/kaggle/crop.gif",
duration=500,
param_name="pixels"
)
series = get_brightness_series(img(), trainedModel, truth=trueMask(), start=1/8, end=16)
plot_series(series, nrow=3, figsize=(12,6), overlay_truth = True)
results = eval_bright_series(img(), trueMask(), trainedModel, start=0, end=1.05, components=['bg','LV','MY'])
plot_eval_series(results)
gif_series(
get_brightness_series(img(), trainedModel, step = np.sqrt(2)),
"example/kaggle/brightness.gif",
duration=500,
param_name="brightness"
)
series = get_contrast_series(img(), trainedModel, truth=trueMask(), start=1/8, end=16)
plot_series(series, nrow=3, figsize=(12,8), overlay_truth = True)
results = eval_contrast_series(img(), trueMask(), trainedModel, start=0.25, end=8, step=np.sqrt(2), components=['bg','LV','MY'])
plot_eval_series(results)
gif_series(
get_contrast_series(img(), trainedModel,step=np.sqrt(2)),
"example/kaggle/contrast.gif",
duration=500,
param_name="contrast"
)
series = get_zoom_series(img(), trainedModel, truth=trueMask())
plot_series(series, nrow=2, figsize=(16,8), overlay_truth = True)
results = eval_zoom_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'])
plot_eval_series(results)
gif_series(
get_zoom_series(img(),trainedModel,start=0,end=1,step=0.1),
"example/kaggle/zoom.gif",
duration=500,
param_name="scale"
)
series = get_dihedral_series(img(), trainedModel, truth=trueMask())
plot_series(series, overlay_truth = True)
results = eval_dihedral_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'])
plot_eval_series(results, chart_type="point")
gif_series(
get_dihedral_series(img(), trainedModel),
"example/kaggle/dihedral.gif",
param_name="k",
duration=1000
)
series = get_resize_series(img(), trainedModel, truth=trueMask())
plot_series(series, sharex=True, sharey=True, overlay_truth = True)
results = eval_resize_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'])
plot_eval_series(results)
gif_series(
get_resize_series(img(), trainedModel),
"example/kaggle/resize.gif",
param_name="px",
duration=500
)
The default score for evaluation is the Dice score, calculated separately for each component.
In addition to Dice, misas provides component-wise functions for precision and recall, but you can easily define your own.
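For intuition, component-wise precision and recall could be sketched as follows (illustrative stand-ins that mirror the expected signature; misas ships its own `precision_by_component` and `recall_by_component`, and these are not their actual implementations):

```python
import numpy as np

def precision_sketch(predictedMask, trueMask, component=1):
    """Fraction of pixels predicted as `component` that are correct."""
    pred = np.asarray(predictedMask) == component
    true = np.asarray(trueMask) == component
    predicted_positives = pred.sum()
    if predicted_positives == 0:
        return 1.0  # nothing predicted -> no false positives
    return float(np.sum(pred & true) / predicted_positives)

def recall_sketch(predictedMask, trueMask, component=1):
    """Fraction of true `component` pixels that were found."""
    pred = np.asarray(predictedMask) == component
    true = np.asarray(trueMask) == component
    actual_positives = true.sum()
    if actual_positives == 0:
        return 1.0  # class absent -> no false negatives
    return float(np.sum(pred & true) / actual_positives)

pred = np.array([[1, 1], [0, 0]])
true = np.array([[1, 0], [1, 0]])
precision_sketch(pred, true)  # 1 TP / 2 predicted = 0.5
recall_sketch(pred, true)     # 1 TP / 2 actual = 0.5
```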
results_dice = eval_rotation_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'])
plot_dice = plot_eval_series(results_dice, value_vars=['bg','LV','MY'], value_name="Dice Score")
results_precision = eval_rotation_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'], eval_function=precision_by_component)
plot_precision = plot_eval_series(results_precision, value_vars=['bg','LV','MY'], value_name="Precision")
results_recall = eval_rotation_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'], eval_function=recall_by_component)
plot_recall = plot_eval_series(results_recall, value_vars=['bg','LV','MY'], value_name="Recall")
The objects returned by the plot function are altair charts that can be further customized and combined:
plot_dice = plot_dice.properties(title="Dice")
plot_precision = plot_precision.properties(title="Precision")
plot_recall = plot_recall.properties(title="Recall")
plot_dice & plot_precision & plot_recall
In order to define your own evaluation function, define a function that takes the predicted mask as first parameter, the true mask as second, and the component to evaluate as third.
Masks are of type ImageSegment; you can access the tensor data via the .data property.
Here is an example defining specificity, which can then be passed as the evaluation function.
def specificity_by_component(predictedMask, trueMask, component=1):
    # specificity = true negatives / actual negatives for the given class
    pred = np.array(predictedMask) != component  # predicted negatives
    msk = np.array(trueMask) != component        # actual negatives
    true_negatives = np.sum(pred & msk)
    actual_negatives = np.sum(msk)
    if actual_negatives == 0:
        return 1.0  # no actual negatives -> nothing to get wrong
    return float(true_negatives / actual_negatives)
results_specificity = eval_rotation_series(img(), trueMask(), trainedModel, components=['bg','LV','MY'], eval_function=specificity_by_component)
plot_specificity = plot_eval_series(results_specificity, value_vars=['bg','LV','MY'], value_name="Specificity")
plot_specificity
The specificity for the background class degrades so dramatically for rotations around 180 degrees because the LV and MY classes are no longer detected at all: everything is predicted as background, so there are no "true negatives" for the background class, and consequently its specificity drops to zero.
Confusion matrices are useful to evaluate in more detail which classes the model gets wrong. To conveniently generate confusion matrices for single predictions or whole series, misas provides some convenience functions.
series = get_rotation_series(img(), trainedModel, truth=trueMask())
The get_confusion function returns a two-dimensional numpy array with counts for each class combination.
The true class is along the columns and the predicted class along the rows. The number of classes is derived from the data if not provided via the max_class parameter. This parameter is important if the given pair of prediction and truth does not contain all available classes.
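Conceptually, such a count matrix can be built with numpy alone. This sketch (not the misas implementation) uses `bincount` over flattened (predicted, true) class pairs:

```python
import numpy as np

def confusion_counts(pred, true, max_class=None):
    """Count pixels per (predicted, true) class pair.
    Rows: predicted class, columns: true class."""
    pred = np.asarray(pred).ravel()
    true = np.asarray(true).ravel()
    n = (max_class if max_class is not None
         else int(max(pred.max(), true.max()))) + 1
    # Encode each (predicted, true) pair as a single index, then count
    return np.bincount(pred * n + true, minlength=n * n).reshape(n, n)

pred = np.array([[0, 1], [1, 2]])
true = np.array([[0, 1], [2, 2]])
confusion_counts(pred, true)
# array([[1, 0, 0],
#        [0, 1, 1],
#        [0, 0, 1]])
```

Passing `max_class` explicitly keeps the matrix shape stable across a series even when individual frames miss some classes.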
cm = get_confusion(series[0][2], series[0][3])
cm
This matrix shows that there are 754 pixels classified correctly as "LV" (class=1). However, there are also 17 pixels that are in reality "LV" but predicted as "MY". Accordingly, there are 68 pixels that are "MY" but predicted as "LV".
Looking at tables is much less convenient and informative than looking at graphics, so let's plot this matrix:
_ = plot_confusion(cm, components=["bg","LV","MY"])
This is the confusion matrix for one image. Next we want to look at the confusion matrix for a full series of transformed (in this case rotated) images.
plot_series(series,figsize=(16.5,6))
plot_confusion_series(series, components=['bg','LV','MY'])