Plotting images with transparent thresholding
Standard thresholding means that any data below a threshold value is completely hidden from view. However, “transparent thresholding” allows for the same suprathreshold results to be observed, while also showing subthreshold information with an opacity that fades with decreasing magnitude (Allen et al.[1], Chen et al.[2], Taylor et al.[3], Sundermann et al.[4]).
This makes use of the "alpha" value that overlays can have when using some plotting functions (like matplotlib.pyplot.imshow). This "alpha" value goes from 0 (perfectly transparent) to 1 (perfectly opaque).
Consider an underlay color U and an overlay color O, where a threshold value T is applied, and consider how an element (voxel, node…) with value M is shown.
"Opaque" thresholding
If |M| >= T, then alpha = 1, meaning the overlay is shown as: O.
Else (|M| < T), alpha = 0, meaning the underlay is shown as: U.
"Transparent" thresholding
The steepness of fading can be linear or quadratic. Linear is shown below.
If |M| >= T, then alpha = 1, meaning the overlay is shown as: O.
Otherwise alpha = |M| / T, merging U and O as: alpha * O + (1 - alpha) * U.
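The two rules above can be sketched in a few lines of numpy, treating the underlay U and overlay O as scalar color intensities for illustration (the function names here are illustrative, not nilearn API):

```python
import numpy as np


def opaque_alpha(M, T):
    # "Opaque" thresholding: alpha is 1 at or above |M| = T, 0 below it.
    return (np.abs(M) >= T).astype(float)


def transparent_alpha(M, T):
    # "Transparent" (linear) thresholding: alpha fades as |M| / T below T.
    return np.clip(np.abs(M) / T, 0.0, 1.0)


def blend(M, T, overlay, underlay):
    # Merge the two colors as alpha * O + (1 - alpha) * U.
    a = transparent_alpha(M, T)
    return a * overlay + (1 - a) * underlay
```

With T = 3, an element with M = 1.5 gets alpha = 0.5 under the linear rule, so its color is an even mix of overlay and underlay, whereas opaque thresholding would hide it entirely.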
In the end, this is just a small tweak for the case of subthreshold data: there, alpha is nonzero rather than simply 0.
Additionally, a contour can be placed around the suprathreshold regions, to further highlight them.
The differences between standard, opaque thresholding and the suggested transparent thresholding are shown in the rest of this example.
Benefits
Transparent thresholding can make results reporting more informative and interpretations more accurate, and can facilitate quality control and reproducibility checks.
import matplotlib.pyplot as plt
from nilearn import datasets
from nilearn.plotting import plot_stat_map, show
Load the sample motor activation image we will use to demonstrate.
image = datasets.load_sample_motor_activation_image()
Let’s use some slightly different plotting parameters that should work better with transparency plotting. For example, let’s pick a diverging colormap that diverges from black and not from white as the default colormap does.
vmin = 0.5
threshold = 3
figure_width = 8
plotting_config = {
"display_mode": "ortho",
"cut_coords": [5, -26, 21],
"draw_cross": False,
"vmax": 8,
"cmap": "cold_hot",
}
Comparing transparent and opaque thresholding
Here we use the motor activation image itself to give us the values to use for transparency.
We can set transparency_range to [0.5, 3] to define the range of values over which transparency is 'enabled': values below 0.5 will be fully transparent, while values above 3 will be fully opaque.
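Conceptually, such a mapping rescales absolute values linearly between the two bounds and clips the result to [0, 1]. A hypothetical sketch (the helper name is made up, and nilearn's exact interpolation may differ):

```python
import numpy as np


def range_alpha(values, vmin=0.5, vmax=3.0):
    # Hypothetical helper: |values| <= vmin maps to alpha 0 (fully
    # transparent), |values| >= vmax to alpha 1 (fully opaque), with a
    # linear ramp in between.
    v = np.abs(values)
    return np.clip((v - vmin) / (vmax - vmin), 0.0, 1.0)
```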
fig, axes = plt.subplots(
4,
1,
figsize=(figure_width, 17),
)
plot_stat_map(
image,
title="image without threshold",
axes=axes[0],
**plotting_config,
)
plot_stat_map(
image,
title="opaque thresholding",
threshold=threshold,
axes=axes[1],
**plotting_config,
)
plot_stat_map(
image,
title="transparent thresholding",
transparency=image,
axes=axes[2],
**plotting_config,
)
plot_stat_map(
image,
title="transparent thresholding with range",
transparency=image,
transparency_range=[vmin, threshold],
axes=axes[3],
**plotting_config,
)
show()

/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
Transparent thresholding and contours
If you want to visualize the limit where the transparency starts, you can add contours at the right threshold using the add_contours method.
fig, axes = plt.subplots(figsize=(figure_width, 4))
display = plot_stat_map(
image,
title="transparent thresholding with contour",
transparency=image,
transparency_range=[vmin, threshold],
axes=axes,
**plotting_config,
)
display.add_contours(
image, filled=False, levels=[-threshold, threshold], colors=["k", "k"]
)
show()

/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
Transparent masking of part of the data
You may want to use transparent masking to highlight specific parts of the brain while leaving other parts partly visible.
For example, you could highlight the gray matter and leave values in the rest of the brain partly transparent.
Let's fetch a beta image of an auditory localizer task from a single subject.
from nilearn.datasets import fetch_localizer_contrasts
auditory_image = fetch_localizer_contrasts(
contrasts=["left auditory click"], verbose=0, n_subjects=1
)
auditory_image = auditory_image.cmaps[0]
Now let's create our transparency image to leave gray matter opaque and make the white matter partly transparent.
import numpy as np
from nibabel import Nifti1Image
from nilearn.datasets import load_mni152_gm_mask, load_mni152_wm_mask
white_matter_image = load_mni152_wm_mask(threshold=0.35)
white_matter_mask = white_matter_image.get_fdata() > 0
grey_matter_image = load_mni152_gm_mask(threshold=0.6)
grey_matter_mask = grey_matter_image.get_fdata() > 0
transparency_data = np.zeros(grey_matter_image.shape)
transparency_data[white_matter_mask] = 0.6
transparency_data[grey_matter_mask] = 1
transparency_image = Nifti1Image(transparency_data, grey_matter_image.affine)
Create the plot.
fig, axes = plt.subplots(
2,
1,
figsize=(figure_width, 8),
)
plotting_config = {
"display_mode": "ortho",
"cut_coords": [5, -26, 21],
"draw_cross": False,
"cmap": "cold_hot",
}
display = plot_stat_map(
auditory_image,
title="auditory localizer - no thresholding",
axes=axes[0],
**plotting_config,
)
display = plot_stat_map(
auditory_image,
title="auditory localizer - highlight gray matter",
transparency=transparency_image,
axes=axes[1],
**plotting_config,
)
show()

/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:1547: UserWarning: Non-finite values detected. These values will be replaced with zeros.
safe_get_data(stat_map_img, ensure_finite=True),
/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
Note that the transparency image was automatically resampled to the underlying data.
Transparent thresholding with other functions
Several plotting functions support transparency, including plot_glass_brain, plot_stat_map and plot_img. Below is an example with plot_glass_brain.
from nilearn.plotting import plot_glass_brain
plotting_config = {
"colorbar": True,
"cmap": "inferno",
}
fig, axes = plt.subplots(
4,
1,
figsize=(figure_width, 17),
)
plot_glass_brain(
image,
title="image without threshold",
axes=axes[0],
**plotting_config,
)
plot_glass_brain(
image,
title="opaque thresholding",
threshold=threshold,
axes=axes[1],
**plotting_config,
)
plot_glass_brain(
image,
title="transparent thresholding",
transparency=image,
axes=axes[2],
**plotting_config,
)
plot_glass_brain(
image,
title="transparent thresholding with range",
transparency=image,
transparency_range=[vmin, threshold],
axes=axes[3],
**plotting_config,
)
show()

/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
Transparent thresholding on GLM results
You can also use a different image as the 'transparency' layer.
For example, on the output of a GLM, you can visualize the contrast values and use their z-scores as transparency.
We will show this on a simple block paradigm GLM.
See also
For more information see the dataset description.
In the following section we:
- download the data,
- fit the GLM with some smoothing of the data,
- compute the contrast for the only condition present in this dataset,
- compute the mean image of the functional data, to use as the underlay for our plots.
from nilearn.datasets import fetch_spm_auditory
from nilearn.glm import threshold_stats_img
from nilearn.glm.first_level import FirstLevelModel
from nilearn.image import mean_img
from nilearn.plotting import plot_stat_map, show
subject_data = fetch_spm_auditory(verbose=0)
fmri_glm = FirstLevelModel(
t_r=7,
smoothing_fwhm=4,
noise_model="ar1",
standardize=False,
hrf_model="spm",
drift_model="cosine",
high_pass=0.01,
)
fmri_glm = fmri_glm.fit(subject_data.func, subject_data.events)
results = fmri_glm.compute_contrast("listening", output_type="all")
mean_img = mean_img(subject_data.func[0], copy_header=True)
[fetch_spm_auditory] Data absent, downloading...
[fetch_single_file] Downloading data from
https://www.fil.ion.ucl.ac.uk/spm/download/data/MoAEpilot/MoAEpilot.bids.zip ...
[_chunk_report_] Downloaded 4448256 of 30176409 bytes (14.7%, 5.9s remaining)
[_chunk_report_] Downloaded 10240000 of 30176409 bytes (33.9%, 3.9s remaining)
[_chunk_report_] Downloaded 14860288 of 30176409 bytes (49.2%, 3.1s remaining)
[_chunk_report_] Downloaded 19693568 of 30176409 bytes (65.3%, 2.2s remaining)
[_chunk_report_] Downloaded 24657920 of 30176409 bytes (81.7%, 1.1s remaining)
[_chunk_report_] Downloaded 29843456 of 30176409 bytes (98.9%, 0.1s remaining)
[fetch_single_file] ...done. (7 seconds, 0 min)
[uncompress_file] Extracting data from
/home/runner/nilearn_data/spm_auditory/MoAEpilot.bids.zip...
[uncompress_file] .. done.
Let's set some common configuration for our plots.
We will look at activations only, so we set vmin to 0 and use a sequential colormap (inferno).
plotting_config = {
"bg_img": mean_img,
"display_mode": "z",
"cut_coords": [9, 42, 75],
"black_bg": True,
"vmin": 0,
"cmap": "inferno",
}
Here we will:
- have a look at the statistical values for our contrast,
- have a look at their z-scores with an opaque threshold,
- use the z-scores as transparency values,
- finally, threshold the z-scores to identify the significant clusters (FDR = 0.05, 500 voxels) and plot those as contours.
fig, axes = plt.subplots(
4,
1,
figsize=(figure_width, 18),
)
plot_stat_map(
results["stat"],
title="contrast value",
axes=axes[0],
**plotting_config,
)
plot_stat_map(
results["z_score"],
title="z-score, opaque threshold",
threshold=3,
axes=axes[1],
**plotting_config,
)
plot_stat_map(
results["stat"],
title="contrast value, z-score as transparency",
axes=axes[2],
transparency=results["z_score"],
**plotting_config,
)
display = plot_stat_map(
results["stat"],
title="contrast value, z-score as transparency, contoured clusters",
axes=axes[3],
transparency=results["z_score"],
**plotting_config,
)
clean_map, threshold = threshold_stats_img(
results["z_score"],
alpha=0.05,
height_control="fdr",
cluster_threshold=500,
two_sided=False,
)
display.add_contours(clean_map, filled=False, levels=[threshold], colors=["w"])
show()

/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/img_plotting.py:337: UserWarning: resampling transparency image to data image...
display.add_overlay(
/home/runner/work/nilearn/nilearn/.tox/doc/lib/python3.9/site-packages/nilearn/plotting/displays/_axes.py:91: UserWarning: No contour levels were found within the data range.
im = getattr(ax, type)(
References
Total running time of the script: (0 minutes 40.374 seconds)
Estimated memory usage: 1221 MB