Learning Objectives
These objectives were collectively agreed upon through the GloBIAS-Carpentries workgroup.
The objectives below are phrased from the perspective of the learner:
By the end of the episode, learners will be able to …
- Learners will be able to explain the need for image processing and analysis in the context of biological research
- Learners will be able to explain the concept of a workflow in image processing and analysis and list common building blocks
- Learners will be able to describe the main differences between typical fluorescence bioimages and RGB images, both scientific (like histological stains) and non-scientific (like camera photographs)
- Learners will be able to explain how some aspects of image origin and formation can influence downstream analysis
- Learners will be able to load images into Python as n-dimensional NumPy arrays using BioIO (see the sketch after this list)
- Learners will be able to display pixel values from a NumPy array
- Learners will be able to extract and interpret an image's shape
- Learners will be able to find an image's min, max, and mean pixel value
- Learners will be able to explain the significance of bit depth in images and how it affects image quality and memory footprint
- Learners will be able to explain how bit depth and data type (dtype) determine the range of valid pixel values in an image
- Learners will be able to provide examples of valid values for different NumPy array data types
- Learners will be able to explain the link between bit depth, data type and possible values present in an image
- Learners will be able to present examples of bit depth, data type, shape and values for common bioimages
- Learners will be able to find relevant documentation to load different proprietary file formats
- Learners will be able to identify common proprietary microscopy file formats and understand how tools like BioIO support working with these formats
- Learners will be able to extract relevant information from image metadata (channel names, stage positions, time frames…)
- Learners will be able to extract physical units from image metadata (ZYX and time) with BioIO
- Learners will be able to manually estimate the size of an object in a bioimage in pixels and in physical units
- Learners will be able to create and visualize a histogram with Matplotlib
- Learners will be able to identify common problems with image quality by looking at histograms (saturation, clipping, dynamic range)
- Learners will be able to explain the relationship between pixel intensity values, color maps (LUTs) and physical fluorescence channels
- Learners will be able to describe how adjusting image display settings, including brightness, contrast, color maps, and windowing, affects the data
- Learners will be able to distinguish between linear and non-linear adjustments such as gamma correction
- Learners will be able to explain the importance of slicing, subsampling, and projections in image analysis
- Learners will be able to use image coordinates to access individual values and slices of NumPy arrays
- Learners will be able to select a subset of time frames of NumPy arrays using slicing
- Learners will be able to generate simple projections of NumPy arrays (max, time)
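A minimal sketch of the loading and inspection objectives above, assuming BioIO's `BioImage` API, a hypothetical file `example.ome.tiff`, and an installed reader plugin for that format:

```python
from bioio import BioImage
import matplotlib.pyplot as plt

# Load a hypothetical multi-channel time-lapse; BioIO selects a reader plugin
img = BioImage("example.ome.tiff")
data = img.data                                 # NumPy array, ordered TCZYX

print(data.shape, data.dtype)                   # dimensions plus dtype/bit depth
print(data.min(), data.max(), data.mean())      # basic pixel statistics
print(img.channel_names)                        # channel metadata, if present
print(img.physical_pixel_sizes)                 # (Z, Y, X) spacing in physical units

# Histogram of one 2D plane: first time point and channel, middle z-slice
plane = data[0, 0, data.shape[2] // 2]
plt.hist(plane.ravel(), bins=256)
plt.xlabel("pixel intensity")
plt.ylabel("count")
plt.show()

# Slicing, subsampling, and projections
every_other_frame = data[::2]                   # subsample the time axis
max_projection = data[0, 0].max(axis=0)         # 2D maximum projection over z
```

A histogram of a raw plane like this makes saturation, clipping, or a narrow dynamic range immediately visible.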
Visualization with Matplotlib
- Learners will be able to visualize a single 2D NumPy array with Matplotlib
- Learners will be able to display images using different Matplotlib colormaps (i.e. LUTs) and contrast settings (see the sketch after this list)
- Learners will be able to display several images next to each other with Matplotlib subplots
- Learners will be able to add a colorbar to a displayed image with Matplotlib
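A minimal sketch of these Matplotlib objectives, using synthetic arrays as stand-ins for real channels:

```python
import matplotlib.pyplot as plt
import numpy as np

# Synthetic 2D "channels" standing in for a loaded fluorescence image
rng = np.random.default_rng(0)
ch0 = rng.poisson(50, size=(128, 128)).astype(np.uint16)
ch1 = rng.poisson(20, size=(128, 128)).astype(np.uint16)

# Two panels side by side, each with its own colormap (LUT)
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(ch0, cmap="gray", vmin=0, vmax=100)   # contrast set via vmin/vmax
axes[0].set_title("channel 0")
im1 = axes[1].imshow(ch1, cmap="magma")
axes[1].set_title("channel 1")
fig.colorbar(im1, ax=axes[1])                        # colorbar for the second panel
plt.show()
```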
Multi-dimensional Image Visualization with Napari
- Learners will be able to open a Napari window and add one or more (multi-dimensional) images to it
- Learners will be able to apply histogram normalization and equalization to facilitate downstream image segmentation
- Learners will be able to choose when normalization and equalization are appropriate (batch normalization of intensities)
- Learners will be able to explain the concept of a filter and its effects on the image data (theoretical explanation of kernels)
- Learners will be able to define the concepts of noise and background in images
- Learners will be able to apply techniques like Gaussian, mean or median filtering to remove noise from an image
- Learners will be able to use edge filters (or DoG/LoG filters) to highlight object edges (or spots) in an image
- Learners will be able to choose among semantic segmentation, instance segmentation, and detection according to the goals of an analysis
- Learners will be able to compare manual and algorithmic thresholding techniques and their implications for reproducibility
- Learners will be able to group connected pixels in images to identify objects of interest
- Learners will be able to apply labeling techniques using connected components
- Learners will be able to visualize labeled objects as an overlay on the raw data
- Learners will be able to apply post-processing operations such as watershed, hole filling, opening/closing, and removal of objects touching image borders to improve object segmentation for later measurements
- Learners will be able to use `regionprops` from scikit-image to extract object-level measurements such as area, count, centroid, and circularity
- Learners will be able to measure the mean and standard deviation of an object's intensity
- Learners will be able to export processed data for further analysis as `csv`/`txt` files
- Learners will be able to export result images as TIF files using BioIO
- Learners will be able to explain the concept of validation in image processing
- Learners will be able to critically assess the segmentation results using appropriate validation techniques
- Learners will be able to combine multiple processing steps into a reproducible workflow (see the sketch after this list)
- Learners will be able to apply an existing workflow to several images in a batch analysis
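A sketch tying together several of the segmentation, measurement, export, and batch objectives; it uses scikit-image and pandas, a built-in sample image as a stand-in for real data, hypothetical output file names, and assumes BioIO exposes the `OmeTiffWriter` known from aicsimageio:

```python
import numpy as np
import pandas as pd
from skimage import data, filters, measure, morphology, segmentation

def segment_and_measure(plane):
    """Segment bright objects in a 2D plane and measure them."""
    smoothed = filters.gaussian(plane, sigma=2)                 # suppress noise
    mask = smoothed > filters.threshold_otsu(smoothed)          # algorithmic threshold
    mask = morphology.remove_small_objects(mask, min_size=64)   # post-processing
    mask = segmentation.clear_border(mask)                      # drop border-touching objects
    labels = measure.label(mask)                                # connected components
    table = measure.regionprops_table(
        labels, intensity_image=plane,
        properties=("label", "area", "perimeter", "centroid", "intensity_mean"),
    )
    df = pd.DataFrame(table)
    # Circularity is not a built-in region property; derive it from area and perimeter
    df["circularity"] = 4 * np.pi * df["area"] / df["perimeter"] ** 2
    return labels, df

# Batch analysis over a list of planes (here a single sample image)
planes = [data.coins()]
results = []
for i, plane in enumerate(planes):
    labels, df = segment_and_measure(plane)
    results.append(df.assign(frame=i))
pd.concat(results).to_csv("measurements.csv", index=False)      # export measurements

# Save the last label image as OME-TIFF (assuming BioIO's OmeTiffWriter)
from bioio.writers import OmeTiffWriter
OmeTiffWriter.save(labels.astype(np.uint16), "labels.ome.tif", dim_order="YX")
```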