This is the official implementation of the paper "Taxonomy-guided routing in capsule network for hierarchical image classification" by Khondaker Tasrif Noor, Wei Luo, Antonio Robles-Kelly, Leo Yu Zhang, and Mohamed Reda Bouadjenek, published in Knowledge-Based Systems.
Hierarchical multi-label classification in computer vision presents significant challenges in maintaining consistency across different levels of class granularity while capturing fine-grained visual details. This paper presents Taxonomy-aware Capsule Network (HT-CapsNet), a novel capsule network architecture that explicitly incorporates taxonomic relationships into its routing mechanism to address these challenges. Our key innovation lies in a taxonomy-aware routing algorithm that dynamically adjusts capsule connections based on known hierarchical relationships, enabling more effective learning of hierarchical features while enforcing taxonomic consistency. Extensive experiments on six benchmark datasets, including Fashion-MNIST, Marine-Tree, CIFAR-10, CIFAR-100, CUB-200-2011, and Stanford Cars, demonstrate that HT-CapsNet significantly outperforms existing methods across various hierarchical classification metrics, with notable absolute improvements on CUB-200-2011.
The network consists of a feature extraction backbone and, for each hierarchical level in the taxonomy, a dedicated capsule layer that predicts the classes at that level. The routing process between capsules at consecutive levels is taxonomy-aware: routing coefficients are adjusted according to the known parent-child relationships, so that predictions remain consistent across levels of the hierarchy.
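The exact routing algorithm is specified in the paper; purely as a rough, assumption-labelled sketch of the idea, the snippet below biases a standard dynamic-routing loop with a parent-to-child taxonomy mask so that hierarchy-consistent capsule connections are favoured. All names, shapes, and the prior term are illustrative, not the paper's formulation:

```python
import tensorflow as tf

def squash(s, axis=-1, eps=1e-7):
    # Standard capsule "squash" non-linearity.
    sq_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / tf.sqrt(sq_norm + eps)

def taxonomy_biased_routing(u_hat, taxonomy, num_iters=3):
    """Dynamic routing whose logits are initialised from a taxonomy prior.

    u_hat:    [batch, n_parent, n_child, dim] votes from parent-level capsules.
    taxonomy: [n_parent, n_child] float matrix, 1.0 where the child class sits
              under the parent class in the known hierarchy, else 0.0.
    """
    # Start routing logits from log(taxonomy) instead of zeros, so connections
    # that respect the hierarchy are favoured from the first iteration.
    prior = tf.math.log(taxonomy + 1e-6)[None, :, :, None]   # [1, P, C, 1]
    b = tf.zeros_like(u_hat[..., :1]) + prior                 # [B, P, C, 1]
    for _ in range(num_iters):
        c = tf.nn.softmax(b, axis=2)             # coupling coefficients per parent
        s = tf.reduce_sum(c * u_hat, axis=1)     # weighted votes -> [B, C, dim]
        v = squash(s)                            # child-level capsule outputs
        # Strengthen logits where a parent's vote agrees with the child output.
        b = b + tf.reduce_sum(u_hat * v[:, None, :, :], axis=-1, keepdims=True)
    return v

# Toy usage: 4 parent classes, 10 child classes, 8-D capsules, batch of 2.
votes = tf.random.normal([2, 4, 10, 8])
tax = tf.cast(tf.random.uniform([4, 10]) > 0.5, tf.float32)
child_capsules = taxonomy_biased_routing(votes, tax)          # [2, 10, 8]
```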
This code is developed and tested with Python 3.8.10 and TensorFlow 2.8.0. You can create a virtual environment and install the required packages using the following command:
conda env create --file conda_env.yml
The datasets used in this project are publicly available. The src\hierarchical_dataset.py
file contains the code to download and preprocess the datasets. The datasets are as follows:
- Fashion-MNIST
- Marine-Tree
- CIFAR-10
- CIFAR-100
- CUB-200-2011
- Stanford Cars
Note: Additional datasets can be added by modifying the src\hierarchical_dataset.py
file. The datasets should be in the same format as the existing datasets.
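The exact expected structure is defined by the loaders in src\hierarchical_dataset.py; purely as a hypothetical sketch, a new two-level dataset could be exposed as image tensors paired with one one-hot label per hierarchy level:

```python
import numpy as np
import tensorflow as tf

# Hypothetical format for a new two-level dataset; mirror the existing loaders
# in src\hierarchical_dataset.py rather than this sketch for real use.
NUM_COARSE, NUM_FINE = 5, 20   # example hierarchy sizes (illustrative only)

def make_hierarchical_dataset(images, coarse_labels, fine_labels, batch_size=32):
    """images: [N, H, W, C] uint8; coarse/fine_labels: [N] integer class ids."""
    ds = tf.data.Dataset.from_tensor_slices((images, coarse_labels, fine_labels))

    def _prep(x, y_coarse, y_fine):
        x = tf.cast(x, tf.float32) / 255.0       # scale pixels to [0, 1]
        return x, (tf.one_hot(y_coarse, NUM_COARSE), tf.one_hot(y_fine, NUM_FINE))

    return (ds.map(_prep, num_parallel_calls=tf.data.AUTOTUNE)
              .shuffle(1024)
              .batch(batch_size)
              .prefetch(tf.data.AUTOTUNE))

# Usage with random placeholder data:
imgs = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
coarse = np.random.randint(0, NUM_COARSE, size=100)
fine = np.random.randint(0, NUM_FINE, size=100)
train_ds = make_hierarchical_dataset(imgs, coarse, fine)
```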
To train the model, run the main.ipynb
notebook. The notebook contains the code to train the model on the specified dataset. You can modify the hyperparameters and other settings in the notebook.
The `args_dict` dictionary contains the hyperparameters and other settings for the training process. You can modify the following parameters in `args_dict`:
- `dataset`: The dataset to be used for training. Options are `Fashion-MNIST`, `Marine-Tree`, `CIFAR-10`, `CIFAR-100`, `CUB-200-2011`, and `Stanford Cars`.
- `batch_size`: The batch size for training.
- `epochs`: The number of epochs for training.
- `Routing_N`: The number of routing iterations.

Check the `args_dict` dictionary in the `main.ipynb` notebook for more hyperparameters and settings.
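For example, a minimal `args_dict` covering just the parameters listed above might look like this (all values are illustrative):

```python
# Illustrative values only; see main.ipynb for the full list of supported keys.
args_dict = {
    "dataset": "CIFAR-100",   # one of the six supported datasets
    "batch_size": 64,
    "epochs": 100,
    "Routing_N": 3,           # number of routing iterations
}
```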
To execute the notebook, you can use Jupyter Notebook or Jupyter Lab. If you don't have Jupyter installed, you can install it using the following command:
pip install jupyter
You can then run the notebook by opening it in Jupyter Notebook or Jupyter Lab and executing the cells one by one.
Alternatively, you can run the notebook from the command line using papermill
. If you don't have papermill
installed, you can install it using the following command:
pip install papermill
Then, you can run the notebook using the following command:
papermill main.ipynb main_output.ipynb -p args_dict '{
"dataset": "CIFAR-10",
"batch_size": 32,
"epochs": 100,
"Routing_N": 3,
...
}'
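Note that papermill passes `-p` values into the notebook's parameters cell as plain strings, so a JSON-encoded `args_dict` typically has to be decoded inside the notebook. A hypothetical guard (the actual handling in `main.ipynb` may differ) could look like:

```python
import json

# Default defined in the notebook's parameters cell (illustrative values).
args_dict = {"dataset": "CIFAR-10", "batch_size": 32, "epochs": 100, "Routing_N": 3}

# papermill injects -p values as plain strings, so a JSON-encoded args_dict
# is decoded back into a dict here. The actual cell in main.ipynb may differ.
if isinstance(args_dict, str):
    args_dict = json.loads(args_dict)
```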
Alternatively, you can use the run-HT-CapsNet.sh script to run the notebook with the specified parameters. The script will create a new notebook containing the output of the training process.
bash run-HT-CapsNet.sh
The training process will save the model checkpoints and logs in the logs
directory. You can monitor the training process using TensorBoard by running the following command:
tensorboard --logdir logs
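The exact contents of the `logs` directory are determined by the notebook; as an assumption-labelled sketch, checkpointing and TensorBoard logging of this kind are typically wired with Keras callbacks along these lines (the actual setup in `main.ipynb` may differ):

```python
import tensorflow as tf

# Illustrative only: how checkpoints and TensorBoard summaries are commonly
# written under logs/; directory layout and callback choices are assumptions.
run_dir = "logs/CIFAR-100"
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(
        filepath=run_dir + "/ckpt-{epoch:03d}.h5",
        save_weights_only=True,
        save_best_only=True,
        monitor="val_loss",
    ),
    tf.keras.callbacks.TensorBoard(log_dir=run_dir, histogram_freq=1),
]
# model.fit(train_ds, validation_data=val_ds, epochs=..., callbacks=callbacks)
```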
To evaluate the model, run the `main.ipynb` notebook after training with the `Test_only` flag set to a boolean value in the `args_dict` dictionary. The notebook contains the code to evaluate the model on the test set, and the evaluation process will save the evaluation metrics in the `logs` directory. You can visualize the evaluation metrics using TensorBoard by running the following command:
tensorboard --logdir logs
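For reference, assuming the `Test_only` flag takes a boolean value, an evaluation-only configuration might look like:

```python
# Illustrative evaluation-only configuration; the exact keys accepted by
# main.ipynb should be checked in the notebook itself.
args_dict = {
    "dataset": "CIFAR-100",
    "batch_size": 64,
    "Routing_N": 3,
    "Test_only": True,   # boolean flag replacing the 'BOOL_FLAG' placeholder above
}
```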
We expose a seed
parameter in args_dict
. To reproduce the 3‑seed CIFAR‑100 check:
papermill main.ipynb logs/CIFAR-100/seed-101.ipynb -p args_dict '{"dataset":"CIFAR-100","epochs":200,"Routing_N":3,"seed":101}'
papermill main.ipynb logs/CIFAR-100/seed-202.ipynb -p args_dict '{"dataset":"CIFAR-100","epochs":200,"Routing_N":3,"seed":202}'
papermill main.ipynb logs/CIFAR-100/seed-303.ipynb -p args_dict '{"dataset":"CIFAR-100","epochs":200,"Routing_N":3,"seed":303}'
python aggregate.py CIFAR-100
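What `aggregate.py` does exactly is defined in the repository; purely as an illustration, a script of this kind usually collects the per-seed metrics written under `logs/<dataset>/` and reports mean and standard deviation. The file names and layout below are assumptions:

```python
import glob, json, sys
import numpy as np

# Hypothetical aggregation sketch (the repository's aggregate.py may differ):
# read per-seed metric files under logs/<dataset>/ and report mean +/- std.
dataset = sys.argv[1] if len(sys.argv) > 1 else "CIFAR-100"
runs = []
for path in glob.glob(f"logs/{dataset}/seed-*/metrics.json"):   # assumed layout
    with open(path) as f:
        runs.append(json.load(f))

if runs:
    for key in runs[0]:
        values = np.array([r[key] for r in runs], dtype=float)
        print(f"{key}: {values.mean():.4f} +/- {values.std():.4f}")
else:
    print(f"No metric files found under logs/{dataset}/")
```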
If you find this code useful, please consider citing our paper:
@article{noorTaxonomyguided2025,
title = {Taxonomy-Guided Routing in Capsule Network for Hierarchical Image Classification},
author = {Noor, Khondaker Tasrif and Luo, Wei and Robles-Kelly, Antonio and Zhang, Leo Yu and Bouadjenek, Mohamed Reda},
date = {2025-11-04},
journaltitle = {Knowledge-Based Systems},
shortjournal = {Knowledge-Based Systems},
volume = {329},
pages = {114444},
issn = {0950-7051},
doi = {10.1016/j.knosys.2025.114444},
langid = {british}
}
or
K. T. Noor, W. Luo, A. Robles-Kelly, L. Y. Zhang, and M. R. Bouadjenek, “Taxonomy-guided routing in capsule network for hierarchical image classification,” Knowledge-Based Systems, vol. 329, p. 114444, Nov. 2025, doi: 10.1016/j.knosys.2025.114444.
This project is licensed under the MIT License. See the LICENSE file for details.