Unifews

This is the original code for Unifews: Unified Entry-Wise Sparsification for Efficient Graph Neural Network

Dependencies

Python

Install the environment from env.txt with conda:

conda create --name <env> --file env.txt

C++

Experiment

Data Preparation

  1. Use utils/data_transfer.py to generate the processed files under data/[dataset_name], following the example folder data/cora:
  • adj.npz: adjacency matrix in scipy.sparse.csr_matrix
  • feats.npy: features in .npy array
  • labels.npz: node label information
    • 'label': labels (number or one-hot)
    • 'idx_train/idx_val/idx_test': indices of training/validation/test nodes
  • adj_el.bin, adj_pl.bin, attribute.txt, degree.npz: graph files for precomputation
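The first three files above can be written and read with standard NumPy/SciPy calls. A minimal sketch for a toy 3-node graph, using the file names and keys listed above (the toy values themselves are illustrative only):

```python
# Sketch: producing and loading the processed data files described above.
# File names and npz keys follow the README; the toy graph is illustrative.
import numpy as np
import scipy.sparse as sp

# adj.npz: adjacency matrix as a scipy.sparse.csr_matrix
adj = sp.csr_matrix(np.array([[0, 1, 0],
                              [1, 0, 1],
                              [0, 1, 0]], dtype=np.float32))
sp.save_npz("adj.npz", adj)

# feats.npy: node features as a plain .npy array
feats = np.random.rand(3, 4).astype(np.float32)
np.save("feats.npy", feats)

# labels.npz: labels plus train/validation/test index splits
np.savez("labels.npz",
         label=np.array([0, 1, 0]),
         idx_train=np.array([0]),
         idx_val=np.array([1]),
         idx_test=np.array([2]))

# Loading follows the same layout:
adj = sp.load_npz("adj.npz")
feats = np.load("feats.npy")
labels = np.load("labels.npz")
print(adj.shape, feats.shape, labels["label"].shape)
```

The binary graph files (adj_el.bin, adj_pl.bin, attribute.txt, degree.npz) are produced for the C++ precomputation step and follow their own layout.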

Decoupled Model Propagation

  1. Compile the Cython extension:
cd precompute
python setup.py build_ext --inplace

Model Training

  1. Run the full-batch experiment:
python run_fb.py -f [seed] -c [config_file] -v [device]
  2. Run the mini-batch experiment:
python run_mb.py -f [seed] -c [config_file] -v [device]
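The two commands above share the same flags. A sketch of the corresponding argument parser, where only the short flags (-f, -c, -v) come from the commands; the long option names, defaults, and the sample config path are assumptions for illustration:

```python
# Sketch of the run_fb.py / run_mb.py command-line interface.
# Only -f, -c, -v appear in the README; long names and defaults are assumed.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Unifews experiment runner (sketch)")
    parser.add_argument("-f", "--seed", type=int, default=0,
                        help="random seed for the run")
    parser.add_argument("-c", "--config", type=str, required=True,
                        help="path to the experiment config file")
    parser.add_argument("-v", "--device", type=int, default=0,
                        help="device index (e.g. GPU id; -1 for CPU)")
    return parser

# Example: parsing a full-batch command line (hypothetical config path)
args = build_parser().parse_args(["-f", "42", "-c", "config/cora.json", "-v", "0"])
print(args.seed, args.config, args.device)
```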

Reference & Links

Datasets

Baselines

  • GLT: A Unified Lottery Ticket Hypothesis for Graph Neural Networks
  • GEBT: Early-Bird GCNs: Graph-Network Co-optimization towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets
  • CGP: Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks
  • DSpar: DSpar: An Embarrassingly Simple Strategy for Efficient GNN Training and Inference via Degree-Based Sparsification
  • NDLS: Node Dependent Local Smoothing for Scalable Graph Learning
  • NIGCN: Node-wise Diffusion for Scalable Graph Learning
