xet-core enables huggingface_hub to utilize xet storage for uploading and downloading to HF Hub. Xet storage provides chunk-based deduplication, efficient storage/retrieval with local disk caching, and backwards compatibility with Git LFS. This library is not meant to be used directly, and is instead intended to be used from huggingface_hub.
♻ chunk-based deduplication implementation: avoids transferring and storing chunks that are shared across binary files (models, datasets, etc.).
🤗 Python bindings: bindings for huggingface_hub package.
↔ network communications: concurrent communication to HF Hub Xet backend services (CAS).
🔖 local disk caching: chunk-based cache that sits alongside the existing huggingface_hub disk cache.
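The chunk-level deduplication idea can be sketched in a few lines of Python. This is an illustrative toy model, not xet-core's Rust implementation: real Xet chunking is content-defined and hashed with MerkleHash, while the fixed 4-byte chunks and SHA-256 below are arbitrary choices for demonstration.

```python
import hashlib

class ChunkStore:
    """Toy content-addressed store: identical chunks are stored only once."""
    def __init__(self):
        self.chunks = {}  # chunk hash -> chunk bytes

    def put_file(self, data: bytes, chunk_size: int = 4) -> list[str]:
        """Split data into fixed-size chunks; store new chunks, return hashes."""
        hashes = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # dedup: skip chunks already stored
            hashes.append(h)
        return hashes

    def get_file(self, hashes: list[str]) -> bytes:
        """Rehydrate a file from its list of chunk hashes."""
        return b"".join(self.chunks[h] for h in hashes)

store = ChunkStore()
file_a = store.put_file(b"AAAABBBBCCCC")
file_b = store.put_file(b"AAAABBBBDDDD")  # shares two chunks with file_a
# 6 logical chunks across both files, but only 4 unique chunks stored.
print(len(store.chunks))  # 4
```

Two files that share a prefix cost only one copy of the shared chunks, which is why re-uploading a slightly modified model transfers far less data.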
Please join us in making xet-core better. We value everyone's contributions. Code is not the only way to help: answering questions, helping each other, improving documentation, and filing issues all help immensely. If you are interested in contributing (please do!), check out the contribution guide for this repository.
If you encounter an issue when using hf-xet, please help us fix it by collecting diagnostic information and attaching it when creating a new issue. Download the hf-xet-diag-linux.sh or hf-xet-diag-windows.sh script based on your operating system, then re-run the Python command that resulted in the issue. The diagnostic scripts download and install debug symbols, set up logging, and take periodic stack traces throughout process execution, writing everything into a diagnostics directory that is easy to analyze, package, and upload.
- Uses `gdb` + `gcore` to periodically snapshot stacks and produce core dumps.
- Supports an optional ptrace preload helper for debugging.
- Downloads and installs the appropriate `hf_xet-*.dbg` symbol file automatically.
Requirements:
sudo apt-get install gdb build-essential
Example usage:
./hf-xet-diag-linux.sh -- python hf-download.py "Qwen/Qwen2.5-VL-3B-Instruct"
- Runs in Git-Bash, keeping usage consistent with Linux.
- Uses Sysinternals ProcDump for periodic mini dumps (`-mp`).
- Auto-downloads `procdump.exe` if not found.
- Downloads and installs the matching `hf_xet.pdb` debug symbol into the package directory.
Requirements:
- Git-Bash (from Git for Windows)
- Python installed
- Internet access (first run downloads ProcDump and debug symbols)
Example usage:
./hf-xet-diag-windows.sh -- python hf-download.py "Qwen/Qwen2.5-VL-3B-Instruct"
Both scripts produce a diagnostics directory named:
diag_<command>_<timestamp>/
├── console.log # Combined stdout/stderr of the process
├── env.log # System/environment info
├── pid # Child PID file
├── stacks/ # Periodic stack traces / dumps
└── dumps/ # (Linux only) full gcore dumps
This unified layout makes it easier to compare diagnostics across platforms.
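Since the whole diagnostics directory is what gets attached to an issue, packaging it for upload is a one-liner with the Python stdlib. The helper below is hypothetical (it is not part of the diag scripts):

```python
import shutil
from pathlib import Path

def package_diagnostics(diag_dir: str) -> str:
    """Zip a diag_<command>_<timestamp>/ directory and return the archive path."""
    diag = Path(diag_dir)
    # make_archive appends .zip; the archive lands next to the directory
    return shutil.make_archive(str(diag), "zip", root_dir=diag)

# Example (hypothetical directory name):
# package_diagnostics("diag_python_20250101_120000")
```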
From your repo root:
./analyze-latest.sh
- Finds the most recent `diag_*` directory.
- Opens the latest dump:
  - Linux: opens `dumps/core_*` in `gdb`.
  - Windows (Git-Bash): opens `stacks/*.dmp` in WinDbg (`windbg` must be on PATH).
- You can also pass a base directory if your diagnostics are stored elsewhere:
./analyze-latest.sh /path/to/diagnostics
Linux
- Stack traces are saved under `stacks/` as plain text.
- Core dumps (`dumps/core_*`) can be analyzed with gdb:

      gdb python dumps/core_<pid>
      (gdb) bt                    # backtrace
      (gdb) thread apply all bt   # backtraces for every thread

- Ensure the matching debug symbols (`hf_xet-*.dbg`) are in the `hf_xet` package directory.
Windows
- Dumps are saved under `stacks/` as `.dmp` files.
- Open `.dmp` files in WinDbg (install via the Windows SDK):

      windbg -z dump_20250101_120000.dmp

- Common WinDbg commands:

      !analyze -v   # Automatic analysis
      ~* kb         # Show stack traces for all threads
      lm            # List loaded modules (verify hf_xet.pdb loaded)

- Ensure `hf_xet.pdb` is installed in the `hf_xet` package directory so symbols load correctly.
Attach the entire `diag_<command>_<timestamp>/` directory when reporting issues; it contains the logs, environment info, and dumps needed to reproduce and diagnose problems.
To limit the size of our built binaries, we release Python wheels with binaries that are stripped of debugging symbols. If you encounter a panic while running hf-xet, you can use the debug symbols to help identify the part of the library that failed.
Here are the recommended steps:
1. Download and unzip our debug symbols package.
2. Determine the location of the hf-xet package using `pip show hf-xet`. The `Location` field shows the directory containing all the site packages; the `hf_xet` package will be within that directory.
3. Determine the symbols to copy based on the system you are running:
   - Windows: use `hf_xet.pdb`.
   - Mac: use `libhf_xet-macosx-x86_64.dylib.dSYM` for Intel-based Macs and `libhf_xet-macosx-aarch64.dylib.dSYM` for Apple Silicon.
   - Linux: the choice depends on the architecture and wheel distribution used. To get this information, `cat` the `WHEEL` file within the `hf_xet.dist-info` directory in your site packages; it records the Linux build and architecture. E.g. `cat /home/ubuntu/.venv/lib/python3.12/site-packages/hf_xet-*.dist-info/WHEEL`. Use the file named `hf_xet-<manylinux | musllinux>-<x86_64 | arm64>.abi3.so.dbg`, choosing the distribution and platform that match your wheel, e.g. `hf_xet-manylinux-x86_64.abi3.so.dbg`.
4. Copy the symbols to the site package path from step 2 above plus `hf_xet`. E.g. `cp -r hf_xet-1.1.2-manylinux-x86_64.abi3.so.dbg /home/ubuntu/.venv/lib/python3.12/site-packages/hf_xet`
5. Run your Python binary with `RUST_BACKTRACE=full` and recreate your failure.
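The `pip show hf-xet` lookup in step 2 can also be done programmatically. The helper below is hypothetical, not part of hf-xet; it returns the same directory you would derive from the `Location` field plus the package name, i.e. where the symbol file should be copied:

```python
import importlib.util
from pathlib import Path

def package_dir(name: str) -> Path:
    """Return the on-disk directory of an installed package
    (equivalent to `Location` from `pip show` plus the package name)."""
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(name)
    return Path(spec.origin).parent

# e.g. copy the symbol file next to the extension module (hypothetical file name):
# shutil.copy("hf_xet-manylinux-x86_64.abi3.so.dbg", package_dir("hf_xet"))
```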
To enable logging and see more debugging / diagnostics information, set the following:
RUST_BACKTRACE=full
RUST_LOG=info
HF_XET_LOG_FILE=/tmp/xet.log
Note: HF_XET_LOG_FILE expects a full writable path. If one isn't provided, logging falls back to the stdout console.
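When driving things from Python rather than a shell, the same variables can be set in-process. Logging configuration is typically read when the library initializes, so it is safest to set them before any import that loads hf_xet (a sketch; the commented-out download call assumes huggingface_hub is installed):

```python
import os

# Set these before any import that loads hf_xet, since logging
# configuration is typically read at startup.
os.environ["RUST_BACKTRACE"] = "full"
os.environ["RUST_LOG"] = "info"
os.environ["HF_XET_LOG_FILE"] = "/tmp/xet.log"  # full writable path, else stdout

# from huggingface_hub import hf_hub_download
# hf_hub_download("Qwen/Qwen2.5-VL-3B-Instruct", "config.json")
```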
- cas_client: communication with CAS backend services, which include APIs for Xorbs and Shards.
- cas_object: CAS object (Xorb) format and associated APIs, including chunks (ranges within Xorbs).
- cas_types: common types shared across crates in xet-core and xetcas.
- chunk_cache: local disk cache of Xorb chunks.
- chunk_cache_bench: benchmarking crate for chunk_cache.
- data: main driver for client operations; FilePointerTranslator drives hydrating or shrinking files, with chunking and deduplication happening here.
- error_printer: utility for printing errors conveniently.
- file_utils: SafeFileCreator utility, used by chunk_cache.
- hf_xet: Python integration with Rust code; uses maturin to build the `hf-xet` Python package. Main integration with the HF Hub Python package.
- mdb_shard: Shard operations, including Shard format, dedupe probing, benchmarks, and utilities.
- merklehash: MerkleHash type, 256-bit hash, widely used across many crates.
- progress_reporting: offers ReportedWriter so progress for Writer operations can be displayed.
- utils: general utilities, including singleflight, progress, serialization_utils and threadpool.
To build xet-core, see the GitHub Actions CI workflow for the Rust toolchain version to install. Follow the Rust documentation for installing rustup and that version of the toolchain. Use the following steps for building, testing, and benchmarking.
Many of us on the team use VSCode, so we have checked in some settings in the .vscode directory. Install the rust-analyzer extension.
Build:
cargo build
Test:
cargo test
Benchmark:
cargo bench
Linting:
cargo clippy -r --verbose -- -D warnings
Formatting (requires nightly toolchain):
cargo +nightly fmt --manifest-path ./Cargo.toml --all
- Create Python3 virtualenv:
python3 -m venv ~/venv
- Activate virtualenv:
source ~/venv/bin/activate
- Install maturin:
pip3 install maturin ipython
- Go to hf_xet crate:
cd hf_xet
- Build:
maturin develop
- Test: in `ipython`:

      import hf_xet as hfxet
      hfxet.upload_files()
      hfxet.download_files()
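After `maturin develop`, a quick sanity check that the extension module is importable from the active virtualenv (a trivial hypothetical helper, separate from the upload/download calls above):

```python
import importlib.util

def hf_xet_available() -> bool:
    """True if the hf_xet extension module can be found on sys.path."""
    return importlib.util.find_spec("hf_xet") is not None

print("hf_xet importable:", hf_xet_available())
```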
Prerequisite: install tokio-console (`cargo install tokio-console`). See https://github.com/tokio-rs/console
To use tokio-console with hf-xet, compile hf_xet with the following command:
RUSTFLAGS="--cfg tokio_unstable" maturin develop -r --features tokio-console
Then, while hf_xet is running (via an `hf` CLI command or `huggingface_hub` Python code), `tokio-console` will be able to connect.
# In one terminal:
pip install huggingface_hub
RUSTFLAGS="--cfg tokio_unstable" maturin develop -r --features tokio-console
hf download openai/gpt-oss-20b
# In another terminal
cargo install tokio-console
tokio-console
From hf_xet directory:
MACOSX_DEPLOYMENT_TARGET=10.9 maturin build --release --target universal2-apple-darwin --features openssl_vendored
Note: You may need to add the x86_64 target: `rustup target add x86_64-apple-darwin`
Unit tests are run with `cargo test`, and benchmarks are run with `cargo bench`. Some crates have a main.rs that can be run for manual testing.
- Technical blog posts
- "Git is for Data" (CIDR paper)
- History: xet-core is adapted from the original xet-core project, which contained deep Git integration along with a very different backend services implementation.