English | 中文
A thread-safe, high-performance LRU cache built on top of DashMap. DashLru provides multiple eviction policies, TTL support, weight-based eviction, and comprehensive statistics tracking.
- Thread-Safe: Built on DashMap for high-performance concurrent access
- Multiple Eviction Policies:
  - Standard LRU (Least Recently Used)
  - LFU (Least Frequently Used)
  - Weight-based LRU
  - Hybrid strategy combining time, frequency, and weight
- TTL Support: Individual and default time-to-live for entries with automatic cleanup
- Weight-Based Eviction: Entries can have custom weights for memory-aware caching
- Comprehensive Statistics: Hit rates, miss rates, eviction counts, and more
- Configurable Limits: Size limits, weight limits, and cleanup intervals
- Zero-Copy Operations: Clones values only when necessary
Add this to your Cargo.toml:
[dependencies]
dash_lru = "0.1"use dash_lru::DashLru;
// Create a simple LRU cache with max 1000 entries
let cache = DashLru::lru(1000);
// Insert some values
cache.insert("key1", "value1", None, None);
cache.insert("key2", "value2", None, None);
// Retrieve values
assert_eq!(cache.get(&"key1"), Some("value1"));
assert_eq!(cache.get(&"missing"), None);
// Check cache statistics
let stats = cache.stats().unwrap();
println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);use dash_lru::DashLru;
use std::time::Duration;
// Create cache with 5-minute default TTL
let cache = DashLru::with_ttl(1000, Duration::from_secs(300));
// Insert with default TTL
cache.insert("temp_key", "temp_value", None, None);
// Insert with custom TTL (30 seconds)
cache.insert("short_key", "short_value", None, Some(Duration::from_secs(30)));
// Entries will automatically expire and be cleaned up

use dash_lru::DashLru;
// Create cache with max total weight of 1000
let cache = DashLru::weighted_lru(1000);
// Insert entries with different weights
cache.insert("light_data", vec![1, 2, 3], Some(10), None); // Weight: 10
cache.insert("heavy_data", vec![1; 1000], Some(100), None); // Weight: 100
// Heavy entries are more likely to be evicted when limits are exceeded

use dash_lru::{DashLru, DashLruConfig, EvictionPolicy};
use std::time::Duration;
let config = DashLruConfig {
    max_size: Some(10000), // Max 10k entries
    max_weight: Some(1_000_000), // Max 1MB total weight
    default_ttl: Some(Duration::from_secs(3600)), // 1 hour default TTL
    eviction_policy: EvictionPolicy::Hybrid, // Hybrid eviction strategy
    enable_stats: true, // Enable statistics
    cleanup_interval: Some(Duration::from_secs(60)), // Cleanup every minute
    weight_factor: 2.0, // Weight importance multiplier
    time_decay_factor: 0.1, // Time decay for hybrid policy
};
let cache: DashLru<String, Vec<u8>> = DashLru::new(config);

Standard LRU eviction removes the least recently accessed entries first.
let cache = DashLru::lru(100);

LFU removes entries with the lowest access count first.
use dash_lru::{DashLru, DashLruConfig, EvictionPolicy};
let config = DashLruConfig {
    max_size: Some(100),
    eviction_policy: EvictionPolicy::Lfu,
    ..Default::default()
};
let cache = DashLru::new(config);

Weighted LRU considers both access recency and entry weight, favoring eviction of heavier entries that haven't been accessed recently.
let cache = DashLru::weighted_lru(1000); // Max weight: 1000

The Hybrid policy combines access time, frequency, and weight in a scoring function to choose eviction candidates.
use dash_lru::{DashLru, DashLruConfig, EvictionPolicy};
let config = DashLruConfig {
    eviction_policy: EvictionPolicy::Hybrid,
    time_decay_factor: 0.1, // How much recent access matters
    weight_factor: 1.5, // How much weight matters
    ..Default::default()
};
let cache = DashLru::new(config);

DashLru provides comprehensive statistics to help you monitor cache performance:
let cache = DashLru::lru(1000);
// Perform some operations...
cache.insert("key1", "value1", None, None);
cache.get(&"key1"); // hit
cache.get(&"key2"); // miss
// Get detailed statistics
if let Some(stats) = cache.stats() {
println!("Cache Statistics:");
println!(" Hits: {}", stats.hits);
println!(" Misses: {}", stats.misses);
println!(" Hit Rate: {:.2}%", stats.hit_rate() * 100.0);
println!(" Current Size: {}", stats.current_size);
println!(" Current Weight: {}", stats.current_weight);
println!(" Total Evicted: {}", stats.eviction_stats.total_evicted);
println!(" TTL Evicted: {}", stats.eviction_stats.ttl_evicted);
}Expired entries are automatically cleaned up at configurable intervals:
use dash_lru::{DashLru, DashLruConfig};
use std::time::Duration;
let config = DashLruConfig {
    cleanup_interval: Some(Duration::from_secs(30)), // Clean every 30 seconds
    ..Default::default()
};
let cache = DashLru::new(config);

You can also manually trigger cleanup of expired entries:
let cache = DashLru::with_ttl(1000, Duration::from_secs(60));
// ... time passes ...
let removed_count = cache.cleanup_expired();
println!("Removed {} expired entries", removed_count);Track memory usage with weight-based limits:
let cache = DashLru::weighted_lru(1000);
cache.insert("data1", vec![0u8; 400], Some(400), None);
cache.insert("data2", vec![0u8; 300], Some(300), None);
println!("Current weight: {}/1000", cache.current_weight());DashLru is fully thread-safe and designed for high-concurrency environments:
use std::sync::Arc;
use std::thread;
let cache = Arc::new(DashLru::lru(1000));
// Spawn multiple threads using the cache
let handles: Vec<_> = (0..10).map(|i| {
let cache = Arc::clone(&cache);
thread::spawn(move || {
cache.insert(format!("key{}", i), format!("value{}", i), None, None);
let value = cache.get(&format!("key{}", i));
println!("Thread {}: {:?}", i, value);
})
}).collect();
// Wait for all threads to complete
for handle in handles {
    handle.join().unwrap();
}

DashLru is designed for high performance in concurrent environments. Here are some key characteristics; a rough concurrent-read sketch follows the list:
- Lock-free reads: Built on DashMap's concurrent hash table
- Minimal allocation: Reuses internal data structures where possible
- Efficient eviction: O(1) eviction candidate selection for most policies
- Batch operations: Automatic cleanup reduces per-operation overhead
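As an informal way to exercise these properties, the sketch below pre-populates a cache and then reads it from several threads at once. It uses only the DashLru::lru, insert, and get calls shown earlier; the thread count, key range, and iteration counts are arbitrary choices, so treat any timing it prints as a rough, hardware-dependent figure rather than a benchmark result.

use dash_lru::DashLru;
use std::sync::Arc;
use std::thread;
use std::time::Instant;

// Pre-populate the cache, then read it concurrently from several threads.
let cache = Arc::new(DashLru::lru(16_384));
for i in 0..10_000u32 {
    cache.insert(i, i * 2, None, None);
}

let start = Instant::now();
let readers: Vec<_> = (0..8u32).map(|t| {
    let cache = Arc::clone(&cache);
    thread::spawn(move || {
        let mut hits = 0u64;
        for i in 0..100_000u32 {
            // All inserted keys fit under the size limit, so these reads should hit.
            if cache.get(&((i + t) % 10_000)).is_some() {
                hits += 1;
            }
        }
        hits
    })
}).collect();

let total_hits: u64 = readers.into_iter().map(|h| h.join().unwrap()).sum();
println!("8 threads x 100k reads: {} hits in {:?}", total_hits, start.elapsed());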
DashLru is ideal for:
- Web application caching: Session data, API responses, computed results
- Database query caching: Expensive query results with TTL
- Image/asset caching: Large objects with weight-based eviction
- Rate limiting: TTL-based token buckets
- Memoization: Function result caching with LFU eviction (see the sketch after this list)
- CDN edge caching: Multi-tier caching with hybrid policies
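As one concrete sketch of the memoization use case, the example below wraps a deliberately slow function with an LFU-configured cache. The expensive_fib and memo_fib helpers are hypothetical illustrations; only the DashLruConfig fields and the insert/get calls come from the API shown earlier.

use dash_lru::{DashLru, DashLruConfig, EvictionPolicy};

// Hypothetical stand-in for an expensive, pure computation.
fn expensive_fib(n: u64) -> u64 {
    if n < 2 { n } else { expensive_fib(n - 1) + expensive_fib(n - 2) }
}

// Memoizing wrapper: return a cached result if present, otherwise compute and store it.
fn memo_fib(cache: &DashLru<u64, u64>, n: u64) -> u64 {
    if let Some(v) = cache.get(&n) {
        return v;
    }
    let v = expensive_fib(n);
    cache.insert(n, v, None, None);
    v
}

let config = DashLruConfig {
    max_size: Some(1_000),
    eviction_policy: EvictionPolicy::Lfu, // keep the most frequently requested results
    ..Default::default()
};
let cache: DashLru<u64, u64> = DashLru::new(config);

assert_eq!(memo_fib(&cache, 30), 832_040); // computed once
assert_eq!(memo_fib(&cache, 30), 832_040); // served from the cache on repeat calls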
Core methods:
- insert(key, value, weight, ttl) - Insert or update an entry
- get(key) - Retrieve a value (updates access order)
- remove(key) - Remove an entry
- contains_key(key) - Check existence without updating access order
- clear() - Remove all entries
- len() - Current number of entries
- is_empty() - Check if cache is empty
- current_weight() - Current total weight
- stats() - Get performance statistics
- cleanup_expired() - Manually remove expired entries
Constructors:
- DashLru::lru(max_size) - Standard LRU cache
- DashLru::weighted_lru(max_weight) - Weight-based cache
- DashLru::with_ttl(max_size, ttl) - TTL-enabled cache
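A quick tour of the core methods listed above, assuming remove and contains_key take the key by reference the same way get does (the exact signatures may differ):

use dash_lru::DashLru;

let cache = DashLru::lru(100);
cache.insert("a", 1, None, None);
cache.insert("b", 2, None, None);

// Presence check that does not update the LRU order, per the list above.
assert!(cache.contains_key(&"a"));
assert_eq!(cache.len(), 2);

// Remove a single entry, then empty the cache entirely.
cache.remove(&"b");
assert_eq!(cache.len(), 1);
cache.clear();
assert!(cache.is_empty());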
This project is licensed under the MIT License - see the LICENSE file for details.