
Raspberry Pi 5 NAS Backup System

🎯 Overview

High-performance automated backup solution for Pi 5 NAS with SnapRAID protection, Google Drive integration, and intelligent storage management.

πŸ—οΈ Hardware Architecture

  • Platform: Raspberry Pi 5 (8GB RAM)
  • Storage Controller: Radxa Penta SATA HAT (5-bay)
  • Primary Storage: 4x 2TB SATA SSDs in SnapRAID array
  • Boot Drive: NVMe SSD via USB 3.0 adapter
  • Network: Gigabit Ethernet connection

📊 Storage Configuration

SnapRAID Array Layout

Drive Assignment Example:

├── sda (2TB) → Data Drive 1 (Movies/TV Shows)
├── sdb (2TB) → Data Drive 2 (Music/Audio)
├── sdc (2TB) → Data Drive 3 (Photos/Documents) - **100% FULL in this example**
├── sdd (2TB) → Data Drive 4 (General Storage) - **Used for backups (1.8T free)**
└── sde (2TB) → Parity Drive (Protection)

πŸ“ Note: In this configuration, sdc is 100% full, so the space management system automatically routes backups to sdd which has available space. Adapt drive assignments to match your actual hardware and usage patterns.

Total Capacity (example layout): 8TB usable + 2TB parity protection


Mount Points

/srv/dev-disk-by-uuid-aaaaaaaa-bbbb-cccc-dddd-111111111111  # Movies
/srv/dev-disk-by-uuid-bbbbbbbb-cccc-dddd-eeee-222222222222  # Music  
/srv/dev-disk-by-uuid-cccccccc-dddd-eeee-ffff-333333333333  # Photos
/srv/dev-disk-by-uuid-dddddddd-eeee-ffff-aaaa-444444444444  # Parity
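
The UUIDs behind these mount points can be listed with lsblk -f or blkid. As a reference for how the layout above maps onto SnapRAID, here is a minimal snapraid.conf sketch; the parity and content-file paths are illustrative assumptions, not the repository's actual configuration:

# /etc/snapraid.conf - minimal sketch (paths are assumptions, adapt to your mounts)
parity /srv/dev-disk-by-uuid-dddddddd-eeee-ffff-aaaa-444444444444/snapraid.parity

# Keep content files on at least two independent drives
content /var/snapraid/snapraid.content
content /srv/dev-disk-by-uuid-aaaaaaaa-bbbb-cccc-dddd-111111111111/snapraid.content

# Data drives (add one "data" line per data drive in the array)
data d1 /srv/dev-disk-by-uuid-aaaaaaaa-bbbb-cccc-dddd-111111111111/
data d2 /srv/dev-disk-by-uuid-bbbbbbbb-cccc-dddd-eeee-222222222222/
data d3 /srv/dev-disk-by-uuid-cccccccc-dddd-eeee-ffff-333333333333/

# Skip transient files during sync
exclude *.tmp
exclude lost+found/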

🔄 Backup Strategy

Three-Tier Protection

  1. Local Redundancy: SnapRAID parity protection
  2. Cloud Backup: Google Drive sync via rclone
  3. System Backup: Configuration and Docker volumes

Backup Schedule

# Photo Collection Backup
Schedule: Daily at 4:00 AM
Bandwidth: 50 MB/s during off-peak
Method: Incremental sync with verification

# System Configuration Backup  
Schedule: Daily at 4:30 AM
Bandwidth: 10 MB/s
Includes: OMV config, Docker volumes, system info

# SnapRAID Maintenance
Schedule: Weekly on Sunday 3:00 AM
Operation: Scrub 12% of array for integrity
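
A sketch of how this schedule could be wired into cron. It assumes rclone is already configured with a gdrive: remote (as used in the recovery examples below) and that SnapRAID is installed; the exact paths, remote folder names, and log locations are illustrative, and the repository's own backup scripts can be substituted for the raw rclone calls:

# /etc/cron.d/pi-nas-backup - illustrative schedule (cron.d format includes a user field)
# Daily photo backup at 4:00 AM, capped at 50 MB/s
0 4 * * *  root  /usr/bin/rclone sync /srv/dev-disk-by-uuid-cccccccc-dddd-eeee-ffff-333333333333 gdrive:NAS_Backup/PhotoCollection --bwlimit 50M --log-file /var/log/rclone-photos.log

# Daily system configuration backup at 4:30 AM, capped at 10 MB/s
30 4 * * *  root  /usr/bin/rclone sync /etc/openmediavault gdrive:NAS_Backup/SystemConfig --bwlimit 10M --log-file /var/log/rclone-config.log

# Weekly SnapRAID maintenance: sync, then scrub 12% of the array (Sunday 3:00 AM)
0 3 * * 0  root  /usr/bin/snapraid sync && /usr/bin/snapraid scrub -p 12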

🐳 Container Services

Media Stack

  • Plex Media Server: Port 32400
  • Immich Photo Management: Port 2283
  • Portainer Management: Port 9000 (see the Compose sketch after this section)

Storage Services

  • OpenMediaVault: Port 80
  • SnapRAID: Automated parity and scrubbing
  • rclone: Cloud backup synchronization
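
Full Docker Compose files for these services live under docker/ in this repository. As a shape reference only, a minimal Portainer service definition (the image tag and volume name here are assumptions, not the repository's file) looks like this:

# docker-compose.yml fragment (illustrative)
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data: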

βš™οΈ Advanced Features

Intelligent Space Management

The system includes automated space management designed for multi-drive SnapRAID arrays (a drive-selection sketch follows the list below):

Multi-Drive Intelligence:

  • Handles 100% full drives (like sdc in the example layout above)
  • Routes backups to available drives automatically
  • SnapRAID-aware space calculations
  • Google Drive quota integration
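
A minimal sketch of the drive-selection idea behind this routing: check each data mount with df and pick the one with the most free space, skipping anything above the warning threshold. The mount list and variable names are illustrative; this is not the repository's pi-nas-space-manager.sh:

#!/usr/bin/env bash
# Illustrative drive selection - pick the data mount with the most free space
set -euo pipefail

DATA_MOUNTS=(
  /srv/dev-disk-by-uuid-aaaaaaaa-bbbb-cccc-dddd-111111111111
  /srv/dev-disk-by-uuid-bbbbbbbb-cccc-dddd-eeee-222222222222
  /srv/dev-disk-by-uuid-cccccccc-dddd-eeee-ffff-333333333333
)
DRIVE_SPACE_WARNING=85        # skip drives above this usage percentage

best_mount="" ; best_free=0
for mnt in "${DATA_MOUNTS[@]}"; do
    used_pct=$(df --output=pcent "$mnt" | tail -1 | tr -dc '0-9')
    free_kb=$(df --output=avail "$mnt" | tail -1)
    (( used_pct >= DRIVE_SPACE_WARNING )) && continue   # e.g. a 100% full sdc
    if (( free_kb > best_free )); then
        best_free=$free_kb ; best_mount=$mnt
    fi
done

[[ -n "$best_mount" ]] || { echo "ERROR: no data drive has enough free space" >&2; exit 1; }
echo "Routing backup to: $best_mount (${best_free} KiB free)"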

Space Management Commands:

# Pre-backup space validation
/usr/local/bin/configured/raspberry-pi-nas-backup/scripts/backup/pi-nas-space-manager.sh check

# Smart cleanup of old logs and failed backups
/usr/local/bin/configured/raspberry-pi-nas-backup/scripts/backup/pi-nas-space-manager.sh cleanup

# Show comprehensive backup inventory
/usr/local/bin/configured/raspberry-pi-nas-backup/scripts/backup/pi-nas-space-manager.sh inventory

# Monitor Google Drive quota and usage
/usr/local/bin/configured/raspberry-pi-nas-backup/scripts/backup/pi-nas-space-manager.sh quota

Prevents backup failures even when individual drives are 100% full! 🛡️

# Automated cleanup policies
PHOTO_RETENTION_DAYS=365        # Keep photos 1 year locally
MEDIA_CACHE_SIZE=500GB         # Plex transcode cache limit
LOG_RETENTION_DAYS=30          # System log retention
DOCKER_IMAGE_CLEANUP=weekly    # Prune unused images

# Quota management
BACKUP_QUOTA_WARNING=80%       # Warn at 80% cloud storage
DRIVE_SPACE_WARNING=85%        # Warn at 85% drive usage
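
A hedged example of how a cleanup pass might consume these settings; the log directory and the exact find/docker invocations below are assumptions rather than the repository's cleanup routine:

# Illustrative cleanup using the retention settings above
LOG_RETENTION_DAYS=30
LOG_DIR=/var/log/pi-nas-backup          # assumed log location

# Remove logs older than the retention window
find "$LOG_DIR" -type f -name '*.log' -mtime +"$LOG_RETENTION_DAYS" -delete

# Prune unused Docker images older than a week (matches DOCKER_IMAGE_CLEANUP=weekly)
docker image prune -af --filter "until=168h"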

Performance Optimization

  • Network Tuning: Optimized TCP buffers for Gigabit (example settings after this list)
  • Storage Tuning: noatime, optimized ext4 parameters
  • Memory Management: Reduced swappiness, optimized cache
  • Thermal Management: Active cooling with monitoring
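
Illustrative settings for the tuning items above; the specific values are assumptions, not the repository's exact configuration:

# /etc/sysctl.d/99-nas-tuning.conf (apply with: sudo sysctl --system)
vm.swappiness = 10                # reduce swapping on the 8GB Pi 5
net.core.rmem_max = 16777216      # larger TCP receive buffer for Gigabit transfers
net.core.wmem_max = 16777216      # larger TCP send buffer

# /etc/fstab mount options for a data drive (noatime avoids access-time writes on reads)
UUID=aaaaaaaa-bbbb-cccc-dddd-111111111111  /srv/dev-disk-by-uuid-aaaaaaaa-bbbb-cccc-dddd-111111111111  ext4  defaults,noatime  0  2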

πŸ›‘οΈ Data Protection

Multi-Layer Security

  1. Physical: Secure location, controlled access
  2. Network: UFW firewall, SSH key authentication (example rules below)
  3. Storage: SnapRAID parity, encrypted cloud backup
  4. Access: Role-based permissions, audit logging
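
Example UFW rules covering the service ports listed in this document; the LAN subnet 192.168.1.0/24 is an assumption and should be adjusted to your network:

# Illustrative UFW ruleset - adjust the LAN subnet to match your network
sudo ufw default deny incoming
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp      # SSH (key authentication)
sudo ufw allow from 192.168.1.0/24 to any port 80 proto tcp      # OpenMediaVault
sudo ufw allow from 192.168.1.0/24 to any port 32400 proto tcp   # Plex
sudo ufw allow from 192.168.1.0/24 to any port 2283 proto tcp    # Immich
sudo ufw allow from 192.168.1.0/24 to any port 9000 proto tcp    # Portainer
sudo ufw enable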

Recovery Capabilities

# SnapRAID recovery (single drive failure; -d takes the disk name from snapraid.conf)
snapraid fix -d [failed_drive]

# Cloud restore (selective)
rclone copy gdrive:NAS_Backup/PhotoCollection /local/restore/

# System restore (complete)
./disaster-recovery.sh --full-restore

📊 Performance Metrics

Storage Performance

  • Sequential Read: 400-500 MB/s per drive
  • Random I/O: 50,000-70,000 IOPS per drive
  • Array Throughput: 1.2-1.5 GB/s aggregate
  • Parity Calculation: 200-300 MB/s

Network Performance

  • LAN Transfer: 900+ Mbps sustained
  • Plex Streaming: 4K transcoding capable
  • Backup Upload: 50-100 Mbps (configurable)
  • Web Interface: Sub-second response times

🔧 Maintenance Commands

Daily Operations

# System health check
~/scripts/health_check.sh

# Backup status verification
~/scripts/backup_status.sh

# Network performance test
~/scripts/network_test.sh

# Temperature monitoring
vcgencmd measure_temp
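
The scripts above ship with the repository. As a rough sketch of the kind of checks such a health script performs (temperature, throttling, drive usage, array status), the following is illustrative only and not the repository's health_check.sh:

#!/usr/bin/env bash
# Illustrative health check sketch
echo "=== CPU temperature ==="
vcgencmd measure_temp

echo "=== Throttling flags (throttled=0x0 means no throttling) ==="
vcgencmd get_throttled

echo "=== Drive usage ==="
df -h /srv/dev-disk-by-uuid-*

echo "=== SnapRAID status ==="
sudo snapraid status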

Weekly Maintenance

# SnapRAID status and scrub
snapraid status
snapraid scrub -p 12

# Docker cleanup
docker system prune -f

# Log rotation
sudo logrotate -f /etc/logrotate.conf

🚨 Troubleshooting

Common Issues

  1. Drive Not Detected: Check SATA connections, power supply
  2. High Temperature: Verify cooling, check for dust buildup
  3. Network Issues: Test cable, check switch/router
  4. Backup Failures: Verify rclone config, check Google Drive quota

Emergency Contacts

  • Hardware Issues: Check hardware setup documentation
  • Software Issues: Review installation and configuration guides
  • Data Recovery: Follow disaster recovery procedures

📈 Monitoring Dashboard

Key Metrics Tracked

  • CPU temperature and throttling status
  • Drive health and SMART data
  • Network throughput and latency
  • Backup success rates and timing
  • Storage utilization and growth trends
  • Power consumption and efficiency

🔄 Upgrade Path

Future Enhancements

  • Storage Expansion: Additional drive bays available
  • Network Upgrade: 2.5GbE or 10GbE capability
  • Compute Upgrade: Future Pi models compatibility
  • Software Stack: Container orchestration evolution

Last Updated: 2025-07-27 | System Version: Pi 5 NAS v2.0

Repository Structure

│ │ ├── NAS_Backup_Script.sh                # Main backup automation
│ │ └── backup-script-final.sh              # Enhanced backup script
│ └── installation/
│     ├── Raspberry_Pi_5_NAS_Automated_Installation_Script.sh
│     └── Immich_Environment_Configuration.sh
├── docs/
│   ├── hardware-setup-doc.md               # Hardware assembly guide
│   ├── software-installation-doc.md        # Software setup guide
│   ├── nvme-cloning-doc.md                 # NVMe setup and cloning
│   └── Complete Pi 5 NAS Setup - README.md.pdf
└── docker/
    ├── plex/
    │   └── Plex_Docker_Compose_Configuration
    ├── immich/
    │   ├── Immich_Docker_Compose
