A robust, self-hosted media streaming platform with automatic transcoding and adaptive bitrate delivery.
This project delivers a powerful, self-hosted media streaming solution, designed for seamless content delivery. It intelligently handles automatic transcoding of various media formats and provides adaptive bitrate delivery (HLS), ensuring a smooth and high-quality viewing experience across diverse devices and network conditions.
- Automatic Transcoding: Media files are automatically transcoded into multiple formats and resolutions, optimizing them for streaming.
- Adaptive Bitrate (ABR) Delivery: Uses HLS (HTTP Live Streaming) to adjust video quality dynamically to the viewer's network speed, minimizing buffering while delivering the highest quality the connection can sustain.
- Scalable Architecture: Built with a microservices approach, leveraging message queues for efficient processing and scalability.
- Object Storage Integration: Stores transcoded media files in a high-performance, S3-compatible object storage solution.
- Modern User Interface: A responsive, intuitive web interface for managing and browsing your media library.
The platform follows a distributed microservices architecture:
- Ingestion Service (Go/Rust): Handles uploading and initial processing of media files. Publishes events to Kafka.
- Transcoding Service (Go/Rust with FFmpeg): Consumes events from Kafka, performs transcoding using FFmpeg, and stores transcoded segments in MinIO. Publishes completion events to Kafka.
- API Gateway/Streaming Service (Go): Provides APIs for the frontend and serves HLS manifests and segments directly from MinIO.
- Frontend (Next.js): Communicates with the API Gateway to display media, manage uploads, and initiate playback.
- Database Layer (PostgreSQL & Redis): PostgreSQL stores persistent data, while Redis handles caching and real-time data.
- Message Queues (Kafka & RabbitMQ): Facilitate communication and task distribution between services, ensuring high availability and scalability.
To get a local copy of this streaming platform up and running, follow these simple steps.
- Docker and Docker Compose (required to run all services)
- Rust toolchain (`cargo`), if you build the vcodec service outside Docker
- Clone the project repository and change into it:

```sh
git clone <repository-url>
cd codek7-backend
```

- Build the vcodec service:

```sh
cd vcodec
cargo build
```
- Start all services using Docker Compose:

```sh
docker compose up
```

This command will build (if necessary) and start all the services defined in the `docker-compose.yml` file, including Kafka, RabbitMQ, PostgreSQL, Redis, MinIO, your custom Go/Rust backend services, and the Next.js frontend application.
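The authoritative service list lives in the repository's `docker-compose.yml`. For orientation only, such a file typically looks something like the sketch below; the service names, images, and paths here are illustrative assumptions, not the project's actual configuration:

```yaml
services:
  kafka:
    image: bitnami/kafka:latest
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
  redis:
    image: redis:7
  minio:
    image: minio/minio
    command: server /data
  vcodec:
    build: ./vcodec        # the Rust transcoding service
    depends_on: [kafka, minio]
  frontend:
    build: ./frontend      # the Next.js app
    ports:
      - "3000:3000"
```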
- Once all services are up and running, you can access the frontend application in your web browser. Typically it will be available at `http://localhost:3000`, but check your `docker-compose.yml` or frontend configuration for the exact port.
- Upload your media files through the user interface.
- The platform will automatically transcode and prepare your media for adaptive bitrate streaming.
- Browse your media library and enjoy a smooth viewing experience.
Below are various diagrams illustrating the system's architecture, data flow, and video processing states.
This diagram provides a high-level overview of the main components and their interactions.
This diagram details the sequence of operations from video upload to final availability, including transcoding and NSFW checking.
This sequence diagram illustrates the communication flow between different services during a video upload and processing cycle.
This entity-relationship diagram (ERD) showcases the database schema and relationships between different entities in the system.
- User authentication and authorization
- Advanced media library management (categories, tags)
- Search and filtering capabilities
- Playback history and watch progress tracking
- API for external applications
- Multi-user support with custom profiles
- Live streaming capabilities
Distributed under the MIT License. See `LICENSE` for more information.