docs: add comprehensive hardware requirements documentation for CSPs #780
base: master
Conversation
Add detailed hardware requirements documentation targeting Cloud Service Providers and infrastructure architects planning bud-stack deployments.

Key features:
- Aggregate infrastructure requirements (CPU, memory, storage, network)
- Three deployment profiles: Dev/Test, Staging, Production
- Cloud-specific configurations for Azure AKS, AWS EKS, and on-premises
- Node pool breakdown for production with specialized workload separation
- Storage performance requirements with IOPS and latency specifications
- Network bandwidth and latency requirements
- High availability and disaster recovery guidance
- Cost estimates and optimization strategies
- Quick reference sizing cheat sheet

Focus on CSP-level infrastructure planning without service-specific details.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Summary of Changes

Hello @dittops, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces new, comprehensive documentation detailing the hardware requirements for deploying the Bud-Stack platform. The document is tailored for Cloud Service Providers and infrastructure architects, offering clear guidelines for capacity planning across various deployment scales, from development to large-scale production, and includes cloud-specific considerations and operational best practices.

Highlights
Code Review
This pull request introduces a comprehensive hardware requirements document, which is a valuable addition for users planning deployments. The document is well-structured and detailed. However, my review identified several critical inconsistencies and calculation errors in the resource specifications. The production requirements summary does not align with the detailed breakdown, and some of the 'typical configuration' examples are misleading as they don't match the stated requirements. There are also minor inconsistencies in software version numbers. I've provided specific comments and suggestions to address these issues to ensure the document is accurate and clear for capacity planning.
| Resource | Requirement |
|----------|-------------|
| **CPU Cores** | 120-200 cores |
| **Memory (RAM)** | 250-500 GiB |
| **Storage (SSD)** | 2-5 TiB |
| **Network Bandwidth** | 10-40 Gbps |
| **Operating System** | Linux (Ubuntu 22.04+, RHEL 8+, or OpenShift 4.12+) |
| **Kubernetes** | Version 1.29+ |
The resource requirements in this summary table are inconsistent with the totals derived from the 'Detailed Production Architecture' section below. For example, you specify 120-200 CPU cores here, but the detailed breakdown sums to 168-304 vCPU. Similar discrepancies exist for RAM and Storage. To avoid confusion for capacity planning, this summary table should be updated to accurately reflect the totals from the detailed breakdown (after correcting the calculation errors in that section).
docs/HARDWARE_REQUIREMENTS.md
Outdated
| **Gateway** | API gateway, ingress | 8 vCPU, 16GB RAM, 100GB SSD | 2-3 | 16-24 vCPU, 32-48GB RAM |

**Total Production Resources**: 168-304 vCPU, 480-848GB RAM, 3-6TB storage
The total storage calculation for production resources appears incorrect. Based on the 'Node Pool Breakdown' table, the storage range should be 5.7-9.8 TB, not 3-6 TB.
Calculation:
- Min: (3 * 500GB) + (5 * 200GB) + (3 * 1TB) + (2 * 100GB) = 5.7 TB
- Max: (5 * 500GB) + (10 * 200GB) + (5 * 1TB) + (3 * 100GB) = 9.8 TB
Please update this line to reflect the correct total.
Suggested change:
**Total Production Resources**: 168-304 vCPU, 480-848GB RAM, 3-6TB storage
**Total Production Resources**: 168-304 vCPU, 480-848GB RAM, 5.7-9.8TB storage
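For anyone double-checking the arithmetic, here is a minimal sketch that recomputes the storage range from the node counts and per-node disk sizes cited in this comment (the script and its pool names are illustrative, not part of the PR):

```python
# Recompute the production storage range from the node pool figures
# quoted above (counts and per-node disk sizes in GB are taken from
# the review comment, not from the full table in the PR).
node_pools = {
    "control_plane": {"nodes": (3, 5), "disk_gb": 500},
    "application":   {"nodes": (5, 10), "disk_gb": 200},
    "data":          {"nodes": (3, 5), "disk_gb": 1000},
    "gateway":       {"nodes": (2, 3), "disk_gb": 100},
}

min_tb = sum(p["nodes"][0] * p["disk_gb"] for p in node_pools.values()) / 1000
max_tb = sum(p["nodes"][1] * p["disk_gb"] for p in node_pools.values()) / 1000
print(f"Total production storage: {min_tb}-{max_tb} TB")  # 5.7-9.8 TB
```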
docs/HARDWARE_REQUIREMENTS.md
Outdated
| **Operating System** | Linux (Ubuntu 22.04+, RHEL 8+, or OpenShift 4.12+) |
| **Kubernetes** | Version 1.29+ |

**Typical Configuration**: 3 nodes × (8 vCPU, 16GB RAM, 100GB SSD)
The 'Typical Configuration' does not meet the minimum requirements listed in the table above.
- CPU: 3 nodes × 8 vCPU = 24 vCPU, which is less than the required 32 cores.
- RAM: 3 nodes × 16GB RAM = 48GB RAM, which is less than the required 64 GiB.
This is misleading for users setting up a development environment. Please adjust the typical configuration to meet or exceed the minimums. For example, you could use 4 nodes.
Suggested change:
**Typical Configuration**: 3 nodes × (8 vCPU, 16GB RAM, 100GB SSD)
**Typical Configuration**: 4 nodes × (8 vCPU, 16GB RAM, 100GB SSD)
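A quick sanity check on the node count, using the 32-core / 64 GiB dev/test minimums cited in this comment and the per-node size from the quoted line (the helper below is only a sketch):

```python
# Check whether N nodes of the quoted size meet the dev/test minimums
# (32 cores / 64 GiB per the comment above; values are assumptions).
MIN_VCPU, MIN_RAM_GIB = 32, 64
NODE_VCPU, NODE_RAM_GIB = 8, 16

def meets_minimum(nodes: int) -> bool:
    return nodes * NODE_VCPU >= MIN_VCPU and nodes * NODE_RAM_GIB >= MIN_RAM_GIB

print(meets_minimum(3))  # False: 24 vCPU / 48 GiB
print(meets_minimum(4))  # True: 32 vCPU / 64 GiB
```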
docs/HARDWARE_REQUIREMENTS.md
Outdated
| **Memory (RAM)** | 80-120 GiB |
| **Storage (SSD)** | 500-1,000 GiB |
| **Network Bandwidth** | 5-10 Gbps |
| **Operating System** | Linux (Ubuntu 20.04+, RHEL 8+, or OpenShift 4.12+) |
The Ubuntu version for 'Recommended Requirements' is listed as 20.04+, which is older than the 22.04+ requirement for 'Minimum' and 'Production' environments. For consistency, please update this to 22.04+.
Suggested change:
| **Operating System** | Linux (Ubuntu 20.04+, RHEL 8+, or OpenShift 4.12+) |
| **Operating System** | Linux (Ubuntu 22.04+, RHEL 8+, or OpenShift 4.12+) |
docs/HARDWARE_REQUIREMENTS.md
Outdated
| **Storage (SSD)** | 500-1,000 GiB |
| **Network Bandwidth** | 5-10 Gbps |
| **Operating System** | Linux (Ubuntu 20.04+, RHEL 8+, or OpenShift 4.12+) |
| **Kubernetes** | Version 1.25+ |
The Kubernetes version for 'Recommended Requirements' is listed as 1.25+, which is inconsistent with the 1.29+ requirement for 'Minimum' and 'Production' environments. To ensure consistency across the document, please update this to 1.29+.
Suggested change:
| **Kubernetes** | Version 1.25+ |
| **Kubernetes** | Version 1.29+ |
| **Backups** | - | 500 GiB-1 TiB | Standard/Archive |

**Total Storage**:
- **Minimum**: 256 GiB
docs/HARDWARE_REQUIREMENTS.md
Outdated
| **Operating System** | Linux (Ubuntu 20.04+, RHEL 8+, or OpenShift 4.12+) |
| **Kubernetes** | Version 1.25+ |

**Typical Configuration**: 5-7 nodes × (16 vCPU, 32GB RAM, 200GB SSD)
The upper bound of the 'Typical Configuration' (7 nodes) significantly exceeds the ranges specified in the 'Recommended Requirements' table. For instance, a 7-node cluster provides 112 vCPU (vs. 60-80 recommended) and 1.4 TB storage (vs. 500-1000 GiB recommended). This is confusing. Please revise the typical configuration to align better with the recommended ranges.
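To make the gap concrete, a rough sketch of the totals for 5-7 nodes of the quoted size versus the recommended ranges cited above (60-80 vCPU, 500-1,000 GiB storage; the script is illustrative only):

```python
# Totals for the upper end of the 'Typical Configuration' compared with
# the recommended ranges cited in this comment (60-80 vCPU, 500-1,000 GiB).
PER_NODE = {"vcpu": 16, "ram_gb": 32, "storage_gb": 200}

for nodes in (5, 6, 7):
    totals = {k: nodes * v for k, v in PER_NODE.items()}
    print(f"{nodes} nodes -> {totals}")
# 7 nodes -> 112 vCPU, 224 GB RAM, 1400 GB storage, well beyond the
# recommended 60-80 vCPU and 500-1,000 GiB storage.
```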
💡 Codex Review
Here are some automated review suggestions for this pull request.
docs/HARDWARE_REQUIREMENTS.md
Outdated
| **Gateway** | API gateway, ingress | 8 vCPU, 16GB RAM, 100GB SSD | 2-3 | 16-24 vCPU, 32-48GB RAM |

**Total Production Resources**: 168-304 vCPU, 480-848GB RAM, 3-6TB storage
Fix production storage totals to match node pool sums
The totals listed after the node pool table claim 3-6TB of storage, but summing the node specs just above gives roughly 5.7-9.8TB (e.g., 3–5 × 500GB control plane + 5–10 × 200GB application + 3–5 × 1TB data + 2–3 × 100GB gateway). The incorrect figure would understate disk requirements by more than 40%, which can lead infrastructure planners to severely under‑provision storage for production deployments.
Summary
Add comprehensive hardware requirements documentation targeting Cloud Service Providers (CSPs) and infrastructure architects planning bud-stack deployments.
This documentation provides aggregate infrastructure requirements without service-specific microservice details, focusing on what CSPs need for capacity planning.
Key Features
✅ Infrastructure Requirements Summary
✅ Cloud-Specific Configurations
✅ Production Architecture
✅ Storage & Network Requirements
✅ Operational Guidance
Changes
- `docs/HARDWARE_REQUIREMENTS.md` with CSP-focused infrastructure requirements

Target Audience
Test Plan
🤖 Generated with Claude Code