7. Application Performance Tuning (Rosetta Version 1.2.5 or later)
This guide provides details on how to tune cardano-rosetta-java for various workloads and resource constraints. It covers:
- Pruning (Disk Usage Optimization)
- Database Pool Settings (HikariCP)
- Tomcat Thread Configuration
- Example .env Settings
Pruning (Disk Usage Optimization)

Pruning removes spent (consumed) UTXOs from local storage, keeping only unspent UTXOs. This can reduce on-disk storage from ~1TB down to ~400GB, but discards historical transaction data.
- Only unspent outputs are preserved.
- You can still validate the chain’s current state (and spend tokens), since active UTXOs remain.
- Enable Pruning: set `PRUNING_ENABLED=true` in your environment (e.g., in `.env.dockerfile` or `.env.docker-compose`).
- Disable Pruning (default): set `PRUNING_ENABLED=false`.
When to enable pruning:

- Low Disk Environments: enable pruning if you need to minimize disk usage and only require UTXO data for current balances (see the sketch after this list).
- Exploratory / Dev Environments: enable pruning if historical queries are not critical.
- Full Historical Data Requirements: if you need the complete transaction history, whether for exchange operations, audit trails, or compliance mandates, do not enable pruning. Pruning discards spent UTXOs, which removes older transaction data and prevents certain types of historical lookups or reporting.
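For a low-disk deployment, the toggle is a one-line change. A minimal sketch of the relevant `.env` entry (the file name and comment are illustrative):

```
# Hypothetical low-disk profile (.env.docker-compose)
PRUNING_ENABLED=true  # keep only unspent UTXOs: ~400GB instead of ~1TB
```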
Database Pool Settings (HikariCP)

cardano-rosetta-java uses HikariCP as the JDBC connection pool. Tuning these values can help manage concurrency and performance.
| Configuration ID | Memory (GB) | CPU Cores | Storage Type | Virtualized | Min Pool Size | Max Pool Size | Max Lifetime (ms) | Connection Timeout (ms) | Keepalive Time (ms) | Leak Detection Threshold (ms) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | (24+4)=28 | 2 | SSD | Yes | 5 | 10 | 300000 | 30000 | 60000 | 60000 |
| 2 | (24+4)=28 | 4 | SSD | Yes | 5 | 15 | 300000 | 30000 | 60000 | 60000 |
| 3 | (24+8)=32 | 4 | NVMe | No | 15 | 40 | 600000 | 30000 | 60000 | 60000 |
| 4 | (24+16)=40 | 8 | SSD | Yes | 20 | 60 | 600000 | 30000 | 60000 | 60000 |
| 5 | (24+16)=40 | 8 | NVMe | Yes | 30 | 80 | 600000 | 30000 | 60000 | 60000 |
| 6 | (24+16)=40 | 16 | SSD | No | 40 | 120 | 600000 | 30000 | 60000 | 60000 |
| 7 | (24+16)=40 | 16 | NVMe | No | 50 | 150 | 600000 | 30000 | 60000 | 60000 |
| 8 | (24+32)=56 | 16 | NVMe | No | 60 | 200 | 600000 | 30000 | 60000 | 60000 |
| 9 | (24+40)=64 | 16 | NVMe | No | 60 | 200 | 600000 | 30000 | 60000 | 60000 |
- Min Pool Size: corresponds to `API_DB_MIN_POOL_SIZE`
- Max Pool Size: corresponds to `API_DB_MAX_POOL_SIZE`
- Max Lifetime (ms): corresponds to `API_DB_MAX_LIFETIME_MS`
- Connection Timeout (ms): corresponds to `API_DB_CONNECTION_TIMEOUT_MS`
- Keepalive Time (ms): corresponds to `API_DB_KEEP_ALIVE_MS`
- Leak Detection Threshold (ms): corresponds to `API_DB_LEAK_CONNECTIONS_WARNING_MS`
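To make the mapping concrete, below is a minimal Java sketch of how such variables typically wire into HikariCP. This is illustrative, not the project's actual configuration code; the JDBC URL is a placeholder, and the fallback values are taken from Configuration 3 in the table above.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfigSketch {

    // Read an environment variable, falling back to a default
    // (defaults here mirror Configuration 3 in the table above).
    private static String env(String name, String fallback) {
        String value = System.getenv(name);
        return value != null ? value : fallback;
    }

    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        // Placeholder URL; the real deployment points at the indexer database.
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/rosetta");
        config.setMinimumIdle(Integer.parseInt(env("API_DB_MIN_POOL_SIZE", "15")));
        config.setMaximumPoolSize(Integer.parseInt(env("API_DB_MAX_POOL_SIZE", "40")));
        config.setMaxLifetime(Long.parseLong(env("API_DB_MAX_LIFETIME_MS", "600000")));
        config.setConnectionTimeout(Long.parseLong(env("API_DB_CONNECTION_TIMEOUT_MS", "30000")));
        config.setKeepaliveTime(Long.parseLong(env("API_DB_KEEP_ALIVE_MS", "60000")));
        config.setLeakDetectionThreshold(Long.parseLong(env("API_DB_LEAK_CONNECTIONS_WARNING_MS", "60000")));
        return new HikariDataSource(config);
    }
}
```

Note that HikariCP requires the keepalive time to be shorter than the max lifetime, which the table's values respect.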
Key factors behind these recommendations:

- Virtualized vs. Non-Virtualized: virtualization can add I/O overhead, which affects database connection pool sizing. Non-virtualized systems typically have more predictable and better I/O performance, allowing larger connection pools.
- SSD vs. NVMe: NVMe storage provides faster read/write speeds than SSD, resulting in lower latency and higher throughput. Systems with NVMe can handle more connections and perform better on I/O-bound operations.
- CPU and RAM: more cores and memory allow a larger connection pool that can handle more concurrent operations and sustain higher throughput.
- `API_DB_POOL_MIN_COUNT` and `API_DB_POOL_MAX_COUNT`: these values are adjusted based on available resources, considering both CPU cores and available memory. The more cores and RAM, the larger the pool can be, especially in non-virtualized systems or systems with faster storage (NVMe).
- `API_DB_POOL_MAX_LIFETIME_MS`: connections are retained in the pool for a longer period on high-performance systems with large connection pools, improving overall efficiency.
- `API_DB_POOL_CONNECTION_TIMEOUT_MS`: this is the time the system will wait before timing out when requesting a connection from the pool. On systems with faster storage and more resources, this can be set higher for increased throughput.
This table should give you a clear view of how to adjust the database connection pool for different hardware configurations, including the impact of storage type and virtualization.
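For example, a non-virtualized host matching Configuration 3 (32GB RAM, 4 CPU cores, NVMe) would translate that table row into environment variables along these lines; the variable names follow the list above, so verify them against your `.env` template:

```
# Hypothetical .env values for Configuration 3 (non-virtualized, NVMe)
API_DB_POOL_MIN_COUNT=15
API_DB_POOL_MAX_COUNT=40
API_DB_POOL_MAX_LIFETIME_MS=600000
API_DB_POOL_CONNECTION_TIMEOUT_MS=30000
```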
Example .env Settings

Below is a snippet showing how you might configure `.env.dockerfile` or `.env.docker-compose` for higher throughput:

```
# --- Pruning Toggle ---
PRUNING_ENABLED=false
# Keep full history; requires ~1TB of disk space

# --- HikariCP Database Pool ---
DB_POOL_MIN_COUNT=20
DB_POOL_MAX_COUNT=100
```

Caution: setting high values on low hardware specs may slow down operations; refer to the table above.
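Assuming the standard docker compose deployment of cardano-rosetta-java, the new values take effect on the next start; the env file name below is illustrative and should match your setup:

```
docker compose --env-file .env.docker-compose up -d
```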
- Rosetta API Reference
- Yaci-Store Repository
- Spring Boot Docs (for more advanced server and DB config)