Configuration Management
Directory Structure
The Tabi node configuration is stored in $HOME/.tabi/config/:
Plain
$HOME/.tabi/config/
├── app.toml                 # Application configuration (gas fees, API settings, pruning, etc.)
├── client.toml              # CLI and client-related settings
├── config.toml              # Core Tendermint settings (network, consensus, and RPC)
├── genesis.json             # Chain genesis file; defines initial state
├── node_key.json            # Unique node identity key for peer-to-peer (P2P) networking
└── priv_validator_key.json  # Validator private signing key (if running as a validator)

Essential Configuration Parameters
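Before editing any parameters, a short script can confirm that the expected files are present. This is a minimal sketch, assuming the default $HOME/.tabi/config path shown above:

```python
from pathlib import Path

# Files expected in the Tabi config directory (per the layout above).
EXPECTED = [
    "app.toml",
    "client.toml",
    "config.toml",
    "genesis.json",
    "node_key.json",
    "priv_validator_key.json",  # only present on validator nodes
]

def missing_files(config_dir: str) -> list[str]:
    """Return the expected config files that are absent from config_dir."""
    base = Path(config_dir).expanduser()
    return [name for name in EXPECTED if not (base / name).exists()]

if __name__ == "__main__":
    for name in missing_files("~/.tabi/config"):
        print(f"missing: {name}")
```

Note that priv_validator_key.json is only expected on validator nodes, so a "missing" report for it is harmless on a plain full node.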
Network Settings (config.toml)
TOML
[p2p]
# Public IP for other nodes to reach you
external_address = "your-public-ip:26656"
# Local address to listen for incoming P2P connections
laddr = "tcp://0.0.0.0:26656"
# Number of peers allowed
max-num-inbound-peers = 40
max-num-outbound-peers = 20
# Network bandwidth limits to prevent congestion
send-rate = 20480000 # 20MB/s
recv-rate = 20480000 # 20MB/s
[rpc]
# RPC listen address
laddr = "tcp://0.0.0.0:26657"
# Maximum number of simultaneous connections
max-open-connections = 900
# Transaction confirmation timeout
timeout-broadcast-tx-commit = "10s"

Application Settings (app.toml)
TOML
# Minimum gas prices (to prevent spam transactions)
minimum-gas-prices = "0.01utabi"
[api]
# Enable the API server
enable = true
max-open-connections = 1000
[state-commit]
# Use TabiDB for improved performance
sc-enable = true
[state-store]
# Enable state store for historical queries
ss-enable = true
# Retain 100,000 blocks for queryability
# 0 = "keep all"
ss-keep-recent = 100000

Database Management
Database Types
Tabi supports two database backends:
TabiDB (Recommended)
- Optimized for performance and sync times
- Reduces resource usage
- Best for all nodes
Legacy IAVL DB
- Standard Cosmos SDK database
- More widely tested
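If you need to fall back to the legacy IAVL DB, the natural approach is to leave TabiDB disabled in app.toml. The keys below mirror the [state-commit]/[state-store] sections documented on this page; that disabling both flags reverts to the standard Cosmos SDK backend is an assumption, so confirm against the release notes for your node version:

```toml
# Hypothetical fallback sketch: with both flags false, the node is assumed
# to use the standard Cosmos SDK IAVL backend instead of TabiDB.
[state-commit]
sc-enable = false

[state-store]
ss-enable = false
```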
TabiDB Configuration
TOML
[state-commit]
sc-enable = true
sc-async-commit-buffer = 100
sc-keep-recent = 1 # Keep only the most recent state for performance
sc-snapshot-interval = 10000 # Take state snapshots every 10,000 blocks
[state-store]
ss-enable = true
ss-backend = "pebbledb" # Default, required
ss-async-write-buffer = 100
ss-keep-recent = 100000 # Keep last 100,000 blocks
ss-prune-interval = 600 # Cleanup interval for pruning

Setting a very small (more frequent) pruning interval can cause issues with automated snapshotting if the two operations collide. Setting a very large (less frequent) interval means each pruning pass takes longer, which can cause missed blocks and excessive resync time.
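A few lines of arithmetic help make these trade-offs concrete. The ~2 s average block time below is an assumption for illustration (check your network's explorer for the real value), as is treating ss-prune-interval as seconds:

```python
# Rough sizing for the retention/pruning settings above.
# Assumptions (verify for your network): ~2 s average block time,
# and ss-prune-interval measured in seconds.
BLOCK_TIME_S = 2.0
SS_KEEP_RECENT = 100_000       # blocks retained for historical queries
SS_PRUNE_INTERVAL_S = 600      # pruning cadence
SC_SNAPSHOT_INTERVAL = 10_000  # blocks between state snapshots

# How far back, in hours, historical queries can reach.
retention_hours = SS_KEEP_RECENT * BLOCK_TIME_S / 3600

# How many blocks accumulate (and must later be pruned) between pruning runs.
blocks_per_prune = SS_PRUNE_INTERVAL_S / BLOCK_TIME_S

# How often, in minutes, a snapshot is taken.
snapshot_minutes = SC_SNAPSHOT_INTERVAL * BLOCK_TIME_S / 60

print(f"retention window : ~{retention_hours:.1f} h")
print(f"blocks per prune : ~{blocks_per_prune:.0f}")
print(f"snapshot cadence : ~{snapshot_minutes:.0f} min")
```

Under these assumptions, pruning runs every 10 minutes while snapshots land hours apart, so collisions are possible but infrequent; keeping the two intervals from being near-multiples of each other further reduces the risk.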