Documentation
Everything you need to install, configure, and use Pabawi to manage your infrastructure.
Quick Start
The fastest way to get Pabawi running is with the interactive setup script. It handles prerequisite checks, config file generation, dependency installation, and first launch.
```
$ git clone https://github.com/example42/pabawi
$ cd pabawi
$ ./scripts/setup.sh
```
The setup wizard will:
- Check for Node.js 20+, npm 9+, and optionally the Bolt, Ansible, and Puppet CLIs
- Generate `backend/.env` with smart defaults based on detected tools and SSL certs
- Run `npm run install:all`
- Ask whether to start in dev mode, full-stack mode, or exit
Once running, open http://localhost:3000 in your browser.
Tip: You don't need Puppet, Bolt, or Ansible to get started. Enable SSH integration only and Pabawi works as a standalone web UI for your servers.
Prerequisites
Hard requirements (always needed):
- Node.js 20+ and npm 9+ — or use the Docker image which bundles everything
Soft requirements (only needed for specific integrations):
- Bolt CLI — for Bolt integration (task/command execution via Bolt)
- Ansible CLI — for Ansible integration
- Puppet/OpenVox agent — provides SSL certs for PuppetDB and Puppetserver integrations
- Control repo — for Hiera integration (local path to your Puppet control repository)
- SSH access — configured SSH keys/config for SSH integration
Manual Setup
If you prefer to configure things by hand:
```
$ git clone https://github.com/example42/pabawi && cd pabawi
$ npm run install:all
$ cp backend/.env.example backend/.env
# Edit backend/.env with your settings

# Development mode (two terminals):
$ npm run dev:backend    # port 3000
$ npm run dev:frontend   # port 5173, proxies API to 3000

# Or build frontend and serve everything from backend:
$ npm run dev:fullstack  # port 3000
```
Docker Deployment
Pabawi ships as a single container image (example42/pabawi) supporting amd64 and arm64.
```
# Create a directory to hold all Pabawi data
$ mkdir pabawi && cd pabawi

# Create your .env (see environment variables section)
$ vi .env

# Run the container
$ docker run -d \
    --name pabawi \
    --user "$(id -u):1001" \
    -p 127.0.0.1:3000:3000 \
    -v "$(pwd):/pabawi" \
    --env-file ".env" \
    example42/pabawi:latest
```
Mount points inside the container:
| Path | Purpose |
|---|---|
| /pabawi/data | SQLite database storage |
| /pabawi/certs | SSL certs for PuppetDB/Puppetserver |
| /pabawi/control-repo | Puppet control repo for Hiera |
| /pabawi/bolt-project | Bolt project directory |
Note: All paths in .env are relative to the container's filesystem, not your host.
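If you prefer Docker Compose, the docker run command above translates roughly to the sketch below. The `user` value is illustrative — substitute your host UID (group 1001 is the node user inside the container); file layout and service name are your choice, not project defaults.

```yaml
# docker-compose.yml — rough equivalent of the docker run command above
services:
  pabawi:
    image: example42/pabawi:latest
    user: "1000:1001"          # replace 1000 with your host UID ($(id -u))
    ports:
      - "127.0.0.1:3000:3000"
    env_file:
      - .env
    volumes:
      - ./:/pabawi
```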
Configuration
All configuration lives in backend/.env. Use backend/.env.example as a reference. Never commit .env to version control.
The ConfigService wraps all environment variables with Zod validation — missing required vars cause a clear startup error rather than a silent runtime failure.
Core Settings
| Variable | Default | Description |
|---|---|---|
| PORT | 3000 | HTTP server port |
| HOST | 0.0.0.0 | Bind address |
| LOG_LEVEL | info | debug / info / warn / error |
| DATABASE_PATH | ./data/pabawi.db | SQLite database file path |
| CONCURRENT_EXECUTION_LIMIT | 5 | Max parallel command executions |
| COMMAND_WHITELIST_ENABLED | true | Enable/disable command whitelist |
| COMMAND_WHITELIST | (see below) | Comma-separated list of allowed commands |
| CACHE_INVENTORY_TTL | 30 | Inventory cache TTL (seconds) |
| CACHE_FACTS_TTL | 300 | Facts cache TTL (seconds) |
Authentication
| Variable | Default | Description |
|---|---|---|
| AUTH_ENABLED | true | Enable JWT authentication |
| JWT_SECRET | — | Required. Secret key for JWT signing (min 32 chars) |
| JWT_EXPIRY | 24h | Token expiry duration |
| DEFAULT_ADMIN_USERNAME | admin | Initial admin account username |
| DEFAULT_ADMIN_PASSWORD | — | Initial admin password (change on first login) |
Security: Always set a strong JWT_SECRET. Run openssl rand -hex 32 to generate one. Never reuse this value across environments.
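For example, the following generates a suitable secret and prints the line to paste into `backend/.env` (writing it to the file is left to you):

```shell
# Generate a 32-byte random secret (64 hex characters) for JWT_SECRET
secret="$(openssl rand -hex 32)"
echo "JWT_SECRET=$secret"
echo "length: ${#secret}"
```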
Integrations Overview
Pabawi uses a plugin architecture. Each integration is independent — disable what you don't need, and the rest continue working. The IntegrationManager fans out inventory and fact requests to all enabled plugins and merges results using a priority system.
| Plugin | Priority | Execution | Inventory | Provisioning |
|---|---|---|---|---|
| SSH | 50 (highest) | ✓ | ✓ | — |
| Puppetserver | 20 | — | ✓ | — |
| Bolt | 10 | ✓ | ✓ | — |
| PuppetDB | 10 | — | ✓ | — |
| Ansible | 8 | ✓ | ✓ | — |
| Hiera | 6 | — | — | — |
| Proxmox | — | — | ✓ | ✓ VMs & LXC |
| AWS | — | — | ✓ | ✓ EC2 |
Bolt Integration
Bolt provides command execution, task execution, and inventory discovery. Pabawi spawns the Bolt CLI directly — your existing Bolt project and inventory are used as-is.
| Variable | Description |
|---|---|
| BOLT_ENABLED | true / false |
| BOLT_PROJECT_PATH | Absolute path to your Bolt project directory |
| BOLT_COMMAND_TIMEOUT | Execution timeout in seconds (default: 120) |
| BOLT_INVENTORY_FILE | Path to inventory.yaml (default: auto-detected) |
Note: The Bolt user must have read access to the project directory and network access to target nodes. Bolt tasks are automatically discovered from the project's modules/ directory.
PuppetDB Integration
PuppetDB provides rich inventory, node facts, Puppet run reports, and event data. Supports both Puppet Enterprise and Open Source Puppet / OpenVox.
| Variable | Description |
|---|---|
| PUPPETDB_ENABLED | true / false |
| PUPPETDB_URL | PuppetDB URL (e.g. https://puppet.example.com:8081) |
| PUPPETDB_SSL_CERT | Path to client SSL certificate (.pem) |
| PUPPETDB_SSL_KEY | Path to client SSL private key (.pem) |
| PUPPETDB_SSL_CA | Path to CA certificate |
| PUPPETDB_TOKEN | PE RBAC token (alternative to SSL certs) |
Puppetserver Integration
Puppetserver integration provides node certificate listing, server status, and catalog compilation.
| Variable | Description |
|---|---|
| PUPPETSERVER_ENABLED | true / false |
| PUPPETSERVER_URL | Puppetserver URL (e.g. https://puppet.example.com:8140) |
| PUPPETSERVER_SSL_CERT | Path to client certificate |
| PUPPETSERVER_SSL_KEY | Path to client private key |
| PUPPETSERVER_SSL_CA | Path to CA certificate |
| PUPPETSERVER_ENVIRONMENT | Default environment (default: production) |
Hiera Integration
Browse and resolve Hiera data hierarchically, with fact-based interpolation and class-aware key analysis. Requires a local copy of your Puppet control repository.
| Variable | Description |
|---|---|
| HIERA_ENABLED | true / false |
| HIERA_CONTROL_REPO_PATH | Absolute path to your control repo |
| HIERA_ENVIRONMENTS | Comma-separated environment list (default: production) |
| HIERA_DEFAULT_ENVIRONMENT | Default environment for lookups |
Ansible Integration
Run ad-hoc commands, execute playbooks, and discover inventory via Ansible. Pabawi spawns the Ansible CLI using your existing project.
| Variable | Description |
|---|---|
| ANSIBLE_ENABLED | true / false |
| ANSIBLE_PROJECT_PATH | Absolute path to your Ansible project |
| ANSIBLE_INVENTORY_PATH | Path to inventory file or directory |
| ANSIBLE_COMMAND_TIMEOUT | Execution timeout in seconds (default: 120) |
| ANSIBLE_VAULT_PASSWORD_FILE | Path to vault password file (optional) |
SSH Integration
Direct SSH command execution — the simplest integration. Works standalone without any other tools. Ideal for homelabs or legacy nodes outside Puppet/Ansible.
| Variable | Description |
|---|---|
| SSH_ENABLED | true / false |
| SSH_CONFIG_PATH | Path to SSH config file (default: ~/.ssh/config) |
| SSH_DEFAULT_USER | Default SSH user for connections |
| SSH_DEFAULT_KEY_PATH | Default private key path |
| SSH_USE_SUDO | Prepend sudo to commands (default: false) |
| SSH_CONNECTION_TIMEOUT | Connection timeout in seconds (default: 10) |
| SSH_MAX_CONNECTIONS | Connection pool size (default: 10) |
| SSH_INVENTORY | Static host list (JSON array of host objects) |
Tip: Set SSH_INVENTORY to a JSON array to define hosts when no other inventory source is configured: SSH_INVENTORY=[{"name":"web01","host":"192.168.1.10"}]
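A slightly larger sketch with two hosts, using only the `name` and `host` keys shown above (other keys may exist but are not documented here). `python3` is used purely to sanity-check that the value is valid JSON before you paste it into `backend/.env`:

```shell
# A two-host static inventory; keep the whole array on one line in backend/.env
SSH_INVENTORY='[{"name":"web01","host":"192.168.1.10"},{"name":"db01","host":"192.168.1.20"}]'

# Sanity-check the JSON (prints the number of host entries)
count="$(printf '%s' "$SSH_INVENTORY" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))')"
echo "hosts: $count"
```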
Proxmox Integration v0.9+
Proxmox integration provides VM and LXC container inventory discovery, lifecycle management (start/stop/reboot/destroy), and provisioning — all alongside your existing config management tools.
Authentication
Token-based authentication is recommended. Create a dedicated API token in the Proxmox UI under Datacenter → Permissions → API Tokens with appropriate privileges.
| Variable | Description |
|---|---|
| PROXMOX_ENABLED | true / false |
| PROXMOX_HOST | Proxmox host (e.g. proxmox.example.com) |
| PROXMOX_PORT | API port (default: 8006) |
| PROXMOX_TOKEN | API token: user@realm!tokenid=uuid (recommended) |
| PROXMOX_USERNAME | Username for password auth (alternative to token) |
| PROXMOX_PASSWORD | Password for password auth |
| PROXMOX_REALM | Auth realm (default: pam) |
| PROXMOX_SSL_VERIFY | Verify SSL certificate (default: true) |
| PROXMOX_TIMEOUT | API timeout in ms (default: 30000) |
Capabilities
- Inventory — auto-discover all VMs and containers; group by node, status (running/stopped), or type (QEMU/LXC)
- Lifecycle — start, stop, shutdown, reboot, suspend (VMs), resume, destroy — with confirmation dialogs for destructive actions
- Provisioning — create new VMs and LXC containers with full parameter control via the Manage tab
- Facts — CPU, memory, disk, network config, status, uptime
Node identifiers follow the format proxmox:<node>:<vmid> (e.g. proxmox:pve1:101).
IAM note: The Proxmox API token must have at minimum VM.PowerMgmt, VM.Audit, and VM.Allocate privileges on the relevant resource pools.
AWS EC2 Integration v0.10+
AWS integration discovers EC2 instances across one or more regions, manages their lifecycle, and provisions new instances — making cloud VMs first-class citizens alongside your on-prem fleet.
| Variable | Description |
|---|---|
| AWS_ENABLED | true / false |
| AWS_ACCESS_KEY_ID | IAM access key ID |
| AWS_SECRET_ACCESS_KEY | IAM secret access key |
| AWS_DEFAULT_REGION | Default region (e.g. us-east-1) |
| AWS_REGIONS | JSON array of regions to discover (e.g. ["us-east-1","eu-west-1"]) |
| AWS_PROFILE | AWS CLI profile name (alternative to access keys) |
| AWS_SESSION_TOKEN | STS session token for temporary credentials |
Required IAM Permissions
```
# Minimum IAM permissions required
ec2:DescribeInstances, ec2:DescribeRegions, ec2:DescribeInstanceTypes
ec2:DescribeImages, ec2:DescribeVpcs, ec2:DescribeSubnets
ec2:DescribeSecurityGroups, ec2:DescribeKeyPairs
ec2:RunInstances, ec2:StartInstances, ec2:StopInstances
ec2:RebootInstances, ec2:TerminateInstances
sts:GetCallerIdentity
```
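As a sketch, those permissions could be attached to the Pabawi IAM user or role with a policy like the following. `"Resource": "*"` is the simplest form — scope it down (e.g. by region or tag condition) for production use; the `Sid` is an arbitrary label:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PabawiEc2Access",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances", "ec2:DescribeRegions", "ec2:DescribeInstanceTypes",
        "ec2:DescribeImages", "ec2:DescribeVpcs", "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups", "ec2:DescribeKeyPairs",
        "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances",
        "ec2:RebootInstances", "ec2:TerminateInstances",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}
```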
Capabilities
- Inventory — discover EC2 instances across all configured regions; group by region, VPC, or tags
- Lifecycle — start, stop, reboot, terminate instances from the Manage tab
- Provisioning — launch new EC2 instances (AMI, instance type, VPC, subnet, security groups, key pair)
- Health check — validates credentials via `sts:GetCallerIdentity` on startup
Node identifiers follow the format aws:<region>:<instance-id> (e.g. aws:us-east-1:i-0abc1234).
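If you script against the API, these identifiers split cleanly on colons (the instance ID itself contains no colon). A minimal sketch:

```shell
# Split a node identifier of the form aws:<region>:<instance-id>
node_id="aws:us-east-1:i-0abc1234"
IFS=: read -r src region instance <<EOF
$node_id
EOF
echo "source=$src region=$region instance=$instance"
```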
Tip: If running Pabawi on an EC2 instance, you can omit AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and attach an IAM instance profile instead — the default AWS credential chain is used.
Inventory
The Inventory page aggregates nodes from all enabled integrations. Each node shows its source integration (colour-coded), transport type, and connection address.
- Search — type to filter by node name (300ms debounce)
- Filter by integration — show nodes from a single source
- Virtual scrolling — handles thousands of nodes without performance issues
- Groups — Bolt inventory groups and Ansible groups are preserved and browsable
- Node detail — click any node to see facts, classes, recent reports, and execution history
Command Execution
Select one or more target nodes, choose an integration (Bolt, Ansible, or SSH), enter your command or select a task, and execute. Output streams in real-time via SSE.
Command Whitelist
When COMMAND_WHITELIST_ENABLED=true, only commands matching the whitelist may be executed. The whitelist supports exact matches and glob patterns:
```
COMMAND_WHITELIST=uptime,hostname,df -h,free -m,systemctl status *,ps aux
```
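Pabawi's matcher isn't shown here, but shell `case` patterns behave the same way as glob whitelist entries — a sketch of the idea, not Pabawi's actual implementation:

```shell
# Illustrative glob matching against whitelist entries (not Pabawi's real code)
matches_whitelist() {
  cmd="$1"; shift
  for pattern in "$@"; do
    # $pattern is deliberately unquoted so the entry is treated as a glob
    case "$cmd" in
      $pattern) return 0 ;;
    esac
  done
  return 1
}

set -- "uptime" "df -h" "systemctl status *"
matches_whitelist "systemctl status nginx" "$@" && echo "allowed"
matches_whitelist "rm -rf /" "$@" || echo "blocked"
```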
Execution Queue
Executions are queued and limited to CONCURRENT_EXECUTION_LIMIT parallel operations to prevent resource exhaustion. Queue status is visible in the UI.
VM Lifecycle Management
For Proxmox and AWS nodes, the node detail page includes a Manage tab with lifecycle controls. Available actions depend on the current resource state:
| State | Available Actions |
|---|---|
| stopped | Start, Destroy |
| running | Stop, Shutdown, Reboot, Suspend (VMs), Destroy |
| suspended | Resume, Destroy |
- Destructive actions (Destroy, Terminate) require an explicit confirmation dialog
- Each action requires the corresponding RBAC permission (e.g. `manage:vms`)
- All lifecycle actions are recorded in the Node Journal and Execution History
- Real-time status updates are shown during the operation
Execution History
Every command, task, and package operation is stored in the SQLite database. The History page shows all runs with:
- Timestamp, duration, target nodes, integration used
- Full command/task name and parameters
- Per-node success/failure status with output
- One-click re-run — replay any historical execution instantly
Node Journal v0.9+
Every node has a personal journal — a chronological timeline of every action, lifecycle event, and operation that has touched it. Access it from the node detail page.
- Records command executions, task runs, package operations, and VM lifecycle actions
- Timestamps, actor (which user), integration used, and outcome per entry
- Persisted in SQLite — survives restarts, gives you long-term per-node history
- Complements Execution History (which is fleet-wide) with a node-centric view
RBAC & Permissions
Pabawi includes a full role-based access control system:
- Users — individual accounts with JWT authentication
- Roles — named sets of permissions (e.g. operator, viewer, admin)
- Groups — users grouped for easier role assignment
- Permissions — granular: `execute:commands`, `read:inventory`, `manage:users`, etc.
The default admin account is created on first start using DEFAULT_ADMIN_USERNAME and DEFAULT_ADMIN_PASSWORD. Change the password immediately after first login.
Integration Status Dashboard v1.0
The Integration Status Dashboard gives you a real-time health view of every configured integration. Access it from the main navigation under Settings → Integrations.
- Shows enabled/disabled status and connection health for each integration
- Test Connection buttons for Proxmox and AWS — trigger an on-demand health check
- Displays last-checked timestamp and error details if a connection fails
- Read-only — configuration is managed via `.env`, not the UI
- Append `?refresh=true` to the API health endpoint to force a fresh check that bypasses the cache
Tip: Use the Status Dashboard after editing your .env and restarting to quickly confirm all integrations connected successfully before handing the system to the team.
Setup Wizards & .env Snippets v1.0
Each integration has an in-browser setup wizard that guides you through configuration step by step and generates a ready-to-paste .env snippet.
How it works
- Navigate to Settings → Setup Guides and choose an integration
- Fill in the form fields (host, credentials, paths, options)
- The wizard generates the corresponding `.env` variable block
- Click Copy to clipboard and paste it into your `backend/.env`
- Restart Pabawi — the integration activates immediately
Security note: Sensitive values (passwords, tokens, secret keys) are masked in the generated snippet preview. The snippet is never stored in the database — it exists only in your browser session until you copy it.
Why .env only?
From v1.0 onward, backend/.env is the single source of truth for all integration configuration. Previous versions allowed database-stored config overrides; this has been removed to simplify the mental model — what's in .env is exactly what runs.
Expert Mode
Toggle Expert Mode from the navigation bar to reveal:
- Full CLI command lines for every Bolt/Ansible/SSH operation
- Raw API responses and timing information
- Frontend debug logging in the browser console
- Extended node detail data
Expert Mode is per-session and does not require a page reload. Invaluable for debugging integration issues or verifying exactly what command is being run.
API Reference
Pabawi exposes a REST API at /api/v1/. All endpoints require a Bearer token when AUTH_ENABLED=true.
Authentication
```
# Login and get a token
POST /api/v1/auth/login
Body: { "username": "admin", "password": "..." }

# Use the token in subsequent requests
GET /api/v1/inventory
Header: Authorization: Bearer <token>
```
Key Endpoints
| Method + Path | Description |
|---|---|
| GET /api/v1/inventory | List all nodes from all enabled integrations |
| GET /api/v1/inventory/:node | Get details and facts for a single node |
| POST /api/v1/execute/command | Run a command on target nodes |
| POST /api/v1/execute/task | Run a Bolt task on target nodes |
| GET /api/v1/executions | List execution history |
| GET /api/v1/executions/:id | Get a specific execution result |
| GET /api/v1/stream/:id | SSE stream for live execution output |
| GET /api/v1/reports | List Puppet run reports (PuppetDB required) |
| GET /api/v1/hiera/lookup | Resolve a Hiera key for a node |
| GET /api/v1/health | Integration health status |
| GET /api/v1/users | List users (admin only) |
| POST /api/v1/users | Create a user (admin only) |
Environment Variables
Complete reference for all configuration variables. See backend/.env.example in the repo for the annotated template.
```
# ── Core ──────────────────────────────────────────
PORT=3000
HOST=0.0.0.0
LOG_LEVEL=info
DATABASE_PATH=./data/pabawi.db

# ── Authentication ────────────────────────────────
AUTH_ENABLED=true
JWT_SECRET=your-secret-here-min-32-chars
DEFAULT_ADMIN_USERNAME=admin
DEFAULT_ADMIN_PASSWORD=changeme

# ── Execution ─────────────────────────────────────
CONCURRENT_EXECUTION_LIMIT=5
COMMAND_WHITELIST_ENABLED=true
COMMAND_WHITELIST=uptime,hostname,df -h,free -m

# ── Bolt ──────────────────────────────────────────
BOLT_ENABLED=false
BOLT_PROJECT_PATH=/path/to/bolt-project

# ── SSH ───────────────────────────────────────────
SSH_ENABLED=true
SSH_DEFAULT_USER=ubuntu
SSH_DEFAULT_KEY_PATH=~/.ssh/id_rsa
SSH_INVENTORY=[{"name":"web01","host":"192.168.1.10"}]

# ── PuppetDB ──────────────────────────────────────
PUPPETDB_ENABLED=false
PUPPETDB_URL=https://puppet.example.com:8081
PUPPETDB_SSL_CERT=/etc/puppetlabs/puppet/ssl/certs/node.pem
PUPPETDB_SSL_KEY=/etc/puppetlabs/puppet/ssl/private_keys/node.pem
PUPPETDB_SSL_CA=/etc/puppetlabs/puppet/ssl/certs/ca.pem

# ── Puppetserver ──────────────────────────────────
PUPPETSERVER_ENABLED=false
PUPPETSERVER_URL=https://puppet.example.com:8140

# ── Hiera ─────────────────────────────────────────
HIERA_ENABLED=false
HIERA_CONTROL_REPO_PATH=/path/to/control-repo

# ── Ansible ───────────────────────────────────────
ANSIBLE_ENABLED=false
ANSIBLE_PROJECT_PATH=/path/to/ansible-project
ANSIBLE_INVENTORY_PATH=/path/to/inventory

# ── Proxmox ───────────────────────────────────────
PROXMOX_ENABLED=false
PROXMOX_HOST=proxmox.example.com
PROXMOX_PORT=8006
PROXMOX_TOKEN=user@realm!tokenid=uuid
PROXMOX_SSL_VERIFY=true
PROXMOX_TIMEOUT=30000

# ── AWS ───────────────────────────────────────────
AWS_ENABLED=false
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-east-1
AWS_REGIONS=["us-east-1","eu-west-1"]
```
Troubleshooting
Inventory is empty
Check that at least one integration is enabled (*_ENABLED=true) and that the CLI tools are on the PATH. Enable LOG_LEVEL=debug and check server logs. In the UI, toggle Expert Mode to see integration health status.
Command execution fails
- Verify the command is in the whitelist (or disable whitelist temporarily for testing)
- Check that the Pabawi server has network access to target nodes
- For SSH: verify key-based auth works from the Pabawi server manually
- For Bolt: run the command manually with `bolt command run "..." --targets ...`
PuppetDB SSL errors
Ensure the certificate paths are correct and that the Pabawi server's certificate is signed by the same CA as PuppetDB. The pabawi process user must have read access to the key files.
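You can reproduce the check locally. The following self-contained demo creates a throwaway CA, signs a client cert with it, and verifies the pair the same way you would verify your agent cert against PuppetDB's CA (all paths are temporary; against a real deployment, point `-CAfile` at your `PUPPETDB_SSL_CA` and the cert at `PUPPETDB_SSL_CERT`):

```shell
# Self-contained demo: throwaway CA + signed client cert, then openssl verify
tmp="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" \
  -out "$tmp/ca.pem" -days 1 -subj "/CN=demo-ca" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout "$tmp/node.key" \
  -out "$tmp/node.csr" -subj "/CN=node.example.com" 2>/dev/null
openssl x509 -req -in "$tmp/node.csr" -CA "$tmp/ca.pem" -CAkey "$tmp/ca.key" \
  -CAcreateserial -out "$tmp/node.pem" -days 1 2>/dev/null

# A matching pair prints "<path>: OK"; a cert signed by a different CA fails
verify_result="$(openssl verify -CAfile "$tmp/ca.pem" "$tmp/node.pem")"
echo "$verify_result"
rm -rf "$tmp"
```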
Docker permission errors
Use --user "$(id -u):1001" to run the container as your user (group 1001 is the node user inside the container). Ensure mounted volumes are owned by that UID.
Still stuck? Open a GitHub issue with your Pabawi version, sanitized config, reproduction steps, and the error from LOG_LEVEL=debug output.