Documentation

Everything you need to install, configure, and use Pabawi to manage your infrastructure.

Quick Start

The fastest way to get Pabawi running is with the interactive setup script. It handles prerequisite checks, config file generation, dependency installation, and first launch.

bash
$ git clone https://github.com/example42/pabawi
$ cd pabawi
$ ./scripts/setup.sh

The setup wizard will:

  1. Check for Node.js 20+ and npm 9+, and optionally for the Bolt, Ansible, and Puppet CLIs
  2. Generate backend/.env with smart defaults based on detected tools and SSL certs
  3. Run npm run install:all
  4. Ask whether to start in dev mode, full-stack mode, or exit

Once running, open http://localhost:3000 in your browser.

Tip: You don't need Puppet, Bolt, or Ansible to get started. Enable SSH integration only and Pabawi works as a standalone web UI for your servers.

Prerequisites

Hard requirements (always needed):

  • Node.js 20+ and npm 9+ — or use the Docker image which bundles everything

Soft requirements (only needed for specific integrations):

  • Bolt CLI — for Bolt integration (task/command execution via Bolt)
  • Ansible CLI — for Ansible integration
  • Puppet/OpenVox agent — provides SSL certs for PuppetDB and Puppetserver integrations
  • Control repo — for Hiera integration (local path to your Puppet control repository)
  • SSH access — configured SSH keys/config for SSH integration

Manual Setup

If you prefer to configure things by hand:

bash
$ git clone https://github.com/example42/pabawi && cd pabawi
$ npm run install:all
$ cp backend/.env.example backend/.env
# Edit backend/.env with your settings

# Development mode (two terminals):
$ npm run dev:backend    # port 3000
$ npm run dev:frontend   # port 5173, proxies API to 3000

# Or build frontend and serve everything from backend:
$ npm run dev:fullstack   # port 3000

Docker Deployment

Pabawi ships as a single container image (example42/pabawi) supporting amd64 and arm64.

bash
# Create a directory to hold all Pabawi data
$ mkdir pabawi && cd pabawi

# Create your .env (see environment variables section)
$ vi .env

# Run the container
$ docker run -d \
  --name pabawi \
  --user "$(id -u):1001" \
  -p 127.0.0.1:3000:3000 \
  -v "$(pwd):/pabawi" \
  --env-file ".env" \
  example42/pabawi:latest
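
If you prefer Compose, the same deployment can be expressed as a Compose file. This is a sketch mirroring the docker run flags above — the service name, the example UID, and the restart policy are this example's choices, not project requirements:

```yaml
# docker-compose.yml — equivalent of the docker run command above (illustrative sketch)
services:
  pabawi:
    image: example42/pabawi:latest
    user: "1000:1001"            # replace 1000 with your UID; 1001 is the node group in the image
    ports:
      - "127.0.0.1:3000:3000"
    volumes:
      - ./:/pabawi               # current directory holds data/, certs/, etc.
    env_file:
      - .env
    restart: unless-stopped
```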

Mount points inside the container:

Path                   Purpose
/pabawi/data           SQLite database storage
/pabawi/certs          SSL certs for PuppetDB/Puppetserver
/pabawi/control-repo   Puppet control repo for Hiera
/pabawi/bolt-project   Bolt project directory

Note: All paths in .env are relative to the container's filesystem, not your host.

Configuration

All configuration lives in backend/.env. Use backend/.env.example as a reference. Never commit .env to version control.

The ConfigService wraps all environment variables with Zod validation — missing required vars cause a clear startup error rather than a silent runtime failure.

Core Settings

Variable                     Default            Description
PORT                         3000               HTTP server port
HOST                         0.0.0.0            Bind address
LOG_LEVEL                    info               debug / info / warn / error
DATABASE_PATH                ./data/pabawi.db   SQLite database file path
CONCURRENT_EXECUTION_LIMIT   5                  Max parallel command executions
COMMAND_WHITELIST_ENABLED    true               Enable/disable command whitelist
COMMAND_WHITELIST            (see below)        Comma-separated list of allowed commands
CACHE_INVENTORY_TTL          30                 Inventory cache TTL (seconds)
CACHE_FACTS_TTL              300                Facts cache TTL (seconds)

Authentication

Variable                 Default   Description
AUTH_ENABLED             true      Enable JWT authentication
JWT_SECRET               —         Required. Secret key for JWT signing (min 32 chars)
JWT_EXPIRY               24h       Token expiry duration
DEFAULT_ADMIN_USERNAME   admin     Initial admin account username
DEFAULT_ADMIN_PASSWORD   —         Initial admin password (change on first login)

Security: Always set a strong JWT_SECRET. Run openssl rand -hex 32 to generate one. Never reuse this value across environments.
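
For example, generating a suitable value takes one line (assuming openssl is available; paste the output into backend/.env):

```shell
# Generate a 32-byte secret suitable for JWT_SECRET
SECRET="$(openssl rand -hex 32)"
echo "JWT_SECRET=${SECRET}"   # paste this line into backend/.env

# 32 random bytes hex-encode to 64 characters, comfortably over the minimum
echo "${#SECRET}"             # prints 64
```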

Integrations Overview

Pabawi uses a plugin architecture. Each integration is independent — disable what you don't need, and the rest continue working. The IntegrationManager fans out inventory and fact requests to all enabled plugins and merges results using a priority system.

Plugin         Priority       Provisioning
SSH            50 (highest)   —
Puppetserver   20             —
Bolt           10             —
PuppetDB       10             —
Ansible        8              —
Hiera          6              —
Proxmox        —              VMs & LXC
AWS            —              EC2

Bolt Integration

Bolt provides command execution, task execution, and inventory discovery. Pabawi spawns the Bolt CLI directly — your existing Bolt project and inventory are used as-is.

Variable               Description
BOLT_ENABLED           true / false
BOLT_PROJECT_PATH      Absolute path to your Bolt project directory
BOLT_COMMAND_TIMEOUT   Execution timeout in seconds (default: 120)
BOLT_INVENTORY_FILE    Path to inventory.yaml (default: auto-detected)

Note: The Bolt user must have read access to the project directory and network access to target nodes. Bolt tasks are automatically discovered from the project's modules/ directory.

PuppetDB Integration

PuppetDB provides rich inventory, node facts, Puppet run reports, and event data. Supports both Puppet Enterprise and Open Source Puppet / OpenVox.

Variable            Description
PUPPETDB_ENABLED    true / false
PUPPETDB_URL        PuppetDB URL (e.g. https://puppet.example.com:8081)
PUPPETDB_SSL_CERT   Path to client SSL certificate (.pem)
PUPPETDB_SSL_KEY    Path to client SSL private key (.pem)
PUPPETDB_SSL_CA     Path to CA certificate
PUPPETDB_TOKEN      PE RBAC token (alternative to SSL certs)

Puppetserver Integration

Puppetserver integration provides node certificate listing, server status, and catalog compilation.

Variable                   Description
PUPPETSERVER_ENABLED       true / false
PUPPETSERVER_URL           Puppetserver URL (e.g. https://puppet.example.com:8140)
PUPPETSERVER_SSL_CERT      Path to client certificate
PUPPETSERVER_SSL_KEY       Path to client private key
PUPPETSERVER_SSL_CA        Path to CA certificate
PUPPETSERVER_ENVIRONMENT   Default environment (default: production)

Hiera Integration

Browse and resolve Hiera data across your hierarchy, with fact-based interpolation and class-aware key analysis. Requires a local copy of your Puppet control repository.

Variable                    Description
HIERA_ENABLED               true / false
HIERA_CONTROL_REPO_PATH     Absolute path to your control repo
HIERA_ENVIRONMENTS          Comma-separated environment list (default: production)
HIERA_DEFAULT_ENVIRONMENT   Default environment for lookups

Ansible Integration

Run ad-hoc commands, execute playbooks, and discover inventory via Ansible. Pabawi spawns the Ansible CLI using your existing project.

Variable                      Description
ANSIBLE_ENABLED               true / false
ANSIBLE_PROJECT_PATH          Absolute path to your Ansible project
ANSIBLE_INVENTORY_PATH        Path to inventory file or directory
ANSIBLE_COMMAND_TIMEOUT       Execution timeout in seconds (default: 120)
ANSIBLE_VAULT_PASSWORD_FILE   Path to vault password file (optional)

SSH Integration

Direct SSH command execution — the simplest integration. Works standalone without any other tools. Ideal for homelabs or legacy nodes outside Puppet/Ansible.

Variable                 Description
SSH_ENABLED              true / false
SSH_CONFIG_PATH          Path to SSH config file (default: ~/.ssh/config)
SSH_DEFAULT_USER         Default SSH user for connections
SSH_DEFAULT_KEY_PATH     Default private key path
SSH_USE_SUDO             Prepend sudo to commands (default: false)
SSH_CONNECTION_TIMEOUT   Connection timeout in seconds (default: 10)
SSH_MAX_CONNECTIONS      Connection pool size (default: 10)
SSH_INVENTORY            Static host list (JSON array of host objects)

Tip: Set SSH_INVENTORY to a JSON array to define hosts when no other inventory source is configured: SSH_INVENTORY=[{"name":"web01","host":"192.168.1.10"}]
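
A malformed SSH_INVENTORY value is an easy mistake, so it is worth validating the JSON before restarting. A quick check with jq (assuming it is installed; the shape check below is this example's, not an official schema):

```shell
# Validate an SSH_INVENTORY value before pasting it into backend/.env
INVENTORY='[{"name":"web01","host":"192.168.1.10"},{"name":"web02","host":"192.168.1.11"}]'

# jq -e exits non-zero on invalid JSON or a failed check
echo "$INVENTORY" | jq -e 'type == "array" and all(.[]; has("name") and has("host"))'
# prints true
```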

Proxmox Integration v0.9+

Proxmox integration provides VM and LXC container inventory discovery, lifecycle management (start/stop/reboot/destroy), and provisioning — all alongside your existing config management tools.

Authentication

Token-based authentication is recommended. Create a dedicated API token in the Proxmox UI under Datacenter → Permissions → API Tokens with appropriate privileges.

Variable             Description
PROXMOX_ENABLED      true / false
PROXMOX_HOST         Proxmox host (e.g. proxmox.example.com)
PROXMOX_PORT         API port (default: 8006)
PROXMOX_TOKEN        API token: user@realm!tokenid=uuid (recommended)
PROXMOX_USERNAME     Username for password auth (alternative to token)
PROXMOX_PASSWORD     Password for password auth
PROXMOX_REALM        Auth realm (default: pam)
PROXMOX_SSL_VERIFY   Verify SSL certificate (default: true)
PROXMOX_TIMEOUT      API timeout in ms (default: 30000)

Capabilities

  • Inventory — auto-discover all VMs and containers; group by node, status (running/stopped), or type (QEMU/LXC)
  • Lifecycle — start, stop, shutdown, reboot, suspend (VMs), resume, destroy — with confirmation dialogs for destructive actions
  • Provisioning — create new VMs and LXC containers with full parameter control via the Manage tab
  • Facts — CPU, memory, disk, network config, status, uptime

Node identifiers follow the format proxmox:<node>:<vmid> (e.g. proxmox:pve1:101).
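
These identifiers split cleanly on colons, which is convenient in scripts. A POSIX-shell sketch (the identifier format is from the docs above; the helper variables are ours):

```shell
# Split a Pabawi Proxmox node identifier into source, node, and vmid
ID="proxmox:pve1:101"
SOURCE="${ID%%:*}"    # everything before the first colon -> proxmox
REST="${ID#*:}"       # everything after the first colon  -> pve1:101
NODE="${REST%%:*}"    # pve1
VMID="${REST#*:}"     # 101
echo "$SOURCE / $NODE / $VMID"   # prints: proxmox / pve1 / 101
```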

IAM note: The Proxmox API token must have at minimum VM.PowerMgmt, VM.Audit, and VM.Allocate privileges on the relevant resource pools.

AWS EC2 Integration v0.10+

AWS integration discovers EC2 instances across one or more regions, manages their lifecycle, and provisions new instances — making cloud VMs first-class citizens alongside your on-prem fleet.

Variable                Description
AWS_ENABLED             true / false
AWS_ACCESS_KEY_ID       IAM access key ID
AWS_SECRET_ACCESS_KEY   IAM secret access key
AWS_DEFAULT_REGION      Default region (e.g. us-east-1)
AWS_REGIONS             JSON array of regions to discover (e.g. ["us-east-1","eu-west-1"])
AWS_PROFILE             AWS CLI profile name (alternative to access keys)
AWS_SESSION_TOKEN       STS session token for temporary credentials

Required IAM Permissions

# Minimum IAM permissions required
ec2:DescribeInstances, DescribeRegions, DescribeInstanceTypes
ec2:DescribeImages, DescribeVpcs, DescribeSubnets
ec2:DescribeSecurityGroups, DescribeKeyPairs
ec2:RunInstances, StartInstances, StopInstances
ec2:RebootInstances, TerminateInstances
sts:GetCallerIdentity
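
The actions above can be assembled into a single IAM policy document. A minimal sketch — statement IDs are this example's, and scoping Resource to specific pools, VPCs, or tags is up to you:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PabawiEc2Describe",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances", "ec2:DescribeRegions", "ec2:DescribeInstanceTypes",
        "ec2:DescribeImages", "ec2:DescribeVpcs", "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups", "ec2:DescribeKeyPairs"
      ],
      "Resource": "*"
    },
    {
      "Sid": "PabawiEc2Lifecycle",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances",
        "ec2:RebootInstances", "ec2:TerminateInstances"
      ],
      "Resource": "*"
    },
    {
      "Sid": "PabawiIdentity",
      "Effect": "Allow",
      "Action": "sts:GetCallerIdentity",
      "Resource": "*"
    }
  ]
}
```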

Capabilities

  • Inventory — discover EC2 instances across all configured regions; group by region, VPC, or tags
  • Lifecycle — start, stop, reboot, terminate instances from the Manage tab
  • Provisioning — launch new EC2 instances (AMI, instance type, VPC, subnet, security groups, key pair)
  • Health check — validates credentials via sts:GetCallerIdentity on startup

Node identifiers follow the format aws:<region>:<instance-id> (e.g. aws:us-east-1:i-0abc1234).

Tip: If running Pabawi on an EC2 instance, you can omit AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and attach an IAM instance profile instead — the default AWS credential chain is used.

Inventory

The Inventory page aggregates nodes from all enabled integrations. Each node shows its source integration (colour-coded), transport type, and connection address.

  • Search — type to filter by node name (300ms debounce)
  • Filter by integration — show nodes from a single source
  • Virtual scrolling — handles thousands of nodes without performance issues
  • Groups — Bolt inventory groups and Ansible groups are preserved and browsable
  • Node detail — click any node to see facts, classes, recent reports, and execution history

Command Execution

Select one or more target nodes, choose an integration (Bolt, Ansible, or SSH), enter your command or select a task, and execute. Output streams in real time via SSE (Server-Sent Events).

Command Whitelist

When COMMAND_WHITELIST_ENABLED=true, only commands matching the whitelist may be executed. The whitelist supports exact matches and glob patterns:

COMMAND_WHITELIST=uptime,hostname,df -h,free -m,systemctl status *,ps aux
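
The glob behaviour can be reasoned about with ordinary shell pattern matching. The sketch below mirrors — but does not claim to be — Pabawi's internal matcher; the matches helper is ours:

```shell
# Check a command string against a whitelist glob using shell case patterns
matches() {
  # $1 = command, $2 = whitelist entry (exact string or glob)
  case "$1" in
    $2) echo allowed ;;
    *)  echo denied  ;;
  esac
}

matches "systemctl status nginx" "systemctl status *"    # prints allowed
matches "systemctl restart nginx" "systemctl status *"   # prints denied
matches "uptime" "uptime"                                # prints allowed
```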

Execution Queue

Executions are queued and limited to CONCURRENT_EXECUTION_LIMIT parallel operations to prevent resource exhaustion. Queue status is visible in the UI.

VM Lifecycle Management

For Proxmox and AWS nodes, the node detail page includes a Manage tab with lifecycle controls. Available actions depend on the current resource state:

State       Available Actions
stopped     Start, Destroy
running     Stop, Shutdown, Reboot, Suspend (VMs), Destroy
suspended   Resume, Destroy

  • Destructive actions (Destroy, Terminate) require an explicit confirmation dialog
  • Each action requires the corresponding RBAC permission (e.g. manage:vms)
  • All lifecycle actions are recorded in the Node Journal and Execution History
  • Real-time status updates are shown during the operation

Execution History

Every command, task, and package operation is stored in the SQLite database. The History page shows all runs with:

  • Timestamp, duration, target nodes, integration used
  • Full command/task name and parameters
  • Per-node success/failure status with output
  • One-click re-run — replay any historical execution instantly

Node Journal v0.9+

Every node has a personal journal — a chronological timeline of every action, lifecycle event, and operation that has touched it. Access it from the node detail page.

  • Records command executions, task runs, package operations, and VM lifecycle actions
  • Timestamps, actor (which user), integration used, and outcome per entry
  • Persisted in SQLite — survives restarts, gives you long-term per-node history
  • Complements Execution History (which is fleet-wide) with a node-centric view

RBAC & Permissions

Pabawi includes a full role-based access control system:

  • Users — individual accounts with JWT authentication
  • Roles — named sets of permissions (e.g. operator, viewer, admin)
  • Groups — users grouped for easier role assignment
  • Permissions — granular: execute:commands, read:inventory, manage:users, etc.

The default admin account is created on first start using DEFAULT_ADMIN_USERNAME and DEFAULT_ADMIN_PASSWORD. Change the password immediately after first login.

Integration Status Dashboard v1.0

The Integration Status Dashboard gives you a real-time health view of every configured integration. Access it from the main navigation under Settings → Integrations.

  • Shows enabled/disabled status and connection health for each integration
  • Test Connection buttons for Proxmox and AWS — trigger an on-demand health check
  • Displays last-checked timestamp and error details if a connection fails
  • Read-only — configuration is managed via .env, not the UI
  • Append ?refresh=true to the API health endpoint to force a fresh check that bypasses the cache

Tip: Use the Status Dashboard after editing your .env and restarting to quickly confirm all integrations connected successfully before handing the system to the team.

Setup Wizards & .env Snippets v1.0

Each integration has an in-browser setup wizard that guides you through configuration step by step and generates a ready-to-paste .env snippet.

How it works

  1. Navigate to Settings → Setup Guides and choose an integration
  2. Fill in the form fields (host, credentials, paths, options)
  3. The wizard generates the corresponding .env variable block
  4. Click Copy to clipboard and paste into your backend/.env
  5. Restart Pabawi — the integration activates immediately

Security note: Sensitive values (passwords, tokens, secret keys) are masked in the generated snippet preview. The snippet is never stored in the database — it exists only in your browser session until you copy it.

Why .env only?

From v1.0 onward, backend/.env is the single source of truth for all integration configuration. Previous versions allowed database-stored config overrides; this has been removed to simplify the mental model — what's in .env is exactly what runs.

Expert Mode

Toggle Expert Mode from the navigation bar to reveal:

  • Full CLI command lines for every Bolt/Ansible/SSH operation
  • Raw API responses and timing information
  • Frontend debug logging in the browser console
  • Extended node detail data

Expert Mode is per-session and does not require a page reload. Invaluable for debugging integration issues or verifying exactly what command is being run.

API Reference

Pabawi exposes a REST API at /api/v1/. All endpoints require a Bearer token when AUTH_ENABLED=true.

Authentication

# Login and get a token
POST /api/v1/auth/login
Body: { "username": "admin", "password": "..." }

# Use the token in subsequent requests
GET  /api/v1/inventory
Header: Authorization: Bearer <token>
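
From a shell, the login-then-call flow looks roughly like this. It assumes jq is installed, and — since the docs above do not spell out the response body — that the login endpoint returns a JSON field named token; a stand-in response is used here so the extraction step is visible:

```shell
# Stand-in for the JSON returned by POST /api/v1/auth/login.
# Real use (field name .token is an assumption):
#   RESPONSE="$(curl -s -X POST http://localhost:3000/api/v1/auth/login \
#     -H 'Content-Type: application/json' \
#     -d '{"username":"admin","password":"..."}')"
RESPONSE='{"token":"eyJexample"}'

TOKEN="$(echo "$RESPONSE" | jq -r '.token')"
AUTH="Authorization: Bearer $TOKEN"
echo "$AUTH"   # prints: Authorization: Bearer eyJexample

# Then attach it to every request:
#   curl -s -H "$AUTH" http://localhost:3000/api/v1/inventory
```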

Key Endpoints

Method & Path                  Description
GET  /api/v1/inventory         List all nodes from all enabled integrations
GET  /api/v1/inventory/:node   Get details and facts for a single node
POST /api/v1/execute/command   Run a command on target nodes
POST /api/v1/execute/task      Run a Bolt task on target nodes
GET  /api/v1/executions        List execution history
GET  /api/v1/executions/:id    Get a specific execution result
GET  /api/v1/stream/:id        SSE stream for live execution output
GET  /api/v1/reports           List Puppet run reports (PuppetDB required)
GET  /api/v1/hiera/lookup      Resolve a Hiera key for a node
GET  /api/v1/health            Integration health status
GET  /api/v1/users             List users (admin only)
POST /api/v1/users             Create a user (admin only)

Environment Variables

Complete reference for all configuration variables. See backend/.env.example in the repo for the annotated template.

# ── Core ──────────────────────────────────────────
PORT=3000
HOST=0.0.0.0
LOG_LEVEL=info
DATABASE_PATH=./data/pabawi.db

# ── Authentication ────────────────────────────────
AUTH_ENABLED=true
JWT_SECRET=your-secret-here-min-32-chars
DEFAULT_ADMIN_USERNAME=admin
DEFAULT_ADMIN_PASSWORD=changeme

# ── Execution ─────────────────────────────────────
CONCURRENT_EXECUTION_LIMIT=5
COMMAND_WHITELIST_ENABLED=true
COMMAND_WHITELIST=uptime,hostname,df -h,free -m

# ── Bolt ──────────────────────────────────────────
BOLT_ENABLED=false
BOLT_PROJECT_PATH=/path/to/bolt-project

# ── SSH ───────────────────────────────────────────
SSH_ENABLED=true
SSH_DEFAULT_USER=ubuntu
SSH_DEFAULT_KEY_PATH=~/.ssh/id_rsa
SSH_INVENTORY=[{"name":"web01","host":"192.168.1.10"}]

# ── PuppetDB ──────────────────────────────────────
PUPPETDB_ENABLED=false
PUPPETDB_URL=https://puppet.example.com:8081
PUPPETDB_SSL_CERT=/etc/puppetlabs/puppet/ssl/certs/node.pem
PUPPETDB_SSL_KEY=/etc/puppetlabs/puppet/ssl/private_keys/node.pem
PUPPETDB_SSL_CA=/etc/puppetlabs/puppet/ssl/certs/ca.pem

# ── Puppetserver ──────────────────────────────────
PUPPETSERVER_ENABLED=false
PUPPETSERVER_URL=https://puppet.example.com:8140

# ── Hiera ─────────────────────────────────────────
HIERA_ENABLED=false
HIERA_CONTROL_REPO_PATH=/path/to/control-repo

# ── Ansible ───────────────────────────────────────
ANSIBLE_ENABLED=false
ANSIBLE_PROJECT_PATH=/path/to/ansible-project
ANSIBLE_INVENTORY_PATH=/path/to/inventory

# ── Proxmox ───────────────────────────────────────
PROXMOX_ENABLED=false
PROXMOX_HOST=proxmox.example.com
PROXMOX_PORT=8006
PROXMOX_TOKEN=user@realm!tokenid=uuid
PROXMOX_SSL_VERIFY=true
PROXMOX_TIMEOUT=30000

# ── AWS ───────────────────────────────────────────
AWS_ENABLED=false
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-east-1
AWS_REGIONS=["us-east-1","eu-west-1"]

Troubleshooting

Inventory is empty

Check that at least one integration is enabled (*_ENABLED=true) and that the CLI tools are on the PATH. Enable LOG_LEVEL=debug and check server logs. In the UI, toggle Expert Mode to see integration health status.

Command execution fails

  • Verify the command is in the whitelist (or disable whitelist temporarily for testing)
  • Check that the Pabawi server has network access to target nodes
  • For SSH: verify key-based auth works from the Pabawi server manually
  • For Bolt: run the command manually with bolt command run "..." --targets ...

PuppetDB SSL errors

Ensure the certificate paths are correct and that the Pabawi server's certificate is signed by the same CA as PuppetDB. The pabawi process user must have read access to the key files.

Docker permission errors

Use --user "$(id -u):1001" to run the container as your user (1001 is the group of the node user inside the container). Ensure mounted volumes are owned by that UID.

Still stuck? Open a GitHub issue with your Pabawi version, sanitized config, reproduction steps, and the error from LOG_LEVEL=debug output.