InfraAudit’s background jobs run on a built-in cron scheduler embedded in the API process. It manages five core job types that keep your cloud data fresh. You can override the default schedule for each job, trigger any job manually, and inspect the full execution history.

How the scheduler works

At API startup, the scheduler registers each job with a cron expression read from environment variables. Each job fires at its scheduled time in a dedicated goroutine. When a job fires:
  1. A job_execution record is created with status=running and a start timestamp.
  2. The job function runs with a timeout.
  3. On completion, the record is updated with status=succeeded or status=failed, the end timestamp, duration, and a log snippet.

Job types

Resource sync

Pulls the latest resource inventory from all connected providers.

What it does:
  • Calls the cloud provider API for each connected account
  • Creates records for newly discovered resources
  • Updates configuration snapshots for existing resources
  • Marks resources as deleted when they no longer appear in API responses
  • Captures a new baseline for any resource whose configuration changed
Default schedule: 0 */6 * * * (every six hours)
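The create/delete steps above amount to a set difference between what the database already knows and what the provider API just returned. A minimal sketch of that reconciliation (function and variable names are illustrative):

```go
package main

import "fmt"

// reconcile compares stored resource IDs against the provider's latest
// inventory. IDs missing from the inventory should be marked deleted;
// IDs seen for the first time need new records.
func reconcile(stored, discovered []string) (toDelete, toCreate []string) {
	seen := make(map[string]bool, len(discovered))
	for _, id := range discovered {
		seen[id] = true
	}
	known := make(map[string]bool, len(stored))
	for _, id := range stored {
		known[id] = true
		if !seen[id] {
			toDelete = append(toDelete, id)
		}
	}
	for _, id := range discovered {
		if !known[id] {
			toCreate = append(toCreate, id)
		}
	}
	return toDelete, toCreate
}

func main() {
	del, create := reconcile(
		[]string{"i-aaa", "i-bbb"}, // currently in the database
		[]string{"i-bbb", "i-ccc"}, // just returned by the provider API
	)
	fmt.Println(del, create) // [i-aaa] [i-ccc]
}
```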
Drift detection

Compares current resource state against baselines and creates drift findings.

What it does:
  • For each active resource with a baseline, runs the JSON diff algorithm
  • Creates new drift records for detected differences
  • Resolves existing drift records where the configuration has returned to the baseline state
  • Triggers recommendation generation for new critical and high-severity drifts
Default schedule: 0 */4 * * * (every four hours)
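The document doesn't spell out InfraAudit's JSON diff algorithm, but the core idea can be shown with a flat key/value comparison. A real implementation would recurse into nested JSON; this stand-in just reports changed, added, and removed keys:

```go
package main

import "fmt"

// diff returns the configuration keys whose values differ between the
// baseline snapshot and the current state, including keys that exist
// on only one side. Purely illustrative — not InfraAudit's algorithm.
func diff(baseline, current map[string]string) []string {
	var changed []string
	for k, v := range baseline {
		if cur, ok := current[k]; !ok || cur != v {
			changed = append(changed, k) // value changed or key removed
		}
	}
	for k := range current {
		if _, ok := baseline[k]; !ok {
			changed = append(changed, k) // key added since baseline
		}
	}
	return changed
}

func main() {
	changes := diff(
		map[string]string{"instance_type": "t3.micro", "monitoring": "enabled"},
		map[string]string{"instance_type": "t3.large", "monitoring": "enabled"},
	)
	fmt.Println(changes) // [instance_type]
}
```

An empty result would correspond to the "resolve existing drift records" step: the configuration has returned to the baseline state.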
Vulnerability scan

Scans container images and resource artifacts for CVEs.

What it does:
  • Identifies scannable artifacts (container images from Kubernetes pods, EC2 AMIs)
  • Runs Trivy against each artifact
  • Enriches findings with NVD metadata (CVSS scores and descriptions)
  • Creates or updates vulnerability records
  • Closes findings for artifacts that no longer have the vulnerability
Default schedule: 0 2 * * * (daily at 02:00)
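Trivy emits a JSON report (e.g. via `trivy image --format json <image>`) that the job would parse into vulnerability records. A sketch of that parsing step, covering only a small subset of Trivy's report fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// trivyReport models a subset of Trivy's JSON report; all other fields
// in the real report are ignored by json.Unmarshal.
type trivyReport struct {
	Results []struct {
		Vulnerabilities []struct {
			VulnerabilityID string
			PkgName         string
			Severity        string
		}
	}
}

// parseFindings extracts CVE ID → severity pairs from a Trivy JSON
// report, ready to be created or updated as vulnerability records.
func parseFindings(raw []byte) (map[string]string, error) {
	var r trivyReport
	if err := json.Unmarshal(raw, &r); err != nil {
		return nil, err
	}
	out := map[string]string{}
	for _, res := range r.Results {
		for _, v := range res.Vulnerabilities {
			out[v.VulnerabilityID] = v.Severity
		}
	}
	return out, nil
}

func main() {
	sample := []byte(`{"Results":[{"Vulnerabilities":[
		{"VulnerabilityID":"CVE-2023-1234","PkgName":"openssl","Severity":"HIGH"}]}]}`)
	findings, _ := parseFindings(sample)
	fmt.Println(findings["CVE-2023-1234"]) // HIGH
}
```

The NVD enrichment step would then look up each `VulnerabilityID` to attach CVSS scores and descriptions, and any stored finding whose CVE no longer appears in the latest report gets closed.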
Cost sync

Fetches and stores billing data from all connected cloud providers.

What it does:
  • Calls AWS Cost Explorer, GCP BigQuery, or Azure Cost Management for each connected provider
  • Inserts daily cost records into the database
  • Runs the anomaly detection check on the new data point
  • Generates and caches updated cost forecasts
Default schedule: 0 3 * * * (daily at 03:00)
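The anomaly detection check isn't specified here; a common approach is to flag a new daily cost that sits more than a few standard deviations from the mean of a trailing window. A hypothetical sketch of that idea (not InfraAudit's actual detector):

```go
package main

import (
	"fmt"
	"math"
)

// isAnomalous flags the latest daily cost if it deviates from the mean
// of the trailing window by more than threshold standard deviations.
func isAnomalous(history []float64, latest, threshold float64) bool {
	if len(history) < 2 {
		return false // not enough data to judge
	}
	var sum float64
	for _, v := range history {
		sum += v
	}
	mean := sum / float64(len(history))
	var variance float64
	for _, v := range history {
		variance += (v - mean) * (v - mean)
	}
	std := math.Sqrt(variance / float64(len(history)))
	if std == 0 {
		return latest != mean // flat history: any deviation is anomalous
	}
	return math.Abs(latest-mean)/std > threshold
}

func main() {
	history := []float64{100, 102, 98, 101, 99} // trailing daily costs
	fmt.Println(isAnomalous(history, 100, 3.0)) // false: a typical day
	fmt.Println(isAnomalous(history, 180, 3.0)) // true: sudden spike
}
```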
Compliance check

Runs all enabled compliance frameworks against current resource snapshots.

What it does:
  • Evaluates all controls for each enabled framework against cached resource configuration
  • Creates assessment and control_result records
  • Triggers alerts for controls that newly fail
  • Triggers recommendation generation for failed controls
Default schedule: 0 4 * * * (daily at 04:00)
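Conceptually, each control is a predicate over a resource's cached configuration, and the job records one pass/fail result per control. A minimal sketch of that evaluation loop (type and field names are illustrative, not InfraAudit's schema):

```go
package main

import "fmt"

// Control pairs an identifier with a predicate over a resource's
// cached configuration snapshot.
type Control struct {
	ID    string
	Check func(config map[string]string) bool
}

// evaluate runs every control against a configuration snapshot and
// returns pass/fail per control ID — the basis for control_result
// records. Newly failing controls would then trigger alerts and
// recommendation generation.
func evaluate(controls []Control, config map[string]string) map[string]bool {
	results := make(map[string]bool, len(controls))
	for _, c := range controls {
		results[c.ID] = c.Check(config)
	}
	return results
}

func main() {
	controls := []Control{
		{ID: "s3-encryption", Check: func(c map[string]string) bool {
			return c["encryption"] == "aes256"
		}},
		{ID: "s3-public-block", Check: func(c map[string]string) bool {
			return c["public_access"] == "blocked"
		}},
	}
	config := map[string]string{"encryption": "aes256", "public_access": "open"}
	results := evaluate(controls, config)
	fmt.Println(results["s3-encryption"], results["s3-public-block"]) // true false
}
```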

Trigger a job manually

You don’t have to wait for the next scheduled run. Trigger any job on demand:
# Trigger immediately and return
infraudit job trigger <job-id>

# Trigger and block until the job completes
infraudit job trigger <job-id> --wait
Via the API:
curl -X POST http://localhost:8080/api/v1/jobs/<job-id>/trigger \
  -H "Authorization: Bearer $TOKEN"

View execution history

Each job execution record stores the job type, status, start and end times, duration, log output (last 1,000 lines), and any error message.
# List recent job executions
infraudit job list

# Show the last 10 executions of the drift detection job
infraudit job executions --job-type drift_detection --limit 10

Override the default schedule

Set any job’s schedule with an environment variable before starting the API. Quote the values, since cron expressions contain spaces:
RESOURCE_SYNC_SCHEDULE="0 */3 * * *"
DRIFT_DETECTION_SCHEDULE="0 */2 * * *"
VULNERABILITY_SCAN_SCHEDULE="0 1 * * *"
COST_SYNC_SCHEDULE="0 4 * * *"
COMPLIANCE_CHECK_SCHEDULE="0 5 * * *"

Leader election for multi-instance deployments

When you run multiple API instances, only one should execute scheduled jobs to avoid duplicate scans and findings. InfraAudit implements leader election via a database lock in the jobs table. Each instance tries to acquire the lock at startup. Only the leader instance runs the scheduler. If the leader goes down, another instance acquires the lock within 60 seconds.
Leader election requires all API instances to share the same PostgreSQL database. If instances are using separate databases, each will run its own scheduler independently.