# Examples & recipes

Copy-paste starting points for common scan scenarios. Pair each config with its matching invocation; secrets stay in env vars.
## 1. Minimum viable config

Public site, passive-only, SARIF out.
```yaml
version: "1"
name: "marketing-site"

target:
  url: "https://marketing.example.com"

auth:
  strategy: "none"

scan:
  enable_passive: true
  enable_active: false

reporting:
  output: "sarif"
  output_file: "shadownet-report.sarif"
```
```shell
shadownet scan
```
## 2. Form-authenticated web app

A commit-friendly config for a typical login-protected app.
```yaml
version: "1"
name: "acme-webapp"
env: "${DEPLOY_ENV:-staging}"

target:
  url: "https://app.example.com"
  ignore_third_party: true

crawl:
  type: "hybrid"
  max_depth: 4
  max_pages: 150

auth:
  strategy: "form"
  login_url: "https://app.example.com/login"
  username: "${SCAN_USERNAME}"

scan:
  depth: "smart"
  enable_active: true

reporting:
  output: "sarif"
  output_file: "reports/shadownet.sarif"
  fail_on: "high"
```
```shell
shadownet scan \
  --password "$SCAN_PASSWORD" \
  --org-id "$LEVOAI_ORG_ID" \
  --env-id "$LEVOAI_ENV_ID"
```
## 3. API-only scan (Bearer token)

No browser-UI crawling: hit the API directly, fuzz it, ship a report.
```yaml
version: "1"
name: "acme-api"

target:
  url: "https://api.example.com"

crawl:
  type: "standard"
  max_pages: 500

auth:
  strategy: "token"
  headers:
    - "X-Scan-Source: shadownet"

scan:
  enable_active: true
  enable_graphql: true
  http_methods: ["GET", "POST", "PUT", "PATCH", "DELETE"]
  inject_locations: ["query", "body", "header", "path"]

reporting:
  output: "json"
  output_file: "reports/api-findings.json"
  fail_on: "high"
```
```shell
shadownet scan --token "$SCAN_TOKEN"
```
## 4. CI-friendly (GitHub Actions)

An opinionated config for PR-gating scans: fast, Smart depth, fail on high.
```yaml
# levo-dast.yml (checked into the app repo)
version: "1"
name: "acme-webapp"
env: "ci"

target:
  url: "${TARGET_URL}"

crawl:
  type: "standard"
  max_depth: 2
  max_pages: 50

auth:
  strategy: "form"
  login_url: "${LOGIN_URL}"
  username: "${SCAN_USERNAME}"

scan:
  depth: "smart"
  enable_passive: true
  enable_active: true

reporting:
  output: "sarif"
  output_file: "shadownet.sarif"
  fail_on: "high"
  send_issues: true
```
```yaml
# .github/workflows/dast.yml
- name: DAST scan
  run: |
    docker run --rm \
      -v "$PWD/levo-dast.yml:/work/levo-dast.yml:ro" \
      -v "$PWD:/work/out" \
      -w /work \
      -e TARGET_URL -e LOGIN_URL -e SCAN_USERNAME -e SCAN_PASSWORD \
      -e LEVOAI_AUTH_KEY -e LEVOAI_ORG_ID -e LEVOAI_ENV_ID \
      ghcr.io/levoai/levoai-shadownet:latest scan
```
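Bare `-e VAR` flags only forward variables that already exist in the step's environment, so the step needs a matching `env:` block. A sketch, assuming the values are stored under these names in the repository's secrets and variables (adjust to your setup):

```yaml
# Hypothetical env block for the "DAST scan" step; names are assumptions.
env:
  TARGET_URL: ${{ vars.TARGET_URL }}
  LOGIN_URL: ${{ vars.LOGIN_URL }}
  SCAN_USERNAME: ${{ secrets.SCAN_USERNAME }}
  SCAN_PASSWORD: ${{ secrets.SCAN_PASSWORD }}
  LEVOAI_AUTH_KEY: ${{ secrets.LEVOAI_AUTH_KEY }}
  LEVOAI_ORG_ID: ${{ vars.LEVOAI_ORG_ID }}
  LEVOAI_ENV_ID: ${{ vars.LEVOAI_ENV_ID }}
```

With `docker run -e NAME` (no `=value`), Docker copies the value from the caller's environment, so no secret ever lands in the workflow file or the checked-in config.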
## 5. Thorough weekly release scan

A release-candidate scan that takes longer but digs deeper: it turns on every active-test category and cranks `attack_strength` up to `insane`.
```yaml
version: "1"
name: "acme-webapp"
env: "staging"

target:
  url: "https://staging.example.com"

crawl:
  type: "hybrid"
  max_depth: 6
  max_pages: 500
  max_global_clicks: 2000
  timeout: 1800

scan:
  depth: "thorough"
  attack_strength: "insane"
  enable_ai: true
  max_payloads: 200
  timeout: 7200
  test_timeout: 60
  active_testing_categories:
    xss: true
    sqli: true
    path_traversal: true
    ssrf: true
    xxe: true
    command_injection: true
    open_redirect: true
    csrf: true
    idor: true
    cve: true
    js: true
    dom: true

reporting:
  output: "sarif"
  output_file: "reports/weekly-${DEPLOY_ENV}.sarif"
  fail_on: "medium"
```
## 6. Environment interpolation patterns
```yaml
# Required variable — scan fails if SCAN_USERNAME is unset.
auth:
  username: "${SCAN_USERNAME}"

# Optional with fallback on a top-level key.
env: "${DEPLOY_ENV:-staging}"

# Interpolation inside a URL.
target:
  url: "https://${REGION:-us-east}.app.example.com"

# Interpolation in a header list.
auth:
  headers:
    - "X-Build-Sha: ${GITHUB_SHA}"
    - "X-Run-Id: ${GITHUB_RUN_ID}"
```
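The `${VAR:-default}` form mirrors POSIX shell parameter expansion, where the fallback applies when the variable is unset or empty. A quick shell check of that behavior (whether the config loader also treats an empty string as unset is an assumption worth verifying against the schema reference):

```shell
# ${VAR:-default}: fallback when the variable is unset or empty.
unset DEPLOY_ENV
echo "${DEPLOY_ENV:-staging}"   # prints "staging"

DEPLOY_ENV=""
echo "${DEPLOY_ENV:-staging}"   # prints "staging" (empty triggers :- too)

DEPLOY_ENV="prod"
echo "${DEPLOY_ENV:-staging}"   # prints "prod"
```

Note that `${VAR-default}` (no colon) would keep an empty string; the recipes on this page all use the colon form.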
## 7. What not to do

These will be rejected by the config loader:
```yaml
auth:
  password: "hunter2"           # rejected — use $SCAN_PASSWORD
  token: "eyJhbGciOi..."        # rejected — use $SCAN_TOKEN
  local_storage_b64: "eyJ..."   # rejected — use $SCAN_LOCAL_STORAGE_B64

# No top-level `levo:` block — identity comes from CLI flags / env vars:
# --org-id / $LEVOAI_ORG_ID, --env-id / $LEVOAI_ENV_ID, --app-id / $LEVOAI_APP_ID

ai:
  api_key: "sk-ant-..."         # rejected — use $ANTHROPIC_API_KEY
```
Unknown top-level keys (`app:`, `vulnerabilities:`, `timeouts:`) are also rejected: the loader runs with `extra = "forbid"` and prints the offending YAML path along with the env var or flag to use instead.
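These mistakes can also be caught before the loader ever runs, for example with a pre-commit-style grep for secret-like keys holding literal values rather than `${VAR}` references. A sketch; the `check_config` helper, the key list, and the file paths are illustrative, not part of the tool:

```shell
# Reject configs that hard-code secrets instead of ${VAR} references.
# The key list mirrors the rejected fields above; extend as needed.
check_config() {
  ! grep -En '^[[:space:]]*(password|token|api_key|local_storage_b64):[[:space:]]*"[^$]' "$1"
}

# A config that only uses env-var references passes:
cat > /tmp/clean.yml <<'EOF'
auth:
  strategy: "form"
  username: "${SCAN_USERNAME}"
EOF
check_config /tmp/clean.yml && echo "clean.yml ok"

# A hard-coded password fails, and grep prints the offending line:
cat > /tmp/dirty.yml <<'EOF'
auth:
  password: "hunter2"
EOF
check_config /tmp/dirty.yml || echo "dirty.yml rejected"
```

Wiring this into a pre-commit hook keeps `hunter2`-style literals from ever reaching a commit, independent of the loader's own checks.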
## Next
- Schema reference — every field.
- CI/CD integration — plug these configs into pipelines.