ODIN HEL Policy Engine

Deterministic, lightweight egress policy engine for AI / LLM applications.
Profiles + Allowlists + (optional) Rego (OPA) = quick, auditable decisions.

HEL = Host Enforcement Layer

Deterministic

Pure function decisions: same input context → same allow/deny outcome. Simple to reason about & test.

Lightweight

Embed directly in your worker / service process. No network hop, no sidecar bloat.

Auditable

Decision traces with normalized context + rule id; export for compliance review.

Why HEL?

Modern AI systems call out to many model & data APIs. You need fast, explainable allow/deny decisions (and maybe a path to more sophisticated policy later) without dragging in a heavy gateway. The Host Enforcement Layer (HEL) gives you:

  • Profiles (env, service, tenant) for layered scoping
  • Allowlists / denylists for model families, vendors, regions
  • Structured predicates (latency, token count, cost ceilings)
  • Optional Rego injection (OPA) for advanced edge cases
  • Deterministic evaluation graph with hashable inputs
  • Decision receipts (JSON) for later replay & diffing
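
As a rough sketch of what layered profile checks look like in practice, here is a minimal pure-function evaluator. The `Profile` dataclass and `evaluate` function are illustrative assumptions, not the library's actual API; the rule names mirror the profile keys from the example below.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Profile:
    # Hypothetical profile shape mirroring the prod-default example.
    allow_models: Tuple[str, ...]
    deny_vendors: Tuple[str, ...]
    max_latency_ms: int
    max_tokens: int
    max_cost_usd: float

def evaluate(profile: Profile, ctx: dict) -> Tuple[bool, str]:
    """Pure function: the same context always yields the same (allow, rule_id)."""
    if ctx["vendor"] in profile.deny_vendors:
        return False, "deny_vendors"
    if ctx["model"] not in profile.allow_models:
        return False, "allow_models"
    if ctx["latency_ms"] > profile.max_latency_ms:
        return False, "max_latency_ms"
    if ctx["tokens"] > profile.max_tokens:
        return False, "max_tokens"
    if ctx["cost_usd"] > profile.max_cost_usd:
        return False, "max_cost_usd"
    return True, "base_allow"

prod = Profile(("gpt-4o", "claude-3.5"), ("unknown",), 4000, 16000, 0.50)
print(evaluate(prod, {"vendor": "openai", "model": "gpt-4o",
                      "latency_ms": 1200, "tokens": 900, "cost_usd": 0.02}))
# -> (True, 'base_allow')
```

Because every check is a plain predicate over the context dict, decisions are trivially unit-testable and replayable.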

Example (Pseudo‑Code)

# profile: prod-default
allow_models: ["gpt-4o", "claude-3.5", "gemini-1.5-pro"]
deny_vendors: ["unknown"]
max_latency_ms: 4000
max_tokens: 16000
max_cost_usd: 0.50

# optional Rego (if enabled)
# package hel
# default allow = false
# allow { input.model == "gpt-4o"; input.cost_usd < 0.50 }

A single in-process evaluation runs in roughly O(1) over the flattened, keyed feature set; the optional Rego policy executes only if the base profile passes.
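
Hashable inputs and JSON receipts can be sketched with stdlib tools alone; the `make_receipt` helper below is a hypothetical shape, not the engine's real receipt format. The key idea is canonical JSON: serializing with sorted keys and fixed separators means identical contexts always produce identical hashes, which is what makes later replay and diffing reliable.

```python
import hashlib
import json

def make_receipt(ctx: dict, allow: bool, rule_id: str) -> dict:
    # Canonical JSON (sorted keys, fixed separators) guarantees that
    # logically identical contexts hash identically.
    canon = json.dumps(ctx, sort_keys=True, separators=(",", ":"))
    return {
        "input_hash": hashlib.sha256(canon.encode("utf-8")).hexdigest(),
        "context": ctx,
        "allow": allow,
        "rule_id": rule_id,
    }

r1 = make_receipt({"model": "gpt-4o", "cost_usd": 0.02}, True, "base_allow")
r2 = make_receipt({"cost_usd": 0.02, "model": "gpt-4o"}, True, "base_allow")
assert r1["input_hash"] == r2["input_hash"]  # key order does not matter
```

Receipts like this can be stored as plain JSON lines and diffed after a profile edit.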

Pluggable Context

Inject request metadata, user tier, spend counters, model taxonomy facts.
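
One way to picture context injection, under the assumption that the engine consumes a single flat dict of namespaced keys (the `req.`, `user.`, `spend.`, and `model.` prefixes here are illustrative, not a documented convention):

```python
def build_context(request_meta: dict, user_tier: str,
                  spend_usd: float, taxonomy: dict) -> dict:
    """Flatten heterogeneous facts into one keyed feature set."""
    ctx = {f"req.{k}": v for k, v in request_meta.items()}
    ctx["user.tier"] = user_tier
    ctx["spend.month_usd"] = spend_usd
    # Look up the model family from taxonomy facts; fall back to "unknown".
    ctx["model.family"] = taxonomy.get(request_meta.get("model"), "unknown")
    return ctx

ctx = build_context({"model": "gpt-4o", "region": "us-east-1"},
                    "pro", 12.40, {"gpt-4o": "gpt"})
print(ctx["model.family"])  # gpt
```

A flat, namespaced dict keeps the context hashable and easy to serialize into a receipt.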

Replay & Diff

Re-run recorded decisions after profile edits to see exactly which outcomes change before shipping.
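
A replay-and-diff pass can be sketched as re-evaluating recorded contexts under the old and new profiles and collecting flipped outcomes. `diff_decisions` and the receipt shape are assumptions for illustration:

```python
def diff_decisions(receipts, evaluate_old, evaluate_new):
    """Re-run recorded contexts under both profiles; collect flipped outcomes."""
    changed = []
    for r in receipts:
        before = evaluate_old(r["context"])
        after = evaluate_new(r["context"])
        if before != after:
            changed.append({"context": r["context"],
                            "before": before, "after": after})
    return changed

# Toy evaluators standing in for two profile versions: a cost ceiling
# tightened from $0.50 to $0.25.
old = lambda ctx: ctx["cost_usd"] <= 0.50
new = lambda ctx: ctx["cost_usd"] <= 0.25
recorded = [{"context": {"cost_usd": 0.40}}, {"context": {"cost_usd": 0.10}}]
delta = diff_decisions(recorded, old, new)
print(len(delta))  # 1: only the $0.40 request flips from allow to deny
```

Because decisions are deterministic, the delta is exact: every flipped outcome traces back to the profile edit, not to ambient state.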

Language First

Ergonomic Python today; WASM core path for Go / Rust later.

Roadmap Highlights

  • WASM policy core w/ language bindings
  • OpenTelemetry decision span export
  • CLI profile validation & dry runs
  • Profile snapshot signing
  • In-memory LRU for hot context
  • VS Code schema completions