AI Runtime Security
Your AI passed every test. It still hallucinated in production.
Most organisations have no controls between the model and the damage it can do. AIRS is a vendor-neutral, risk-proportionate framework for running AI safely in production: layered runtime controls you can match to your actual risk, not a compliance checklist.
Three domains, one framework. Foundation secures single-agent systems, MASO secures multi-agent orchestration, and Infrastructure secures the platforms underneath. The SDK turns all three into code.
New here? Start with what AI Runtime Security is.
Three Questions. Three Doors.
-
How do I run AI securely?
Ship your first LLM feature with the controls that matter most. Seven controls, one checklist, one decision tree for whether you need to go deeper.
Start · AIRSLite · Quick Start
-
How do I secure AI while it is running?
The framework itself: four independent control layers for single-agent systems, ten control domains for multi-agent orchestration, PACE resilience for graceful degradation.
-
How do I get the most out of AI safely?
Role-specific entry points. Each page tells you what matters for your role, why, where to start reading, and what you can do on Monday morning.
Find Your Role
Nine role-specific entry points. Each one frames AI runtime security through the lens of a single job, with a starting path, Monday-morning actions, and answers to the pushback you will get.
-
How do I secure AI when the threat model is unlike anything I've secured before?
-
How do I quantify AI risk and prove to the board that controls are working?
-
How do I demonstrate that AI deployments meet regulatory obligations, with evidence?
-
Your programme already solves the problem AI agents create. How do you extend it?
-
How do I govern AI across my technology portfolio when every product runs different agents?
-
Where do controls go in my pipeline, what do they cost, and how do they fail?
-
What do I actually build? Give me implementation patterns, not governance theory.
-
How do I manage AI risk across my product lines when agents are operational?
-
What controls are required to ship AI, and what do they cost in time and money?
All nine roles, grouped and explained
Framework at a Glance
| Layer | What It Covers | Entry Point |
|---|---|---|
| Foundation | Three-layer behavioural controls for single-agent deployments. 80 infrastructure controls across 11 domains. | Architecture |
| MASO | Ten control domains for multi-agent orchestration. PACE resilience. OWASP Agentic Top 10 coverage. | MASO |
| Implementation | Platform patterns for AWS, Azure, Databricks. Tool access controls. Agentic infrastructure. | Infrastructure |
| SDK | Python reference implementation. Guardrails, judge evaluation, circuit breakers in code. | SDK |
Four Control Layers
A runtime control plane for AI behaviour. Each layer operates independently, and each can run in detect-only mode before you graduate it to enforcing. No single failure compromises the system.
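The detect-then-enforce graduation described above can be sketched in a few lines. This is an illustrative sketch, not the AIRS SDK API: the `Mode`, `ControlLayer`, and `LayerResult` names are hypothetical, and the check function is a stand-in for a real policy.

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    DETECT = "detect"    # log violations, never block
    ENFORCE = "enforce"  # block violations

@dataclass
class LayerResult:
    allowed: bool
    reason: str = ""

class ControlLayer:
    """One independent layer; run in DETECT first, graduate to ENFORCE."""

    def __init__(self, name, check, mode=Mode.DETECT):
        self.name, self.check, self.mode = name, check, mode
        self.violations = []  # audit trail used to tune the policy before enforcing

    def evaluate(self, output: str) -> LayerResult:
        ok, reason = self.check(output)
        if ok:
            return LayerResult(True)
        self.violations.append((self.name, reason))
        if self.mode is Mode.DETECT:
            return LayerResult(True, f"detected: {reason}")  # record, but allow
        return LayerResult(False, reason)
```

Running a layer in `DETECT` for a while before flipping it to `ENFORCE` gives you a false-positive baseline from `violations` without ever blocking real traffic.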
-
Guardrails
Fast, deterministic boundaries: content policies, scope constraints, tool-use permissions. Catches the obvious failures at machine speed. ~10ms per check.
-
Model-as-Judge
A separate model evaluates outputs against policy, context, and intent before they reach users. Catches the subtle failures guardrails miss. ~500ms to 5s, sync or async by risk tier.
-
Human Oversight
Escalation paths, audit trails, and intervention capability for high-stakes decisions. Scope scales with consequence.
-
Circuit Breakers
Emergency failsafes that halt AI operations and activate safe fallbacks when controls fail or compromise is confirmed.
How the layers work together · End-to-end walkthrough: the Chevrolet $1 chatbot · Cost & latency by tier
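How the layers compose can be sketched as a short pipeline: a cheap deterministic guardrail runs first, a judge check second, and a circuit breaker halts everything after repeated failures. This is a toy sketch under assumptions, not the SDK implementation: the `guardrail` and `judge` functions are trivial stand-ins (a real judge is a separate model call), and the `$1`-sale check is a nod to the Chevrolet chatbot walkthrough linked above.

```python
class CircuitBreaker:
    """Opens after `threshold` consecutive blocked outputs; stays open."""

    def __init__(self, threshold=3):
        self.failures, self.threshold = 0, threshold
        self.open = False

    def record(self, blocked: bool):
        self.failures = self.failures + 1 if blocked else 0
        if self.failures >= self.threshold:
            self.open = True  # halt AI operations, serve the safe fallback

def guardrail(text):
    # Fast, deterministic boundary: a phrase denylist (~machine speed).
    banned = ["legally binding", "no takesies backsies"]
    return not any(b in text.lower() for b in banned)

def judge(text):
    # Stand-in for a separate evaluator model scoring policy compliance.
    return not ("sell" in text.lower() and "$1" in text)

SAFE_FALLBACK = "I can't help with that. A human agent will follow up."

def respond(draft, breaker):
    if breaker.open:
        return SAFE_FALLBACK          # breaker tripped: safe fallback only
    blocked = not (guardrail(draft) and judge(draft))
    breaker.record(blocked)
    return SAFE_FALLBACK if blocked else draft
```

Each layer fails independently here: the denylist can miss what the judge catches, and the breaker limits blast radius when both are being systematically bypassed.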
Insights
The why before the how. Each article identifies a specific problem that the controls then solve.
Why guardrails aren't enough · The MCP problem · The orchestrator problem · What works · All insights
Related
-
AI Secured by Design
Shifts security left, embedding it into AI systems from the start rather than bolting it on after deployment.
-
MASO Learning Site
Structured guides, walkthroughs, and practical examples for the Multi-Agent Security Operations framework.
Created by Jonathan Gill · feedback@airuntimesecurity.io