AI Runtime Security

Your AI passed every test. It still hallucinated in production.

Most organisations have no controls between the model and the damage it can do. AIRS gives you four layers of runtime defence: guardrails, a judge model, human oversight, and circuit breakers. Together they let you match controls to your actual risk, not a compliance checklist.

Vendor-neutral. Risk-proportionate. Built for regulated industries.


Four Control Layers

Each layer operates independently. No single failure compromises the system.

  • Guardrails

    Fast, deterministic boundaries: content policies, scope constraints, tool-use permissions. Catches the obvious failures at machine speed.

  • Model-as-Judge

    A separate model evaluates outputs against policy, context, and intent before they reach users. Catches the subtle failures guardrails miss.

  • Human Oversight

    Escalation paths, audit trails, and intervention capability for high-stakes decisions. Scope scales with consequence.

  • Circuit Breakers

    Emergency failsafes that halt AI operations and activate safe fallbacks when controls fail or compromise is confirmed.

How the layers work together
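The four layers above compose into a single release path: deterministic checks first, judge evaluation next, human escalation for anything uncertain, and a circuit breaker that overrides everything. A minimal sketch of that flow, with illustrative names and a stubbed judge (none of this is the AIRS SDK's actual API):

```python
import re

# Example content policy for the guardrail layer (illustrative pattern).
BLOCKED_PATTERNS = [re.compile(r"(?i)ssn:\s*\d{3}-\d{2}-\d{4}")]

def guardrail(output: str) -> bool:
    """Layer 1: fast, deterministic boundary checks."""
    return not any(p.search(output) for p in BLOCKED_PATTERNS)

def judge(output: str, context: str) -> float:
    """Layer 2: a separate model scores the output against policy.
    Stubbed here; a real judge would call a second model."""
    return 0.1 if "refund approved" in output.lower() else 0.9

def release(output: str, context: str, breaker_open: bool) -> str:
    if breaker_open:                      # Layer 4: circuit breaker tripped
        return "[safe fallback response]"
    if not guardrail(output):             # Layer 1: hard block
        return "[blocked by guardrail]"
    if judge(output, context) < 0.5:      # Layer 2: low score escalates
        return "[held for human review]"  # Layer 3: human oversight
    return output

print(release("Your order has shipped.", "support chat", breaker_open=False))
# prints "Your order has shipped."
```

Each layer fails independently: a guardrail miss still faces the judge, a judge miss still leaves an audit trail for humans, and the breaker halts everything when controls themselves fail.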


The Problem

AI security focuses almost entirely on the model layer: training data, prompt injection, pre-deployment red-teaming. This misses the point. The risk that matters is what the model does at runtime, in production, with real data and real users. Guardrails alone are a single point of failure. Process gates slow delivery without reducing harm. In every other security domain we layer controls and assume any single one will fail. AI security has not caught up.

Why AI security is a runtime problem


Start Here

  • New to AIRS?

    Seven controls you can implement in an afternoon. Enough runtime safety to go live, enough observability to learn, enough structure to decide where to invest next.

    Minimum Viable AIRS

  • Know Your Role?

    Entry points for CISOs, architects, risk teams, CIOs, product owners, AI engineers, compliance, and insider threat teams. Each page tells you what matters for your role, why, and where to start.

    Stakeholder views

  • Want the Full Framework?

    Reading paths organised by depth and interest. Pick a track and follow it.

    Start here


Multi-Agent Security (MASO)

When agents coordinate autonomously, every single-agent risk compounds. An injection in one agent propagates through inter-agent messages. Hallucinations become another agent's facts. Delegation creates transitive authority chains nobody authorised.

MASO adds ten control domains, three implementation tiers, and PACE resilience to handle what single-agent controls cannot: inter-agent communication integrity, non-human identity management, execution containment, and kill switch architecture.

MASO Framework · Interactive Demo


Framework at a Glance

| Layer | What It Covers | Entry Point |
| --- | --- | --- |
| Foundation | Three-layer behavioural controls for single-agent deployments. 80 infrastructure controls across 11 domains. | Architecture |
| MASO | Ten control domains for multi-agent orchestration. PACE resilience. OWASP Agentic Top 10 coverage. | MASO |
| Implementation | Platform patterns for AWS, Azure, Databricks. Tool access controls. Agentic infrastructure. | Infrastructure |
| SDK | Python reference implementation. Guardrails, judge evaluation, circuit breakers in code. | SDK |
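To make the circuit-breaker idea concrete: trip after repeated control failures, then serve a safe fallback instead of calling the model. This is a sketch in the spirit of the SDK, with hypothetical names, not the SDK's actual classes:

```python
class CircuitBreaker:
    """Trips open after `threshold` consecutive control failures."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def record(self, control_passed: bool) -> None:
        if control_passed:
            self.failures = 0  # a success resets the count
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # halt AI operations

    def call(self, model_fn, fallback):
        if self.open:
            return fallback()  # safe fallback while tripped
        return model_fn()

breaker = CircuitBreaker(threshold=3)
for _ in range(3):
    breaker.record(control_passed=False)
print(breaker.call(lambda: "model output", lambda: "[static FAQ answer]"))
# prints "[static FAQ answer]"
```

A real implementation would add a half-open state and a reset policy; the point here is only that the fallback path exists independently of the model.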

Insights

The why before the how. Each article identifies a specific problem that the controls then solve.

Foundations: Why guardrails aren't enough · Infrastructure beats instructions · Humans remain accountable

Emerging challenges: The MCP problem · The orchestrator problem · When agents talk to agents · The long-horizon problem

Analysis: What works · What scales · State of reality · The constraint curve

All insights


Regulatory Alignment

The framework maps to EU AI Act (Articles 9, 14, 15), NIST AI RMF, ISO 42001, OWASP LLM Top 10 (2025), OWASP Agentic Top 10 (2026), DORA, and APRA CPS 234. Effective controls generate compliance evidence as a by-product of normal operation.
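"Compliance evidence as a by-product" can be as simple as emitting a structured audit event whenever a runtime control fires. A minimal sketch; the field names are assumptions for illustration, not a schema mandated by any of the regulations above:

```python
import json, time, uuid

def audit_event(control: str, decision: str, detail: str) -> str:
    """Return one append-only audit record as a JSON line."""
    return json.dumps({
        "id": str(uuid.uuid4()),   # unique event id
        "ts": time.time(),         # when the control fired
        "control": control,        # e.g. "guardrail", "judge", "circuit-breaker"
        "decision": decision,      # e.g. "allow", "block", "escalate"
        "detail": detail,          # human-readable reason
    })

line = audit_event("judge", "escalate", "low policy score")
print(json.loads(line)["decision"])
# prints "escalate"
```

Because the controls already run on every request, the evidence trail accumulates without a separate compliance workstream.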

EU AI Act crosswalk


  • AI Secured by Design

    Shifts security left, embedding it into AI systems from the start rather than bolting it on after deployment.

    aisecuredbydesign.io

  • MASO Learning Site

    Structured guides, walkthroughs, and practical examples for the Multi-Agent Security Operations framework.

    airuntimesecurity.co.za