Library
Everything behind the framework. Research articles, implementation guidance, platform patterns, reference code, regulatory mappings, templates, and examples. Use the cards below to find the right shelf.
Reading and research
- The why before the how. Forty-six research articles grouped into six themes: foundations, architecture, threats, agentic AI, models and technology, and evidence and analysis. Start with the Core Six if you are new to the thesis.
- Curated AI runtime security news linked to the framework's controls and domains. Useful for tracking where the threat landscape is moving and which controls new incidents validate.
Implementation guidance
- Eighty infrastructure controls across seven domains (identity, logging, network, data, secrets, supply chain, incident response) plus agentic extensions (sandboxing, delegation, tool access). Includes standards mappings to ISO 42001, NIST, OWASP, and platform patterns for AWS, Azure, and Databricks.
- The guided path from "we have a business problem" to "we have a governed AI system in production". Twelve articles covering business alignment, use-case filtering, data reality, human factors, progression, framework tensions, maturity levels, and the return loop that keeps systems honest after launch.
- Python reference implementation. Guardrails, Judge evaluation, circuit breakers, PACE resilience, pipeline, agent security, telemetry, FastAPI integration, examples, and what the tests prove. The framework, in runnable code.
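To give a flavour of what the reference implementation covers, here is a minimal sketch of how a guardrail and a circuit breaker can compose in front of a model call. All names here (`Guardrail`, `CircuitBreaker`, `handle`, the thresholds) are illustrative assumptions for this page, not the actual API of the reference code.

```python
from dataclasses import dataclass

@dataclass
class Guardrail:
    """Toy input filter: blocks prompts containing any denied phrase."""
    denied: tuple = ("ignore previous instructions",)

    def check(self, text: str) -> bool:
        lowered = text.lower()
        return not any(phrase in lowered for phrase in self.denied)

@dataclass
class CircuitBreaker:
    """Opens (refuses traffic) after `threshold` consecutive guardrail failures."""
    threshold: int = 3
    failures: int = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def record(self, passed: bool) -> None:
        # Any pass resets the failure streak; a failure extends it.
        self.failures = 0 if passed else self.failures + 1

def handle(prompt: str, guard: Guardrail, breaker: CircuitBreaker) -> str:
    """Route a prompt through the guardrail unless the breaker has tripped."""
    if breaker.open:
        return "circuit open: request refused"
    passed = guard.check(prompt)
    breaker.record(passed)
    return "forwarded to model" if passed else "blocked by guardrail"
```

The real implementation adds judge-based evaluation, telemetry, and PACE fallbacks around this core loop; the sketch only shows the fail-closed shape of the pattern.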
Reference
- Standards alignment: EU AI Act crosswalk and risk tiering, ISO 42001 alignment and clause mapping, ISO 27001 alignment, NIST IR 8596, ETSI SAI, AI governance operating model, and high-risk financial services guidance.
- Deep dives grouped by purpose: judge internals (model selection, distillation, precedents), detection and SOC (integration, content packs, anomaly detection, graph monitoring), control catalogues (agentic, endpoint hardening, RAG security), and economics and identity (cost and latency, economic governance, NHI lifecycle).
- Ready-to-use artefacts: AI incident playbook, threat model template, judge prompt examples, data retention guidance, testing guidance, vendor assessment questionnaire, model card template.
- Worked examples of the framework applied end-to-end: customer service AI, internal doc assistant, credit decision support, high-volume customer communications, fraud analytics, and a multi-agent risk demo.
- Downloadable resources, including position papers and practitioner training materials.
Project
- Changelog, maturity and validation, incidents the framework has been validated against, implementation guide, and the full references list.
- About the discipline (what AI Runtime Security covers, and what it does not) and about the author.
Looking for the framework itself?
The framework lives under Framework in the top nav, split into Core Controls for single-agent deployments and MASO for multi-agent orchestration. This library sits alongside it as supporting material.