About the Author

Jonathan Gill

Jonathan Gill is a cybersecurity practitioner with more than 30 years in information technology and more than 20 years in enterprise security, now focused on the security challenges of generative and agentic AI in regulated environments.

He develops threat models, risk frameworks, and practical mitigations for AI systems, with particular emphasis on runtime behavioral risks, loss-of-control scenarios, and oversight gaps in autonomous AI deployments.

Current Focus

AI security governance at enterprise scale: designing regulator-ready AI security control frameworks, assessing generative and agentic AI solutions, and defining security guardrails for cloud-native AI platforms. Translating complex technical risk into actionable guidance for engineering teams, regulators, and executive leadership.

He contributes to the AI Runtime Security (AIRS) discipline, the practice of identifying, assessing, and treating threats to AI system behaviour in production environments. The AIRS Framework is a reference architecture for this discipline: practical, open-source, and built for regulated enterprises that need more than pre-deployment evaluation.

Expertise

  • Agentic AI risk modelling: threat models for multi-agent systems, long-horizon agentic behaviour, orchestrator compromise, tool-use escalation, and loss-of-control scenarios
  • AI security controls design: three-layer architecture (guardrails, Model-as-Judge, human oversight) with defined failure modes and escalation paths
  • Multi-Agent Security Operations (MASO): identity, execution control, observability, privileged agent governance, and emergent risk in autonomous agent systems
  • Threat-driven security assessment: penetration testing across large enterprise portfolios, aligned to realistic adversary behaviour
  • Regulatory and standards alignment: ISO 42001, ISO 27001, NIST AI RMF, NIST CSF 2.0, NIST SP 800-218A, EU AI Act, OWASP LLM Top 10
  • Cloud and platform security: architecture review across AWS, Azure, and Databricks, including AI platform-specific security patterns

Career Path

Over 30 years across infrastructure, security engineering, consulting, and leadership, progressing from systems administration through to principal-level cybersecurity roles.

Period | Role | Context
2025 – present | Principal Cybersecurity Officer, AI, Cloud & Platform Cyber Risk | Major financial institution
2022 – 2025 | Head of Cybersecurity Consulting and Penetration Testing | Major financial institution
2013 – 2022 | Lead Security Consultant / Cloud Architecture Forum Chair | Major financial institution
2007 – 2010 | Business Information Security Officer / Solutions Architect | Citi (global banking)
2001 – 2007 | Lead Security Engineer | Egg (one of the UK's first fully online banks)
1999 – 2001 | Network Manager | Botswana Telecommunications Corporation (built a national ISP from scratch)
1992 – 1998 | UNIX & Applications Systems Administrator | Diplomatic IT service

Education & Certifications

BSc (Hons) Open, Information Technology, The Open University (2003–2007)

  • CISSP (Certified Information Systems Security Professional)
  • CCSP (Certified Cloud Security Professional)
  • Microsoft Certified: Azure Fundamentals
  • AWS: Generative AI Applications with Amazon Bedrock
  • Practical Introduction to Quantum-Safe Cryptography

Feedback

Comments, thoughts, and constructive criticism are welcome. If you have feedback on the framework, the site, or anything else, please get in touch.

Email: feedback@airuntimesecurity.io

AI Assistance Disclosure

This framework was written with AI assistance (Claude and ChatGPT) for drafting, structuring, and research synthesis. Architecture, control design, risk analysis, and editorial judgment are the author's.

This is a personal project. It is not affiliated with, endorsed by, or representative of any employer, organisation, or other entity.