
Standards

The framework stack we deliver against.

We do not invent frameworks. We deliver against the standards your regulators, auditors, and board already recognise. Below is the stack used in every Inline Code engagement, and the optional layers we add for clients with offshore exposure.

| Layer | Standard | Purpose | Jurisdiction |
| --- | --- | --- | --- |
| AI risk taxonomy | NIST AI RMF 1.0 and Generative AI Profile (NIST AI 600-1) | Organising backbone for all assessments and reports. Function-based structure: govern, map, measure, manage. | United States, internationally adopted |
| Management system | ISO/IEC 42001:2023 | Structure for ongoing AI management system implementation. The certifiable management standard for AI. | International (ISO) |
| Privacy | Australian Privacy Principles, Privacy Act 1988 | Data handling for AI inputs and outputs. APPs 6, 8, and 11 are the heart of AI data governance. | Australia (OAIC) |
| Prudential, information security | APRA CPS 234 | Information security obligations for regulated financial entities. Mandatory. | Australia (APRA-regulated) |
| Prudential, operational risk | APRA CPS 230 | Operational risk management, including material service providers. AI tooling falls in scope. | Australia (APRA-regulated) |
| Application threat coverage | OWASP Top 10 for LLM Applications | Technical control selection at the application layer: prompt injection, sensitive information disclosure, supply chain. | International (OWASP) |
| Adversarial threat modelling | MITRE ATLAS | Attacker tactics and techniques targeting AI systems. The MITRE ATT&CK equivalent for ML. | International (MITRE) |
| Foundational endpoint and identity | ASD Essential Eight | Baseline controls where AI deployment touches identity, endpoint, or admin tooling. | Australia (ASD) |

Optional layers

Add-on frameworks for offshore exposure.

For clients with users, employees, or operations outside Australia, additional regimes are layered on top of the core stack.

| Layer | Standard | Purpose |
| --- | --- | --- |
| Risk classification | EU AI Act | Risk classification of AI systems. Applies to organisations with EU-resident users or operations. |
| Consumer protection | Colorado AI Act | Consequential decision systems. Applies to organisations with Colorado-resident users. |
| Employment AI | NYC Local Law 144 | Automated employment decision tools. Applies to organisations with New York City employees. |
| Service control attestation | SOC 2 Type II | Service organisation controls for AI providers and AI-enabled SaaS. Increasingly expected by enterprise buyers. |

Where formal legal review is required (for example contracts, regulator correspondence, or statutory interpretation), we flag this and continue producing the technical artefact in parallel. We do not provide legal advice.

Get started

Bring AI risk under board oversight in two weeks.

A thirty-minute discovery call costs nothing. We confirm fit, scope, and timing, then issue a fixed-fee statement of work within two business days.