NIST AI RMF

Why AI Supply Chain Risk Is the Compliance Gap Most Teams Miss

AuditPulse Intelligence • March 2026 • 6 min read

The Invisible Attack Surface

Most AI teams spend significant effort securing their own models and infrastructure. Far fewer apply the same rigour to the third-party components their AI systems depend on.

This is the AI supply chain problem. And it is one of the most commonly missed compliance gaps in our diagnostics.

What AI Supply Chain Risk Actually Means

Your AI system does not exist in isolation. It depends on:

  • Foundation models from third-party providers such as OpenAI, Anthropic, or Google
  • Training data sourced from external datasets or data brokers
  • MLOps infrastructure from cloud providers and specialist vendors
  • Vector databases and embedding services
  • Evaluation and monitoring tools

Each of these dependencies represents a risk vector. A compromise, failure, or policy change at any point in this chain can affect your AI system's behaviour, your data security posture, and your regulatory compliance.

What NIST AI RMF GOVERN 6.1 Requires

NIST AI RMF GOVERN 6.1 explicitly addresses AI supply chain risk. It requires organisations to:

  • Identify and document all third-party AI components and dependencies
  • Assess the risk profile of each vendor and component
  • Establish contractual protections around AI-specific obligations
  • Monitor third-party vendors for changes that could affect your risk posture

This is not optional guidance. For organisations using NIST AI RMF as their governance framework - which is increasingly required by enterprise procurement teams - GOVERN 6.1 is a documented requirement.

What the EU AI Act Adds

The EU AI Act extends supply chain obligations under Article 28, which addresses the obligations of deployers using third-party AI components. If you deploy a high-risk AI system that incorporates third-party models or data, you bear responsibility for ensuring those components meet the Act's requirements.

You cannot outsource compliance by outsourcing the model.

The Three Gaps We See Most Often

No vendor attestation process. Most teams have not asked their AI infrastructure vendors for security attestations, compliance documentation, or contractual AI-specific protections. They assume the vendor's SOC 2 certification covers AI-specific risks. It typically does not.

No dependency inventory. Teams cannot produce a complete list of the third-party AI components in their production stack. This makes risk assessment impossible and audit responses extremely difficult.

No change monitoring. Vendors update model weights, change data retention policies, and modify API behaviour without notification. Teams that do not monitor these changes may find their compliance posture affected by decisions made outside their control.

The Practical Fix

Building a defensible AI supply chain risk posture does not require months of work. The minimum viable approach:

Create a dependency register listing every third-party AI component in your production stack. For each entry, document the vendor, the component, the data it accesses, and the compliance certifications the vendor holds.
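A dependency register does not need specialist tooling to start; even a small structured file supports audit queries. Here is a minimal sketch in Python, with hypothetical vendors, components, and certification names used purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    """One third-party AI component in the production stack."""
    vendor: str
    component: str
    data_accessed: str
    certifications: list[str]

# Hypothetical example entries -- substitute your real stack
register = [
    Dependency("Anthropic", "Claude API", "user prompts, generated text", ["SOC 2 Type II"]),
    Dependency("Pinecone", "vector database", "document embeddings", ["SOC 2 Type II"]),
]

# A register makes risk questions queryable, e.g. which components
# lack a certification a procurement team has asked about:
missing = [d.component for d in register if "ISO 27001" not in d.certifications]
```

Even this flat structure answers the two questions auditors ask first: what is in the stack, and what documentation backs each entry.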

Request vendor attestation documents from your primary AI infrastructure providers. Major providers including Anthropic, OpenAI, and the cloud platforms have security and compliance documentation available.

Add AI-specific clauses to vendor contracts covering data handling, model change notification, and incident response obligations.

Establish a quarterly review cadence to check for changes in vendor compliance posture.
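The review cadence can also be enforced mechanically rather than by memory. A sketch, again with hypothetical vendor names and dates, that flags any vendor whose last compliance review is more than a quarter old:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=91)  # roughly one quarter

# Hypothetical last-review dates per vendor
last_reviewed = {
    "Anthropic": date(2026, 1, 15),
    "Pinecone": date(2025, 9, 1),
}

def overdue(today: date) -> list[str]:
    """Vendors whose last review is older than one quarter as of `today`."""
    return [v for v, d in last_reviewed.items() if today - d > REVIEW_INTERVAL]
```

Wired into a scheduled job or CI check, this turns "we review quarterly" from a policy statement into something you can evidence.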

This is the kind of documentation that enterprise procurement teams and regulators will ask for. Building it proactively takes a few days. Building it under pressure takes months.

Regulatory Exposure Is Hidden In Your Stack.

Identify critical compliance gaps in your AI architecture before enterprise procurement does.

Run Your Free Diagnostic