Japan’s National Cybersecurity Office (NCO, formerly NISC) has joined the United States, the United Kingdom, Australia, Canada, and other partner agencies in endorsing international guidance on supply chain risks in artificial intelligence (AI) and machine learning. The move signals that AI security is no longer viewed as a narrow technical problem for individual vendors or governments, but as a shared international challenge requiring coordinated standards and oversight. The guidance emphasizes that AI systems depend on a far more complex supply chain than conventional software, because they are built not only on code and infrastructure, but also on data, pretrained models, training environments, and third-party services. As organizations adopt AI to improve efficiency and decision-making, their exposure to vulnerabilities across this chain grows, raising the risk that attackers could undermine the confidentiality, integrity, or availability of critical systems.
A central message of the guidance is that organizations must manage AI supply chain risk across the full product lifecycle rather than treating it as a one-time procurement issue. To do so, they should identify all relevant suppliers and components, demand visibility through tools such as software bills of materials (SBOMs) and AI bills of materials (AIBOMs), and update their governance frameworks to reflect AI-specific attack surfaces. Continuous assessment, threat modeling, vulnerability mapping, and dedicated incident response planning are presented as essential practices. The guidance also stresses the importance of due diligence when selecting AI vendors, early clarification of contractual responsibilities under shared-responsibility models, and careful review of how vendors access, use, store, and possibly transfer organizational data. Internal preparation matters as well: staff involved in supply chain management should receive AI-focused security training, and organizations should establish clear communication channels for reporting and responding to emerging threats.
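To make the AIBOM idea concrete, the sketch below shows what a minimal machine-readable inventory of an AI system's components might look like. The field names loosely follow the spirit of SBOM formats such as CycloneDX and SPDX, but the exact schema, names, and values here are hypothetical, not a format defined by the guidance itself:

```python
import json

# Minimal, illustrative AI bill of materials (AIBOM) record.
# Every name and value below is a hypothetical example.
aibom = {
    "component": "fraud-detection-service",      # the AI system being described
    "model": {
        "name": "example-classifier",            # pretrained model in use
        "version": "2.1.0",
        "source": "https://models.example.com",  # provenance of the weights
        "sha256": "d2f1...e9ab",                 # integrity digest (truncated)
    },
    "datasets": [
        {"name": "transactions-2023", "origin": "internal", "license": "proprietary"},
    ],
    "services": [
        {"name": "embedding-api", "vendor": "ExampleVendor", "data_shared": True},
    ],
}

def missing_fields(bom: dict) -> list[str]:
    """Return the top-level sections a reviewer would expect an AIBOM to cover."""
    required = ("component", "model", "datasets", "services")
    return [key for key in required if key not in bom]

print(missing_fields(aibom))            # -> []
print(json.dumps(aibom, indent=2))      # serialized form shared with auditors
```

Even a simple record like this captures the point of the guidance: data, models, and third-party services are all components whose provenance must be visible, not just the application code.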
The guidance then organizes the most significant risks across five major components of the AI supply chain: data, machine learning models, AI systems, infrastructure and hardware, and third parties. Data-related threats include low-quality or biased datasets, data poisoning, and leakage of sensitive training information; suggested countermeasures include quarantining external data, sanitizing inputs, tracking data provenance, benchmarking across datasets and models, and applying privacy-preserving techniques. Model-related risks include malicious code embedded during serialization, model poisoning, and malware hidden in weights or metadata, which can be mitigated through safer file formats, trusted sourcing, reproducible builds, adversarial training, pruning, and ongoing monitoring for drift and abnormal behavior. For AI systems and infrastructure, the guidance recommends integrity checks and digital signatures, secure deployment based on the principle of least privilege, signed drivers, verified boot, and network segmentation. In the area of third-party risk, it calls for rigorous assessment, continuous monitoring, and contract terms that clearly limit data use, specify storage locations, and preserve audit rights. Overall, the guidance frames supply chain transparency as a foundational requirement for trustworthy AI governance.
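The integrity checks recommended for models and other artifacts can be as simple as verifying a cryptographic digest published by the supplier through a separate, trusted channel. The sketch below shows one minimal way to do this with Python's standard library; the file name and digest source are illustrative assumptions, and a production setup would pair this with a verified digital signature over the digest itself:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Check a downloaded model or dataset artifact against a digest
    published out-of-band by the supplier (e.g. in a signed release note)."""
    return sha256_of(path) == expected_digest

# Usage sketch: create a stand-in artifact and verify it against its own digest.
artifact = Path("model.bin")                    # hypothetical model file
artifact.write_bytes(b"example weights")
published = sha256_of(artifact)                 # in practice, comes from the supplier

print(verify_artifact(artifact, published))     # True: artifact matches
print(verify_artifact(artifact, "0" * 64))      # False: tampered or wrong file
```

A mismatch here is a signal to quarantine the artifact rather than load it, which is especially important for serialized model formats that can execute code on deserialization.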
