EU's 3 Pillars & 7 Requirements for Trustworthy AI

Build trustworthy AI that is lawful (complying with applicable laws and regulations), ethical (upholding shared values), and robust (technically and socially resilient); verify it against the 7 key requirements using the ALTAI checklist for developers.

Core Pillars of Trustworthy AI

Trustworthy AI requires three interdependent properties: lawful (full compliance with applicable laws and regulations), ethical (alignment with principles like human agency, privacy, and societal well-being), and robust (technical reliability including accuracy, safety, and resilience, plus adaptation to social and technical environments). Together, these properties ensure AI systems deliver benefits without unintended harm. The guidelines were developed by the High-Level Expert Group on AI (AI HLEG); a December 2018 draft drew over 500 public comments, and the final version was published on April 8, 2019.

7 Key Requirements and Verification Process

AI systems must satisfy 7 specific requirements to be trustworthy: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental well-being; and (7) accountability. These are operationalized through a dedicated assessment list for practical verification, which guides implementation across the AI lifecycle. A companion Definition of Artificial Intelligence clarifies the scope for applying the guidelines. The process included stakeholder piloting from June 26 to December 1, 2019, incorporating feedback to refine usability for real-world checks.

ALTAI: Actionable Checklist for Builders

The piloted assessment evolved into ALTAI (Assessment List for Trustworthy AI), released July 2020 as a self-assessment tool translating guidelines into practice. Developers and deployers use this dynamic checklist—available as a web prototype and PDF—to systematically address requirements, mitigating risks like bias or failure in production. Applying ALTAI upfront prevents costly rework and builds user trust in AI-powered products.
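The checklist workflow described above can be sketched as a simple data structure. In this minimal sketch, the seven requirement names come from the guidelines themselves, but the ChecklistItem class and coverage scoring are hypothetical illustrations, not the official ALTAI web tool or its question set.

```python
from dataclasses import dataclass

# The seven requirement names come from the EU guidelines; everything else
# in this sketch is a hypothetical illustration, not the official ALTAI tool.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class ChecklistItem:
    requirement: str          # one of REQUIREMENTS
    question: str             # self-assessment prompt
    addressed: bool = False   # has the team mitigated this risk?

def coverage(items: list[ChecklistItem]) -> dict[str, float]:
    """Fraction of addressed items per requirement (0.0 if none listed)."""
    report = {}
    for req in REQUIREMENTS:
        relevant = [i for i in items if i.requirement == req]
        done = sum(i.addressed for i in relevant)
        report[req] = done / len(relevant) if relevant else 0.0
    return report

# Example: two transparency items, one addressed so far.
items = [
    ChecklistItem("Transparency",
                  "Are model decisions explainable to end users?", True),
    ChecklistItem("Transparency",
                  "Is training data provenance documented?", False),
]
print(coverage(items)["Transparency"])  # → 0.5
```

Tracking per-requirement coverage like this makes gaps visible early in development, which is the point of applying ALTAI upfront rather than after deployment.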

