
Ensure the artificial intelligence system performs at an acceptable level of accuracy, robustness, and cybersecurity.
CONTROL ID
15024
CONTROL TYPE
Process or Activity
CLASSIFICATION
Preventive

SUPPORTING AND SUPPORTED CONTROLS
This Control directly supports the implied Control(s):
  • Establish, implement, and maintain an artificial intelligence system., CC ID: 14943

There are no implementation support Controls.


SELECTED AUTHORITY DOCUMENTS COMPLIED WITH
  • High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. (Article 15 1., Proposal for a Regulation of The European Parliament and of The Council Laying Down Harmonized Rules On Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts)
  • changes in the environments to which the AI is exposed, the learning and actions, decisions and outputs of the AI system, as well as its impacts on stakeholders; (§ 4.3 ¶ 6 Bullet 4, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • AI systems can inadvertently circumvent existing governance controls. Management should ensure that AI systems explicitly comply with existing governance controls. (§ 5.2.3 ¶ 3 Bullet 4, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • extending compliance processes to account for the speed, scope or sophistication of the AI system (e.g. to increase the level or frequency of monitoring); (§ 6.6.2 ¶ 2 Bullet 1, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • the use of AI to monitor other AI systems and the extra monitoring or alerting that can be required. (§ 6.6.2 ¶ 2 Bullet 5, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • Decision-making capability. Decision-makers should be adequately skilled and trained for the decisions for which they are responsible. Controls should be implemented to ensure AI systems are adequate to the task they have been set. See ISO/IEC TR 24028. (§ 6.3 ¶ 6 Bullet 3, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented. (MEASURE 2.5, Artificial Intelligence Risk Management Framework, NIST AI 100-1)
  • The AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Sa… (MEASURE 2.6, Artificial Intelligence Risk Management Framework, NIST AI 100-1)
  • Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented. (MEASURE 4.2, Artificial Intelligence Risk Management Framework, NIST AI 100-1)
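The measurement requirements cited above (for example, NIST AI RMF MEASURE 2.5 and 2.6) imply a documented, repeatable check that measured performance meets the thresholds set for the deployment context. A minimal, hypothetical sketch of such a pre-deployment gate is shown below; the metric names, threshold values, and function names are illustrative assumptions, not part of any cited framework:

```python
# Hypothetical pre-deployment gate: compare measured accuracy and robustness
# against documented risk-tolerance thresholds. All names and numbers here
# are illustrative, not taken from the EU AI Act or NIST AI 100-1.

from dataclasses import dataclass


@dataclass
class EvaluationResult:
    accuracy: float    # e.g. fraction correct on a held-out test set
    robustness: float  # e.g. accuracy under perturbed or adversarial inputs


def passes_performance_gate(result: EvaluationResult,
                            min_accuracy: float = 0.95,
                            min_robustness: float = 0.90) -> bool:
    """Return True only if every measured value meets its documented threshold."""
    return (result.accuracy >= min_accuracy
            and result.robustness >= min_robustness)


# Example: a system that meets the accuracy threshold but fails on
# robustness would be blocked from deployment by this gate.
candidate = EvaluationResult(accuracy=0.97, robustness=0.85)
print(passes_performance_gate(candidate))  # a failing robustness score blocks release
```

In practice such a gate would run as part of release or continuous-monitoring tooling, with the thresholds and evaluation conditions recorded alongside the documented limitations of generalizability that MEASURE 2.5 calls for.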