Establish, implement, and maintain an artificial intelligence system.


CONTROL ID
14943
CONTROL TYPE
Systems Design, Build, and Implementation
CLASSIFICATION
Preventive

SUPPORTING AND SUPPORTED CONTROLS

This Control directly supports the implied Control(s):
  • Operational management, CC ID: 00805

This Control has the following implementation support Control(s):
  • Refrain from notifying users when images, videos, or audio have been artificially generated or manipulated if use of the artificial intelligence system is authorized by law., CC ID: 15051
  • Establish, implement, and maintain a post-market monitoring system., CC ID: 15050
  • Include mitigation measures to address biased output during the development of artificial intelligence systems., CC ID: 15047
  • Limit artificial intelligence systems authorizations to the time period until conformity assessment procedures are complete., CC ID: 15043
  • Terminate authorizations for artificial intelligence systems when conformity assessment procedures are complete., CC ID: 15042
  • Authorize artificial intelligence systems to be put into service for exceptional reasons while conformity assessment procedures are being conducted., CC ID: 15039
  • Assess the trustworthiness of artificial intelligence systems., CC ID: 16319
  • Authorize artificial intelligence systems to be placed on the market for exceptional reasons while conformity assessment procedures are being conducted., CC ID: 15037
  • Withdraw authorizations that are unjustified., CC ID: 15035
  • Ensure the transport conditions for artificial intelligence systems refrain from compromising compliance., CC ID: 15031
  • Ensure the storage conditions for artificial intelligence systems refrain from compromising compliance., CC ID: 15030
  • Prohibit artificial intelligence systems from being placed on the market when they are not in compliance with the requirements., CC ID: 15029
  • Ensure the artificial intelligence system performs at an acceptable level of accuracy, robustness, and cybersecurity., CC ID: 15024
  • Implement an acceptable level of accuracy, robustness, and cybersecurity in the development of artificial intelligence systems., CC ID: 15022
  • Take into account the nature of the situation when determining the possibility of using 'real-time' remote biometric identification systems in publicly accessible spaces for law enforcement., CC ID: 15020
  • Notify users when images, videos, or audio on the artificial intelligence system have been artificially generated or manipulated., CC ID: 15019
  • Refrain from notifying users of artificial intelligence systems using biometric categorization for law enforcement., CC ID: 15017
  • Use a 'real-time' remote biometric identification system for law enforcement in publicly accessible spaces absent authorization under defined conditions., CC ID: 15016
  • Notify users when they are using an emotion recognition system or biometric categorization system., CC ID: 15015
  • Receive prior authorization for the use of a 'real-time' remote biometric identification system by law enforcement in publicly accessible spaces., CC ID: 15014
  • Prohibit the use of artificial intelligence systems that deploy subliminal techniques., CC ID: 15013
  • Prohibit artificial intelligence systems that deploy subliminal techniques from being placed on the market., CC ID: 15012
  • Prohibit the use of artificial intelligence systems that use social scores for unfavorable treatment., CC ID: 15011
  • Prohibit artificial intelligence systems that use social scores for unfavorable treatment from being placed on the market., CC ID: 15010
  • Prohibit the use of artificial intelligence systems that evaluate or classify the trustworthiness of individuals., CC ID: 15009
  • Prohibit artificial intelligence systems that evaluate or classify the trustworthiness of individuals from being placed on the market., CC ID: 15008
  • Prohibit the use of artificial intelligence systems that exploit the vulnerabilities of a specific group of persons., CC ID: 15007
  • Prohibit artificial intelligence systems that exploit vulnerabilities of a specific group of persons from being placed on the market., CC ID: 15006
  • Refrain from making a decision based on system output unless verified by at least two natural persons., CC ID: 15004
  • Establish, implement, and maintain human oversight over artificial intelligence systems., CC ID: 15003
  • Enable users to interpret the artificial intelligence system's output and use., CC ID: 15002
  • Develop artificial intelligence systems involving the training of models with data sets that meet the quality criteria., CC ID: 14996
  • Withdraw the technical documentation assessment certificate when the artificial intelligence system is not in compliance with requirements., CC ID: 15099
  • Define a high-risk artificial intelligence system., CC ID: 14959
  • Take into account the consequences for the rights and freedoms of persons when using 'real-time' remote biometric identification systems for law enforcement., CC ID: 14957
  • Allow the use of 'real-time' remote biometric identification systems for law enforcement under defined conditions., CC ID: 14955
  • Prohibit the use of 'real-time' remote biometric identification systems for law enforcement., CC ID: 14953
  • Prohibit the use of artificial intelligence systems under defined conditions., CC ID: 14951


SELECTED AUTHORITY DOCUMENTS COMPLIED WITH

  • The governing body's accountability should be established across all aspects of intended or actual use of AI and in a manner that is sufficient to ensure the intended outcomes, notably: (§ 4.3 ¶ 6, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • As Figure 1 shows, the AI ecosystem is broad and includes a spectrum of different technologies. A more detailed version of this figure is available in ISO/IEC 22989:—, Figure 6, where further details on ML elements and computational resources are described. If AI techniques are used, some of the o… (§ 5.3 ¶ 2, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • The speed of technological innovation and ever-changing legal requirements should encourage the organization to actively maintain a set of principles for its use of AI and ensure that they remain appropriate for the organization's use of AI. (§ 6.1 ¶ 9, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • Align the use of AI to the organization's culture and values. Decisions proposed by an AI system should take into account organizational policies, expectations (including impact of use) and ethics. (§ 5.5 ¶ 1 Bullet 6, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • the extent of the use of AI by the organization; (§ 6.6.1 ¶ 4 Bullet 3, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • Ensure that problem solving takes due account of context. An organization needs to ensure that contextual elements, essential to understanding behaviour, values and culture, are not missing, or omitted from the data that it is using to solve problems. (§ 5.5 ¶ 1 Bullet 7, ISO/IEC 38507:2022, Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations)
  • The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders). (MAP 2.1, Artificial Intelligence Risk Management Framework, NIST AI 100-1)
  • The organization's mission and relevant goals for AI technology are understood and documented. (MAP 1.3, Artificial Intelligence Risk Management Framework, NIST AI 100-1)
  • Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors. (MANAGE 4.2, Artificial Intelligence Risk Management Framework, NIST AI 100-1)