Establishing the rules for building a trustworthy AI


Robot and AI ethics is becoming an increasingly hot topic, particularly in the surgical domain. The EU has been working on a robot ethics doctrine, and the IEEE is drafting relevant standards within the framework of its Global Initiative, namely IEEE P7000 - Model Process for Addressing Ethical Concerns During System Design, IEEE P7001 - Transparency of Autonomous Systems and IEEE P7007 - Ontological Standard for Ethically Driven Robotics and Automation Systems. The primary tool for achieving a unified regulatory environment is the implementation of international standards. Safety and performance benchmarks should be adapted to the current state of the art of the technology, especially in the medical domain, where standards typically emerge as compulsory directives.
The UN has also published a report on the AI boom, and has drafted a resolution on how AI should help solve the SDG Grand Challenges.
On 9 June 2019, the G20 adopted human-centred AI Principles that draw from the OECD AI Principles.

While most of these will only have a real impact on our field later on, it is our common interest and responsibility to follow the best practices and recommendations.


"Seven essentials for achieving trustworthy AI have been articulated
Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements;
specific assessment lists aim to help verify the application of each of the key requirements:
  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy
  2. Robustness and safety: trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems
  3. Privacy and data governance: citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them
  4. Transparency: the traceability of AI systems should be ensured 
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility
  7. Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes"
European Commission, ref. IP/19/1893
