Ethics of AI
March 16, 2021
As technology advances, artificial intelligence (AI) is all around us, closer to our daily lives than we often realize. As algorithms increasingly influence and guide the world around us, ethical issues are a matter of much debate: privacy, bias in algorithms, human-machine interaction, laws and practices governing the development and exploitation of AI, as well as the often-discussed scenario of robots taking over the world and destroying humanity. This is largely because, at present, there is no law or code of conduct that clearly defines the values or responsibilities associated with AI and its development.
On 8 April 2019, the High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence. This followed the publication of the guidelines' first draft in December 2018 on which more than 500 comments were received through an open consultation.
According to the Guidelines, trustworthy AI should be:
1. lawful - respecting all applicable laws and regulations
2. ethical - respecting ethical principles and values
3. robust - from a technical perspective, while taking into account its social environment
The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy, with a specific assessment list to help verify the application of each:
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental well-being
7. Accountability
Tech giants like Microsoft and Google have also announced ethical frameworks for the use of AI within their own organizations.
Looking ahead, likely developments include:
1. Increasing cooperation among public, non-governmental and international organizations to formulate a common framework of ethics and standards for AI invention and development that protects human rights, safety and security.
2. The emergence of new businesses or organizations dedicated to monitoring AI operations.