The European Union has released ethics guidelines for the trustworthy development of artificial intelligence (AI).
In June of last year, the European Commission set up a High-Level Expert Group on AI, which has jointly written the new guidelines.
The aim of the new guidelines is to promote AI that everyone can trust. Trustworthy AI has three components, which must be met throughout a system's entire lifecycle.
First, the AI system should comply with all the applicable laws and regulations.
Second, it should ensure adherence to ethical principles and values.
Third, it should be robust, both from a technical and social perspective (because an AI system can cause unintentional harm even when it is built with good intentions).
In short, an AI system should be lawful, ethical and robust.
The High-Level Expert Group on AI has detailed seven key requirements in a document that every individual or company developing AI will need to follow. Briefly, they are:
- AI systems should support human agency and fundamental rights, and not reduce, limit, or misguide human autonomy.
- The algorithms used in the AI system should be secure, reliable, and robust enough to handle errors during all phases of the system's lifecycle.
- Users should have full control over their data, and that data should not be used to harm them.
- AI systems should exhibit transparency.
- The systems developed should account for the full range of human abilities, skills, and requirements.
- AI systems should consider societal and environmental well-being, contributing to positive social change and ecological responsibility.
- Mechanisms should be in place to ensure responsibility and accountability for AI systems and their outcomes.
The European Commission will launch a pilot phase this summer. Companies and public administrations can sign up to the European AI Alliance to be notified when the pilot begins.
In 2020, the expert group will evaluate the results and propose the next steps.