Know and Manage Your AI Risks
We assist your organization with AI risk analysis and management, protecting your organization and monitoring its compliance with AI regulation:
- Workshops and courses: information and practical preparation for the regulation.
- Review and development of processes that are in line with the regulation.
- Selection of AI methods that support AI systems in line with the regulation.
The EU has one of the most mature markets for digital services and products, including AI-based services. Delivering digital services within or into the EU is regulated, both in individual countries and, increasingly, at the EU level.
In general, most regulation revolves around protecting EU citizens, while still allowing companies to deliver value by creating services or products for both citizens and companies.
There are two core regulations to take into consideration: any company supplying services in the EU must comply with the General Data Protection Regulation (GDPR), and soon companies must also comply with the Harmonized Rules on Artificial Intelligence (the AI Act). In addition, there may be other regulations depending on the nature of the services offered.
The ability to comply with the two core regulations mentioned above, as well as any additional ones, is affected by three main areas: technology, process, and legal counsel.
Failure to comply with the required regulations may result in a ban on the service and/or a significant fine.
All three areas can either be integrated into the existing organization and its delivery capabilities, or added as an extra layer on top of the existing organization and its product portfolio.
It is an advantage to choose the correct technological approach from the start, applying the right technology or method to the right problem, so that compliance can be achieved with the least additional effort. Conversely, choosing a technological solution that does not supply the correct data and transparency can potentially shut down any effort to supply services or products in the EU.
The process under which a product or service is developed and delivered is also regulated. Making sure that your company has the correct processes, in conjunction with the correct technological solution, therefore becomes paramount.
Everything a company does eventually falls under some legal regime. Competent and correct legal counsel, in conjunction with the correct choice of technology and organization, is the most important effort any company can put into the development and delivery of digital services and products in the EU.
What to expect, and how to move forward and comply with the EU regulations.
Europe fit for the Digital Age
Commission proposes new rules and actions for excellence and trust in Artificial Intelligence.
The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI.
They follow a risk-based approach:
High-risk: AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training, that may determine the access to education and professional course of someone's life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
- Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.
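To make the logging and traceability obligation above more concrete, the following is a minimal, illustrative sketch (not legal advice, and not a complete compliance solution) of how an AI system might record each decision for later audit. The function and field names are hypothetical; hashing the input illustrates one way to keep a traceable record without storing raw personal data in the log itself.

```python
import hashlib
import json
import datetime

def log_prediction(record_store, model_version, features, prediction):
    """Append one traceability record for an AI-system decision.

    The input features are hashed (rather than stored verbatim) so the
    log can link a result to its input without retaining personal data.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    record_store.append(entry)
    return entry

# Hypothetical example: one credit-scoring decision is logged.
audit_log = []
entry = log_prediction(
    audit_log, "credit-model-1.3", {"income": 52000, "age": 41}, "approved"
)
```

In a real system the record store would be an append-only, access-controlled log rather than an in-memory list, and what must be logged is ultimately determined by the regulation and your legal counsel, not by the code.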
In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.