Committee on Industry, Research and Energy (ITRE)
92nd International session of the European Youth Parliament
MILANO 2021
fairness in AI
The term “fair AI” refers to probabilistic decision support that prevents disparate harm (or benefit) to different subgroups.
Objective of fair AI: mitigating bias and discrimination
Unfairness in AI: the result of biases in the data, biases in the modelling, and inadequate application.
There are measures to prevent discrimination in AI, but more research needs to be done on AI fairness and transparency.
Bias can creep into an AI system at any stage of development. For example:
- incomplete/inaccurate data leading to inaccurate results for underrepresented groups, reinforcing societal bias.
- poorly chosen objective functions in ML models, where data manipulations or misinterpretations may introduce bias;
- limited AI interpretability.
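The disparate outcomes such biases produce can be quantified with a group-fairness metric such as the demographic parity difference: the gap in favourable-outcome rates between subgroups. A minimal Python sketch, using invented data (the function name and all numbers are illustrative, not from any standard library):

```python
# Demographic parity difference: gap in favourable-outcome rates between
# two subgroups. All data below is synthetic and purely illustrative.

def demographic_parity_diff(predictions, groups):
    """Absolute gap in positive-prediction rates between groups 'A' and 'B'."""
    rates = {}
    for g in ("A", "B"):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates["A"] - rates["B"])

# Hypothetical model outputs (1 = favourable decision) for ten applicants.
predictions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(round(demographic_parity_diff(predictions, groups), 2))  # 0.6
```

A gap of 0 would mean both groups receive favourable decisions at the same rate; the larger the gap, the stronger the disparate impact.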
There is a need to ensure transparency in AI while not inhibiting technological development.
Opinion: Anna (LV) on Fairness
Opinion: Julia (PL) on Fairness in AI
Achieving fairness often requires sacrificing other objectives (such as model accuracy);
Regulatory initiatives such as the GDPR push for algorithmic transparency, yet further research is required to reconcile transparent decision support with fair AI.
Conflict Alert:
Even decision support systems trained without knowledge of sensitive attributes can be unfair: non-sensitive attributes can act as proxies (e.g., salary as a proxy for gender, ZIP code for ethnicity, family structure for race or religion).
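The proxy effect can be shown in a few lines of Python. In this sketch the decision rule never sees group membership, yet because residence is segregated in the (entirely synthetic) data, ZIP code carries the group signal and approval rates diverge sharply:

```python
# Sketch: a "blind" decision rule that never sees the sensitive attribute
# can still discriminate when ZIP code acts as a proxy for it.
# All data here is synthetic and purely illustrative.

# Synthetic population of (zip_code, group) pairs: residence is segregated.
people = [("10001", "X")] * 90 + [("10002", "X")] * 10 \
       + [("10001", "Y")] * 10 + [("10002", "Y")] * 90

def approve(zip_code):
    """Decides by ZIP code only, with no access to group membership."""
    return zip_code == "10001"

# Approval rates still differ drastically by group.
for g in ("X", "Y"):
    approved = [approve(z) for z, grp in people if grp == g]
    print(g, sum(approved) / len(approved))  # X 0.9, then Y 0.1
```

Dropping the sensitive column is therefore not enough; fairness has to be checked on outcomes, not on inputs.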
Conflict Alert:
explainable AI
Alexander (DE) on AI Problems
Lucia (SK) on Accountability
Lucia (SK) on Accountability 2.0
Alexander (DE) on Explainable AI
one that produces details or reasons to make its functioning clear or easy to understand;
needed to build public confidence in disruptive technology, to promote safer practices, and to facilitate broader societal adoption;
can be beneficial in healthcare, e.g., in the role of a "virtual assistant";
transparent: data and/or the algorithms are accessible;
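One simple way a system can "produce details or reasons" is to report each feature's contribution to its score. A minimal sketch of such a transparent linear model; the feature names and weights are invented for illustration:

```python
# Sketch: a transparent linear scoring model that explains each decision
# by reporting per-feature contributions. Weights and feature names are
# hypothetical, chosen only to illustrate the idea of explainability.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant):
    """Return the total score plus each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
total, why = score_with_explanation(applicant)
print(round(total, 2))
# List the reasons, strongest influence first.
for feature, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(feature, round(c, 2))
```

A black-box model, by contrast, would return only the total score, leaving the applicant no way to see which factors drove the decision.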
Regulating transparency in AI:
possible high compliance costs, which could compromise the advances brought by AI
BUT
ensuring fairness in AI
Conflict Alert:
Potential problems with implementation:
accuracy or other important qualities may be limited
reduced reliability (explanations might even be misleading)
increasing transparency might make a system more vulnerable to attacks
Conflict Alert:
data regulation
GDPR
General Data Protection Regulation
gives you control over how your data is collected and used;
forces companies to justify everything they do with it;
provides a guideline on what they can and cannot do with personal data;
gives more clarity over the kind of data being used and how companies will use it;
Elona (AL) on Data Regulation
Possible issues include:
data protection
lack of personal privacy
being tracked easily
losing all the data
being hacked easily
Conflict Alert:
Replacing human actors with autonomous agents throws the legal system into disarray. This accountability gap causes problems in three areas: causality, justice, and compensation, especially when an AI makes a decision that causes harm to a person.
Conflict Alert:
approaches to AI regulation
The white paper introduced the EU's proposal for a risk-based approach: a binary distinction between AI applications by risk. AI would be classified into two categories, high-risk and non-high-risk, and the regulations would apply only to high-risk artificial intelligence.
To be deemed high-risk, an AI application must meet both of the following criteria:
the AI application must be used in such a manner that significant risks are likely to arise;
AI application must be employed in a sector where significant risks can be expected to occur;
critiques by Member States
The Commission proposes to treat remote biometric identification (such as the controversial facial recognition technologies) separately from other AI technologies.
The Commission hopes that over €20 billion in total, in both public and private investment, will be invested in AI per year over the next decade.
The Commission also plans to promote more networks and co-ordination in AI research and investment. It plans to facilitate the creation of "excellence and testing centres", enabling Europe to provide world-leading master's programmes in AI.
Private: a proposal to ensure that AI will be accessible for SMEs, particularly by supporting Digital Innovation Hubs;
Public: an “Adopt AI programme” to promote public sector AI procurement, which will particularly prioritise sectors such as healthcare;
Lily (IE) on Approaches to AI Regulations
Matviy (UA) on Conflicts on Approaches
The white paper's regulatory approach is very black and white: many AI risks would be left entirely uncovered by the proposed new regulations, even though some of the five requirements could sensibly be applied to them.
Conflict Alert:
The proposed regulatory regime risks being too burdensome for business.
Some concepts need to be clarified.
New initiatives are mentioned without clarifying funding for already existing organisations.
Potential issue with overregulation
Conflict Alert:
created by Marichka Nadverniuk (UA), MTM at Milano 2021 — 92nd International Session of the European Youth Parliament
accountability in AI
Kseniya (BY) on Accountability in AI
There are several initiatives proposing both practical and ethical guidelines for AI, such as the Asilomar AI Principles, the Barcelona Declaration, the EESC opinion, and others.
The "black box" nature of AI makes it almost impossible to determine how or why an AI makes the decisions it does; this compounds the complexity of creating an "unbiased" AI.
Given the speed of AI adoption, it will become ever harder to find "the one to blame" in case of emergencies. With each year, AI-driven decisions grow more complex and harder to trace.
Conflict Alert: