POLICY BRIEF  

15 May 2024


Arianna Rossi

AI literacy in the AI Act

BACKGROUND AND FIELD OF APPLICATION

The AI Act introduces the notion of AI literacy and the obligation for providers and deployers of AI systems to devise appropriate measures that ensure a sufficient level of understanding of the functioning, potentialities, limits and risks of AI. Even though AI is a long-standing field, most of the research on how to develop the AI literacy of non-experts has been published in the last few years, and discussions on how best to foster it are ongoing, not least because it must be founded on other kinds of competences, such as digital literacy.[1]

The risks of not ensuring the AI literacy of providers and deployers are many: for instance, misinterpretation of results, inability to detect errors, misuse, incapacity to use AI systems[2] and overreliance on them.[3] The stakes are especially high for systems that support decision-making in domains where the impacts on individuals and society can be significant (e.g., healthcare, justice and education).

[1] Duri Long and Brian Magerko, ‘What Is AI Literacy? Competencies and Design Considerations’, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (ACM 2020) 1 <https://dl.acm.org/doi/10.1145/3313831.3376727> accessed 7 March 2024.

[2] Ibid. 1

[3] Zana Buçinca, Maja Barbara Malaya and Krzysztof Z Gajos, ‘To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-Assisted Decision-Making’ (2021) 5 Proceedings of the ACM on Human-Computer Interaction 188:1, 188:2 <https://doi.org/10.1145/3449287> accessed 13 March 2024.

HIGHLIGHTS

  • AI literacy is defined as the “skills, knowledge and understanding that allows providers, users and affected persons […] to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it may cause.” (Article 3(56)).
  • The AI Act establishes that providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy among their staff and other persons who operate or use AI systems on their behalf (Article 4(b)).
  • AI literacy covers the whole AI development and deployment lifecycle: it encompasses knowing how to apply technical elements during a system's development, how to devise appropriate measures during its use, how to interpret its output and, for affected persons, how decisions that impact them are taken (Recitals 9b, 20 and 91). It therefore concerns developers, deployers and affected persons at large.
  • There are no one-size-fits-all rules; rather, such measures should account for the technical knowledge, experience, education and training of the persons concerned, as well as for the context in which the AI system is used and the persons on whom it will be used.
  • AI literacy is especially important for deployers of high-risk AI systems, in particular for those in charge of implementing the instructions for use and human oversight (Recital 58).
  • AI literacy measures can be developed within voluntary codes of conduct (Article 95) and promoted by the Commission with the support of the European Artificial Intelligence Board (Article 66(f)).

IMPACT ON PROJECT

Since AI literacy encompasses an understanding of how AI models function, good practice entails accurately documenting the decisions taken at the development stage for later use. This is especially important for high-risk AI systems such as medical devices, whose requirements concerning transparency, documentation, data governance and human oversight are based on information gathered or generated during development; this information can then enable deployers to make informed, correct use of the system.

Moreover, even though it can be assumed that the researchers working on the development of the AI models have a clear understanding of their operation, awareness of the possible risks and harms that their development and putting into service can engender should also be ensured. The need to promote sufficient levels of AI literacy in this respect may be understood as a researchers' duty under the principles of reliability, honesty, respect and accountability of the European Code of Conduct for Research Integrity and, whenever the AI systems may foreseeably be deployed on people, as a good practice of scientific research with human subjects[5] based on four cornerstones: i) respect for autonomy, privacy and dignity; ii) scientific integrity; iii) social responsibility; and iv) maximisation of benefits and minimisation of harms.