The 9 principles (in English)

1. Right to Privacy

AI tools must guarantee data security and safeguard the privacy of students, teachers and third parties involved in the education system. Data collection, especially in adaptive learning systems, should not lead to the unnecessary accumulation of personal information. Online monitoring software should only be used with clear safeguards against misuse in place. 

Key question: How does this AI tool prevent the collection of personal data that is not essential to the learning objectives of the educational institution, and does it give users the option to opt out of data collection entirely?

2. Non-discrimination

AI must not cause or facilitate discrimination or hate speech. An algorithm trained on a dataset dominated by white men, for example, is likely to treat responses from people of other demographic groups as ‘less correct’. AI must be trained on diverse datasets.

Key question: What measures have been taken to ensure that this AI tool is free from biases based on gender, ethnicity, or other identity characteristics, and has there been an external audit for bias?
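
As a concrete illustration of the kind of check an external bias audit might run, the minimal Python sketch below compares a model's error rate across demographic groups. The data, group names, and threshold are purely hypothetical, and a real audit would use far larger samples and many more metrics.

from collections import defaultdict

def error_rate_by_group(records):
    # records: (demographic group, correct outcome, model's outcome) tuples
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        errors[group] += (predicted != truth)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit sample of automated grading decisions
sample = [
    ("group_a", "pass", "pass"), ("group_a", "fail", "fail"),
    ("group_a", "pass", "pass"), ("group_a", "pass", "fail"),
    ("group_b", "pass", "fail"), ("group_b", "fail", "fail"),
    ("group_b", "pass", "fail"), ("group_b", "pass", "pass"),
]
rates = error_rate_by_group(sample)
print(rates)  # {'group_a': 0.25, 'group_b': 0.5}

# A large gap in error rates between groups is a signal worth investigating.
if max(rates.values()) - min(rates.values()) > 0.1:  # hypothetical threshold
    print("Warning: error rates differ substantially across groups.")

In practice, an audit would also look at false positives and false negatives separately, and at combinations of identity characteristics rather than single groups in isolation.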

3. Transparency and Explainability

Users of AI tools should be able to understand, in clear terms, how and why a particular decision or recommendation is made by the application. This is especially important for fraud detection systems and adaptive learning tools. 

Key question: To what extent can the decision-making process of this AI application be explained, and are the models and data used transparent to teachers and other education personnel, students, and parents?

4. Accountability

It should be clear who is responsible for the outcomes of AI systems in educational institutions. Schools need to know who is liable in the event of undesirable or discriminatory results.

Key question: Who is legally accountable for the outcomes generated by this AI tool, and how can schools or individuals challenge those outcomes?

5. Human Oversight and Autonomy

AI should never fully replace the decision-making authority of teachers. AI systems should be able to advise on lesson plans or grading, but it must always be possible for teachers and other staff at educational institutions to adjust or override AI-generated outcomes.

Key question: To what extent does this AI application ensure that teachers retain ultimate control over its outcomes, and can teachers modify or disregard results generated by the AI?

6. Human Rights, Dignity and Accessibility

AI systems must be designed to respect fundamental human rights and must not infringe on human dignity. This means they should not be used for manipulation or to undermine individual freedom, particularly when it comes to vulnerable groups. These systems should be accessible to learners of all abilities, and they should account for diverse behaviours and needs. For example, a lack of eye contact or fidgeting should not be interpreted as a lack of interest.

Key question: How does this AI application ensure that it is accessible to all, and that human dignity and rights are upheld?

7. Ensuring Equality in Use

AI tools should not exacerbate existing inequalities in access to technology. Both schools and developers must ensure that all students and staff of an institution, regardless of socioeconomic background, geographic location, or ability, have equitable access to the AI tools that are adopted. For example, a digital learning tool that requires high-speed internet at home might disadvantage certain students.

Key question: To what extent can it be guaranteed that all students and teachers have equitable access to this AI tool, including the necessary infrastructure and support?

8. Ecological Impact

Educational tools should be evaluated for their ecological footprint, as the energy consumption of AI systems can vary greatly. While a basic adaptive learning platform may have minimal environmental impact, generative AI applications and large-scale data analysis consume significantly more energy. Institutions should weigh the ‘environmental costs’ of such tools against their educational value.

Key question: Are the environmental impacts of this AI application proportionate to the educational significance of the task it performs, and have measures been taken to limit the ecological footprint?

9. Training and Awareness

Institutions must be able to properly train teachers and other personnel on how to use AI tools effectively and ethically. This includes understanding the benefits, risks, and limitations of AI in education. For instance, teachers using adaptive learning platforms should know how to interpret AI-generated recommendations and adjust them based on their professional judgment. 

Key question: Does the institution have sufficient funding, expertise, resources, and time to adequately train teachers and other personnel on the ethical, practical, and pedagogical implications of using this AI tool?

Download the nine principles as a PDF.
