READING THE DECLARATION
A DECLARATION FOR WHAT PURPOSE?
The Montreal Declaration for responsible AI development has three main objectives:
1. Develop an ethical framework for the development and deployment of AI;
2. Guide the digital transition so everyone benefits from this technological revolution;
3. Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development.
A DECLARATION OF WHAT?
The Declaration’s first objective consists of identifying the ethical principles and values that promote the fundamental interests of people and groups. Applied to the digital and artificial intelligence field, these principles remain general and abstract. To read them correctly, it is important to keep the following points in mind:
Although they are presented as a list, there is no hierarchy. The last principle is not less important than the first. However, it is possible, depending on the circumstances, to lend more weight to one principle than another, or to consider one principle more relevant than another.
Although they are diverse, they must be interpreted consistently to avoid any conflict that could prevent them from being applied. As a general rule, the limits of one principle’s application are defined by another principle’s field of application.
Although they reflect the moral and political culture of the society in which they were developed, they provide the basis for an intercultural and international dialogue.
Although they can be interpreted in different ways, they cannot be interpreted in just any way. It is imperative that the interpretation be coherent.
Although these are ethical principles, they can be translated into political language and interpreted in legal fashion.
Recommendations were made based on these principles to establish guidelines for the digital transition within the Declaration’s ethical framework. These recommendations cover a few key cross-sectoral themes for reflecting on the transition toward a society in which AI helps promote the common good: algorithmic governance, digital literacy, digital inclusion of diversity, and ecological sustainability.
A DECLARATION FOR WHOM?
The Montreal Declaration is addressed to any person, organization or company that wishes to take part in the responsible development of artificial intelligence, whether to contribute scientifically or technologically, to develop social projects, to elaborate rules (regulations, codes) that apply to it, to contest bad or unwise approaches, or to alert public opinion when necessary.
It is also addressed to political representatives, whether elected or appointed, whom citizens expect to take stock of ongoing social changes, quickly establish a framework for a digital transition that serves the greater good, and anticipate the serious risks presented by AI development.
A DECLARATION ACCORDING TO WHAT METHOD?
The Declaration was born from an inclusive deliberation process that initiated a dialogue between citizens, experts, public officials, industry stakeholders, civil society organizations and professional associations. The advantages of this approach are threefold:
1. Collectively mediate AI’s social and ethical controversies;
2. Improve the quality of reﬂection on responsible AI;
3. Strengthen the legitimacy of the proposals for responsible AI.
The elaboration of the principles and recommendations was a work of co-construction that involved a variety of participants in public spaces, in the boardrooms of professional organizations, around international expert round tables, in research offices, in classrooms and online, always with the same rigor.
AFTER THE DECLARATION?
Because the Declaration concerns a technology that has been progressing steadily since the 1950s, and whose pace of major innovations increases exponentially, it is essential to regard the Declaration as an open guidance document, to be revised and adapted in line with the evolution of knowledge and techniques, as well as feedback on the use of AI in society. At the end of the Declaration’s elaboration process, we have reached the starting point for an open and inclusive conversation about the future of humanity as served by artificial intelligence technologies.
For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks of which natural intelligence alone was thought capable: processing large quantities of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects. Given the immaterial nature of these tasks, and by analogy with human intelligence, we designate these wide-ranging systems under the general name of artificial intelligence. Artificial intelligence constitutes a major form of scientific and technological progress, which can generate considerable social benefits by improving living conditions and health, facilitating justice, creating wealth, bolstering public safety, and mitigating the impact of human activities on the environment and the climate. Intelligent machines are not limited to performing better calculations than human beings; they can also interact with sentient beings, keep them company and take care of them.
However, the development of artificial intelligence does pose major ethical challenges and social risks. Indeed, intelligent machines can restrict the choices of individuals and groups, lower living standards, disrupt the organization of labor and the job market, influence politics, clash with fundamental rights, exacerbate social and economic inequalities, and affect ecosystems, the climate and the environment. Although scientific progress and life in society always carry risks, it is up to citizens to determine the moral and political ends that give meaning to the risks encountered in an uncertain world.
The lower the risks of its deployment, the greater the benefits of artificial intelligence will be. The first danger of artificial intelligence development consists in giving the illusion that we can master the future through calculations. Reducing society to a series of numbers and ruling it through algorithmic procedures is an old pipe dream that still drives human ambitions. But when it comes to human affairs, tomorrow rarely resembles today, and numbers cannot determine what has moral value, nor what is socially desirable.
The principles of the current declaration are like points on a moral compass that will help guide the development of artificial intelligence toward morally and socially desirable ends. They also offer an ethical framework that promotes internationally recognized human rights in the fields affected by the rollout of artificial intelligence. Taken as a whole, the principles articulated lay the foundation for cultivating social trust toward artificially intelligent systems.
The principles of the current declaration rest on the common belief that human beings seek to grow as social beings endowed with sensations, thoughts and feelings, and strive to fulfill their potential by freely exercising their emotional, moral and intellectual capacities. It is incumbent on the various public and private stakeholders and policymakers at the local, national and international level to ensure that the development and deployment of artificial intelligence are compatible with the protection of fundamental human capacities and goals, and contribute toward their fuller realization. With this goal in mind, one must interpret the proposed principles in a coherent manner, while taking into account the specific social, cultural, political and legal contexts of their application.