Ethics Methodology: The Public Ethics of Artificial Intelligence

By Marc-Antoine Dilhac

Assistant Professor, Department of Philosophy, Université de Montréal 

Chair of the Ethics and Politics Group, Centre de recherche en éthique 

Canada Research Chair in Public Ethics and Political Theory


Coconstruction workshops methodology

By Christophe Abrassart

Assistant Professor at the École de design of Université de Montréal, and codirector of the DESS in strategic eco-design, Polytechnique Montréal

Associate professor with CIRAIG, Polytechnique Montréal

Researcher with the Design et Société group, École de design, UdeM, and with the Centre de recherche en éthique

Associate researcher with the Mosaic group, HEC Montréal



By Marc-Antoine Dilhac, Assistant Professor, Department of Philosophy, Université de Montréal


The Montreal Declaration for a Responsible Development of Artificial Intelligence is a collective creation that aims to put the development of artificial intelligence at the service of everyone, and to guide social change by formulating recommendations with strong democratic legitimacy. The method selected is citizen coconstruction, carried out in several phases. This coconstruction rests on a declaration of general ethical principles that articulate fundamental values: well-being, autonomy, justice, privacy, knowledge, democracy and responsibility. The initial work of determining these values and principles launches a participation process that will specify the ethical principles for responsible AI development, along with the recommendations to implement so that AI promotes the fundamental interests of people and groups.

Taking AI Ethical Dilemmas Seriously

The artificial intelligence revolution, especially deep learning, opens up innovative avenues of technological development that will help improve decision-making, reduce certain risks and assist those who are most vulnerable. But the hope that artificial intelligence will bring social progress is shadowed by a fear: placed in the wrong hands, artificial intelligence could become an instrument of domination (control over private life, concentration of wealth, new forms of discrimination).

Many people have expressed doubts about the goals that drive researchers, developers, entrepreneurs and political representatives. While the progress of artificial intelligence inspires wonder, it also awakens fears of seeing human relationships weaken through the use of robots in medical care, elderly care, legal representation or even education. Reactions to the development of artificial intelligence can even be hostile, particularly when it is used to tighten control over individuals and society, at the cost of a loss of independence and the curtailing of public liberties.

The development of artificial intelligence and its applications brings fundamental moral values into play, values that can conflict and produce serious ethical dilemmas, as well as deep social and political controversies. Should we insist on public security by increasing the means of intelligent surveillance (facial recognition, anticipation of violent behaviour), at the cost of individual freedoms? Can we objectively improve the well-being of individuals, for instance by nudging people towards behaviours normalized by smart tools (eating habits, workload management, organizing one’s day), without compromising their autonomy? Should economic performance prevail over the concern for a fair distribution of the benefits of the artificial intelligence market?

These are just some of the dilemmas that set fundamental values and interests against one another, and that cannot be solved with a simple hierarchy of values. In other words, it is not a matter of choosing between values, favouring some while excluding others (security over freedom, efficiency without social justice). It is also vain to hope that definitive, one-size-fits-all solutions can be devised. It is therefore important to take the ethical dilemmas raised by the development of artificial intelligence seriously, and to build a political, ethical and legal framework that allows us to face them while respecting the different fundamental values we hold dear as members of a democratic society.

Give back to democracy the ability to provide a framework for responsible AI

The development of artificial intelligence and its uses involves a great number of players and professions (researchers, developers, lawyers, police, industrialists, political representatives, etc.) who are driven by common ideals such as a concern for justice, well-being and freedom. But these players also have goals and interests that can conflict with those of the community and of the users of artificial intelligence, such as the pursuit of financial gain and market share, the temptation of economic monopoly, and control over society.


The question we, as a community, must ask ourselves is not whether artificial intelligence is good or bad; one might as well ask whether the invention of the wheel was good or bad, since it all comes down to use. The relevant question is rather what type of social and political arrangement would allow us to reap the benefits of artificial intelligence in an equitable fashion and make the best use of smart devices to increase our autonomy, whether driverless cars, screening and surveillance equipment, or helper and teacher robots.

As members of a democratic society, our main concerns are protecting the capacity of groups and individuals to make choices that make sense for them, increasing their capacity to act on and shape their environment, and preventing the domination exercised by those who exploit our common vulnerability. While these goals are commonly agreed upon in a democratic society, applying them to guide the development of artificial intelligence already creates controversy. It is not apparent at first glance, for instance, that the use of self-driving cars will increase the autonomy of their users, or of those who would choose not to use them.

To resolve the many concrete questions posed by the use of smart devices, and to make sure that artificial intelligence is developed “intelligently” with respect to democracy, an additional democratic effort is necessary: involving the greatest possible number of citizens in a reflection on the social issues of artificial intelligence. The goal is not simply to find out what individuals intuitively think about this or that innovation, or to poll their raw, uninformed preferences. The goal of a coconstruction approach is to open a democratic discussion on how we must organize society to ensure a responsible use of artificial intelligence.

This is the meaning of the Montreal Declaration initiative: giving back to democracy the ability to decide moral and political questions that concern society as a whole. It is not only a question of reflecting on democratic issues, but also of offering credible answers to pressing questions and formulating political and legal recommendations with strong democratic legitimacy.

Citizen Involvement

From the moment the public is involved in a consultation and participation (coconstruction) process on controversial social questions, the process must be designed to avoid the risks usually associated with such a democratic exercise. Two objections are traditionally raised to disqualify public involvement:

  1. Ignorance: according to this objection, which is the most common, the public is ignorant and does not possess the ability to understand complex issues that require scientific knowledge, mastery of logical forms of argument and knowledge of political and legal processes.

  2. Populism: according to this objection, which is the more alarming one, the involvement of an unqualified public creates an opportunity for demagogic manipulation that stokes popular stereotypes and can lead to the adoption of unreasonable propositions, hostile to social progress, or even tyrannical towards minorities.


We do not share the belief that the public is so ignorant that it must not be consulted, nor do we subscribe to the idea that the non-expert members of our society hold insurmountable prejudices and are led by an alleged irrationality into systematic errors. Ignorance is certainly a serious problem, but we believe that non-experts can shed light on neglected aspects of social controversies, because they are directly concerned by the issues discussed, and that they can contribute meaningful solutions that experts have not thought of, or were unable to support publicly.

While prejudices and a tendency towards irrationality cannot be completely eliminated in every individual, these biases can be overcome collectively. Under favourable conditions, non-expert individuals can take part in complex debates on social problems, such as those raised today by artificial intelligence research and its industrial applications. Experts in the various matters relevant to citizen involvement on artificial intelligence can help put these favourable conditions in place.

We have identified four conditions necessary for the coconstruction process: epistemic diversity, access to relevant information, moderation and iteration.

a. Epistemic diversity

We must first ensure that the deliberating groups are as diverse as possible, in terms of social environment, sex, generation or ethnic origin. This diversity is not only required by the idea we have of an inclusive democracy, but is also necessary to increase the epistemic quality of the debates. This simply means that every person brings a different perspective to the subject being debated, and thus enriches the discussion.

b. Access to relevant information

We know, however, that epistemic diversity is not enough: if the participants have no skills or knowledge in the field being discussed, they can neither produce new knowledge nor find their way through the discussion. They are then collectively liable to amplify individual mistakes. We must therefore prepare the participants by providing relevant, quality information that is both accessible and reliable. The deliberations must thus be preceded by an information phase.

c. Moderation

Beyond having quality information, participants must be able to reason freely, that is, without being impeded by cognitive biases. We define a cognitive bias as a distortion of rational thought by intuitive mechanisms. One of the most common, and most problematic in a deliberation, is the confirmation bias: we have a tendency to accept only opinions that confirm our own beliefs, and to reject those that go against what we already believe. There are dozens of cognitive biases that can distort the logical course of our reflections.


There are also biases that apply to the deliberation itself, such as the tendency to adopt increasingly radical positions: if the deliberating group is initially distrustful of artificial intelligence innovations, it is quite probable that it will be entirely hostile towards them by the end of the deliberation process. To avoid this type of polarization, we believe it is important to ensure epistemic diversity in the deliberating group and to put a moderation mechanism in place.


This does not necessarily have to take the form of personal intervention by a moderator. Although we do not reject personal moderation, we believe deliberation biases can also be overcome through other means, such as introducing unexpected events into the scenarios that spark the discussions.

d. Iteration

Ideally, we would bring together the population as a whole to take part in a reflection on the responsible development of artificial intelligence. But the conditions we have just described cannot be applied to very large groups, let alone a society of many millions of people. Citizen involvement must therefore be conducted in smaller groups, with an increased number of meetings. This is the iteration phase of coconstruction.

The reasons for proceeding this way are technical, but easily understood. The mathematician and French Revolution figure, the marquis de Condorcet, demonstrated that the majority judgment of a group is right more often than each person taken individually, and that this advantage increases as the group grows larger. For this to hold, however, two conditions must be met: each individual in the group must have a better than fifty-fifty (50/50) chance of being right, and the individuals must not communicate with one another (Condorcet rightly feared the risks of manipulation).

Yet for very large groups we cannot ensure that the individuals have the required skills, that is, that each individual has a better than fifty-fifty chance of holding an adequate opinion. Allowing deliberation (communication between participants) is one way to increase the competence of the participants, as long as it takes place within the framework we are suggesting. Of course, this violates Condorcet’s second condition, but it helps secure the first. And to further increase the quality of opinions, we must multiply the deliberating groups: since we cannot increase the size of each group, we increase the number of participants by iterating the participation sessions.
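The arithmetic behind Condorcet’s result is easy to check numerically. The following is a minimal simulation sketch, not part of the Declaration’s methodology; the individual competence p = 0.6 and the group sizes are illustrative assumptions. It estimates how often a majority vote of independent voters is correct as the group grows:

```python
import random

def majority_accuracy(p, group_size, trials=20_000, seed=1):
    """Estimate how often a strict majority of independent voters,
    each correct with probability p, reaches the right answer."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Each voter is independently right with probability p.
        correct_votes = sum(rng.random() < p for _ in range(group_size))
        if correct_votes > group_size / 2:  # strict majority is right
            wins += 1
    return wins / trials

# Illustrative run: p = 0.6 satisfies Condorcet's first condition
# (better than fifty-fifty); the independent draws model his second
# condition, the very one that deliberation deliberately relaxes.
for n in (1, 11, 101):
    print(n, round(majority_accuracy(0.6, n), 3))
```

The same sketch also shows the flip side that motivates the information phase described above: when individual competence falls below one half, a larger group makes the majority almost surely wrong, which is why competence must be secured before scaling up participation.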

For all of these reasons, we have selected the structure of a coconstruction workshop that brings together non-expert citizens, experts, stakeholders (associations, unions, professional representatives, businesses), as well as political players. These workshops are organized in different formats adapted to the deliberation spaces and satisfy the conditions for fruitful, solid citizen involvement. This is how we ward off the spectres of ignorance and populism.

The Role of the Experts

The experts’ role is not to solve the ethical dilemmas raised by artificial intelligence themselves, nor to become legislators. It is important to remember that ethical dilemmas have no single, simple solution; otherwise they would not be true dilemmas. Ethicists can sometimes give the impression of preaching, of knowing the answers to the sensitive questions the public is asking itself, and even of being able to solve tomorrow’s problems in advance.

We must add that even when we are not, strictly speaking, facing a dilemma, public ethics questions cannot be settled without making choices that favour certain moral interests over others, without thereby dismissing the latter. This is the result of the value pluralism that defines the moral and political context of modern democratic societies. It is possible, for example, to favour well-being by challenging the priority of consent: think of a medical app that could access personal data without our consent, but that would, thanks to this data, help treat serious diseases more effectively.

This type of ethical and social choice should be in the hands of all members of our democratic society, and not just a part, a minority, even if they are experts. Ethicists play three modest but crucial roles:

  • They must ensure that favourable conditions are in place for citizen involvement;

  • They must clarify the ethical issues that underlie the controversies surrounding artificial intelligence;

  • They must help rationalize the arguments defended by the participants, by identifying arguments known to be wrong or biased and explaining why they are wrong.


The role of ethicists is therefore one of informed guidance. Experts in other research fields (computer science, health, safety, etc.) also play a guidance role by providing participants with the most useful and reliable information on the object of controversy. (How does an algorithm that learns to make a diagnosis work? Can a doctor be replaced by a robot programmed for diagnosis? What protections can we put in place against attempts to hack our medical data? etc.)

We therefore have no intention of reasoning in place of the citizens in order to build an ethical and legal framework that would then simply be proposed to the public for validation. It is possible that public ethics questions regarding AI will be solved differently from one democratic society to another. But we can also hope for a convergence of the proposed solutions. The Montreal Declaration intends to submit this ethical framework for reflection at the international level.

By the end of the process, we should reach a form of reflective equilibrium between the ethical principles that serve as the basis for the deliberation (the seven principles of the initial Declaration) and the political and ethical recommendations developed throughout the participation process, up until the final summary.

From coconstruction to social action

One of the goals of the coconstruction process of the Montreal Declaration is to hone the ethical principles suggested in the preliminary version of the Declaration. The most important goal, however, remains developing recommendations to provide a framework for artificial intelligence research and its technological and industrial developments. Yet it is all too common to see analysis and recommendation reports forgotten as soon as they are published: social transformation requires an additional collective energy that is often spent by the end of the participation phase. It is therefore crucial not to let the public enthusiasm generated during the coconstruction period go to waste.

Once the coconstruction period has been completed, a period of public debate begins, one that must be carried into the spaces where political, legal and professional decisions are made. To launch this debate, we must offer a declaration that is as accessible as possible and that contains a short list of targeted recommendations, those that received the most support from participants. These recommendations are not only legal in nature, and when they are, they do not necessarily require changing the law. But they can, and in certain fields sometimes must, call for a modification of the legal framework. In other cases, the recommendations will help fuel and guide the reflections of professional organizations as they modify their code of ethics or code of conduct, or adopt a new one.

This step is therefore the final step in the coconstruction process. We must, however, be clear: faced with a technology that has not ceased to progress for over 70 years, and whose major innovations now occur every 5 years on average, it would be unreasonable to present the Montreal Declaration for a Responsible Development of Artificial Intelligence as a definitive, complete document. We believe it is essential to think of coconstruction as an open process, with successive and cyclical phases of deliberation, participation and recommendation, and to think of the Declaration itself as a guidance document that can be reviewed and adapted as artificial intelligence evolves.

Ethics supporting a coconstruction approach

The coconstruction approach is supported by the expertise developed at the Centre de Recherche en éthique (CRÉ), at LabVille and at the Chaire de recherche du Canada en Éthique publique et théorie politique.

The Centre de Recherche en éthique (CRÉ)

The CRÉ is a strategic cluster of the Fonds de recherche du Québec that brings together 40 regular members (affiliated with 7 universities and one affiliated school) and 30 collaborating members.

The CRÉ’s mission is to contribute to the advancement of knowledge and training in the field of ethics, broadly understood to include the study of its foundations and major concepts, as well as the normative dimensions of public policies in various fields, such as medicine, the management of natural and human environments, wealth redistribution or social and cultural diversity.

The CRÉ also aims to bring the results of its researchers’ work to civil society and to inform public policy makers whose decisions can affect the well-being of citizens or social justice. The CRÉ is organized into 5 lines of research - fundamental ethics; ethics and politics; ethics and the economy; ethics and the environment; ethics and health - each with its own working team.

Through a collaboration with IVADO, the CRÉ has developed expertise in artificial intelligence ethics. The CRÉ positions itself as the leader on the subject in Canada.  


The CRC en éthique publique

The Chaire de recherche du Canada en Éthique publique et théorie politique is associated with the Centre de Recherche en Éthique, and its goal is to advance knowledge in the field of democratic theory. The CRC team, led by Marc-Antoine Dilhac, professor in the Department of Philosophy, brings together a dozen students and relies on international collaborations with academic institutions and non-governmental organizations.



By Christophe Abrassart – Professor, Faculté de l’Aménagement – UdeM - Lab Ville Prospective

Goal of the coconstruction approach

The first version of the Montreal Declaration for Responsible AI, presented on November 3, 2017, during the Responsible AI Forum, is the foundation of the coconstruction process. Schematically, after having defined the “what?” (“which desirable ethical principles should be gathered in a declaration on the ethics of artificial intelligence?”), this new phase consists of anticipating, with citizens and stakeholders, how ethical controversies surrounding AI could arise in the next few years (in the fields of health, law, smart cities, education and culture, the workplace, public services), and then imagining how they could be resolved (for example, through a mechanism such as sector-specific certification, a new mediating actor, a norm or a standard, a public policy or a research program).


The goal of the coconstruction approach and its workshops is specifically to exemplify and test the principles of the Montreal Declaration for Responsible AI through potential scenarios set in 2025 that will help specify sectoral ethical issues, and then to formulate priority recommendations for the Canadian and Quebec governments, stakeholders and the university research community. After this process, which will take place in the first half of 2018, additions to the Declaration may be formulated if orphan ethical issues are identified, i.e. issues with no corresponding ethical principle in the Declaration.


The process can be summarized as follows:

  • Montreal Declaration for Responsible AI (November 3, 2017): 7 principles for a responsible deployment of AI in society;

  • Co-construction workshops (February-May 2018): 2025 scenarios and the Declaration principles, identification of ethical issues, and recommendations to the Government of Quebec and stakeholders;

  • Validated Montreal Declaration on AI (end of 2018).

More than ten coconstruction workshops are planned between February and May: three-hour citizen cafés in public libraries, and three full coconstruction days with a variety of citizens and stakeholders (in Montreal, then Quebec City and Ottawa).


The choice of organizing citizen cafés in public libraries is directly tied to the current reinvention of these public services in Quebec and Canada. By evolving from lending spaces into inclusive “third space” libraries that seek to strengthen the capacities of all citizens (e.g. with digital literacy services, citizen support, cultural mediation and discussion areas, tool lending and the creation of Fab Labs), public libraries will most certainly play a key role in the responsible deployment of AI in Quebec and Canada.


The coconstruction days will be held in symbolic spaces (Société des arts technologiques in Montréal, Musée de la civilisation in Québec) and will focus in particular on bringing together stakeholders from the very diverse disciplines that must work together to imagine a responsible deployment of AI in society.

Originality of the coconstruction approach

When compared with other AI ethics initiatives currently underway in the world, this coconstruction approach will present four particularly original and innovative dimensions:


  • The use of foresight methods, with sectoral scenarios set in 2025 illustrating, through short narratives, how ethical controversies about AI could appear or intensify in the next few years (in the fields of health, law, smart cities, education and culture, the workplace). These 2025 scenarios, which present a variety of possible situations in the face of a wide-open future, will be used to spark debate and to identify, specify or anticipate sectoral ethical issues in the deployment of AI over the coming years. These discussions with a 2025 horizon can then help, working backwards, to formulate concrete recommendations for 2018–2020 that guide us towards collectively desirable situations.

  • The use of participative design facilitation methods in multidisciplinary “hybrid forums”, including citizens and stakeholders, in a context of shared uncertainty in the face of possible futures (to flesh out a scenario, design ways to respond to an ethical risk, suggest an addition to the Declaration in the case of an orphan issue, i.e. without a corresponding principle). 

  • Attention to the “paradigm biases” that have powerful reframing effects on the way problems are posed (e.g. tackling the ethical issues of self-driving cars strictly from the angle of the trolley dilemma [as on MIT’s Moral Machine site] and within the “speed-distance” paradigm of transport design), in order to ensure a plurality of issues and draw attention to still unknown or emerging situations in a context of rapid change.

  • A learning trajectory to develop, over the course of the events, a facilitation kit that is reproducible and user-friendly (e.g. a card game), adaptable and open, and that could be published as “open source” at the end of the coconstruction approach.

World cafés

World cafés are three-hour meetings held in public libraries. These meetings are inclusive, open to all citizens and conducted in a friendly atmosphere. They are based on the World Café model.


The world café is a convivial conversation format that seeks to facilitate constructive dialogue and the exchange of ideas. It recreates the ambiance of a café where participants debate a question in small groups. At regular intervals, participants change tables. One host stays at each table and sums up the previous conversation for the new arrivals. The ongoing conversations are thus “pollinated” by the ideas of the previous ones. At the end of the process, the main ideas are summed up in a plenary assembly, and possible follow-ups are submitted for discussion.

Source: Institut du nouveau monde

This method will be enriched by three elements:

  1. the reading of prospective sectoral scenarios set in 2025 to spark the discussion; 

  2. the use of a poster to document the discussions;

  3. the handout of a participant workbook presenting the principles of the Montreal Declaration for Responsible AI, a lexicon and an exemplified typology of possible recommendations. 

Here is what a typical world café looks like:

1:00 to 1:30 PM - Coffee and snacks

1:30 to 2:00 PM - Educational introduction: the ethical and social implications of artificial intelligence (Montreal Declaration for a Responsible Development of AI), presentation of the 2025 scenarios and of the activity.

2:00 to 4:00 PM - World café: four thematic islands (on AI in health, justice, education, smart cities and the workplace), each hosted by a facilitator. Each island welcomes a small group of participants (6 to 10) for a 50-minute discussion about an AI scenario set in 2025. Each participant takes part in two such discussions (two thematic islands with different people). To ensure continuity, the island’s facilitator fills out a poster with the issues and recommendations stated by the participants. In the second discussion, participants are also invited to imagine the “front page of a 2020 newspaper” (headline and first paragraph) describing an important Quebec initiative for a responsible deployment of AI.

4:00 to 4:30 PM - The facilitators sum up the posters from each thematic island, followed by a group discussion.

Co-construction days

These one-day meetings will bring together citizens, stakeholders and experts to explore sectoral issues in greater depth and develop recommendations. They rely on the prospective co-design model developed at the Université de Montréal’s Lab Ville Prospective.


The prospective co-design model rests on several principles at the crossroads of design, participation and foresight: the mobilization of typical scenarios and unfamiliar prototypes as conversation starters, as means of breaking cognitive fixation, and as exploration vehicles (the design dimension); collective participation devices bringing together players from multiple horizons, citizens as well as organizations and experts (the collective aspect of the “co”); and lastly, the foresight approach, which consists of projecting oneself into a possible future 10 or 20 years ahead, making this imaginative detour, and then working back from it to develop innovative paths that link the present to the most desirable futures.

Michel de Certeau, in La culture au pluriel (1993, p. 223), highlights the otherness of foresight: according to him, “the future engages the present in the mode of alterity”. And Georges Amar, in an article on conceptive foresight (Futuribles, 2015, p. 21), insists on the importance of creating a narrative around the unknown to build an open future: “We prefer the inefficient known to the promising unknown. The function of foresight is to work on the unknown, to put words, concepts, language on it, so that, while remaining unknown, it becomes more accessible and leads to reflection and action.”

Source: Abrassart et al., 2017


Here is what a typical coconstruction day looks like:

8:30 to 9:00 AM - Coffee and pastries

9:00 to 10:00 AM - Introduction and AI discovery exhibition: principles of artificial intelligence, ethical issues surrounding AI (Montreal Declaration) and the foresight scenarios.

10:00 to 11:30 AM - Team foresight work: starting with a trigger scenario and a variable, explore how an ethical controversy could appear or grow.

11:30 AM to 12:30 PM - Plenary presentation of team posters, and discussion with the group as a whole.

12:30 to 1:30 PM - Lunch on site

1:30 to 3:00 PM - Developing recommendations, in teams: using the ethical issues identified in the morning, develop recommendations (rules, sectoral codes, labels, public policies, research programs, etc.).

3:00 to 4:00 PM - Plenary team presentations and group discussion.

4:00 to 4:30 PM - Review and observations on the day.


Ville Prospective Lab


The Ville Prospective Lab was created in 2015 by professors Christophe Abrassart and Franck Scherrer. Drawing on participatory urbanism, social design, innovation studies and strategic foresight, the research team’s goal is to develop and test rigorous, innovative methods and knowledge for collectively coproducing what the city could become: imagining the energy transition and a post-carbon city, a circular city, a smart city and the management of Big Data, the city as a knowledge ecosystem, a city of inclusive third spaces, or the city as an entrepreneurial ecosystem.


Its research program is divided into two branches. The first concerns the instrumentation of urban foresight and innovative urban design (e.g. the prospective codesign method, the transfer of innovative industrial design methods to the city, methods for mapping urban controversies). In 2017, for instance, the Ville Prospective Lab was called upon to accompany the Rosemont borough in Montreal in exploring ways to “play, work and live there in 2037”.


The second branch focuses on the study of narrative as a matrix for urban collective action: the study of narrative creation models (planning, project, foresight fiction, rational myths), of foresight languages (words, cards, prototypes) and their performative utterance, and of the depiction, through narrative, of urban scales and lived urban times (reversibility-irreversibility, polyrhythms). In 2015, the Ville Prospective Lab co-organized an interdisciplinary ACFAS symposium on “Foresight narratives: design, urbanism, literature and avant-garde art”.


© 2017 Montreal Declaration for a Responsible Development of AI

Université de Montréal
