Intelligence, whether natural or artificial, has no value in and of itself. An individual’s intelligence tells us nothing about his or her morality, and the same is true of any other intelligent entity. Intelligence can, however, have instrumental value: it is a tool that can lead us towards or away from a goal we wish to attain. Thus, artificial intelligence can create new risks and exacerbate social and economic inequalities. But it can also contribute to well-being, freedom and justice.


From an ethical point of view, the development of AI poses unprecedented challenges. For the first time in history, we have the opportunity to create non-human, autonomous and intelligent agents that do not need their creators in order to accomplish tasks previously reserved for the human mind. These intelligent machines do not merely calculate better than human beings; they also search for, process and disseminate information. They interact with sentient beings, human or non-human. Soon, they will even be able to keep them company, as would a parent or a friend.


These artificial agents will come to directly influence our lives. In the long term, we could create “moral machines”: machines able to make decisions according to ethical principles. We must ask ourselves whether these developments are responsible and desirable. And we can hope that AI will make our societies better, in the best interest of, and with respect for, everyone.


The principles and recommendations that we are asking you to develop collectively are ethical guidelines for the development of artificial intelligence. In this first phase of the declaration, we have identified seven values: well-being, autonomy, justice, personal privacy, knowledge, democracy and responsibility. For each value, you will find a series of questions exploring its relationship with the development of AI. We then put forward a general principle, which is not meant to answer those questions directly.

Well-being

  • How can AI contribute to personal well-being?

  • Is it acceptable for an autonomous weapon to kill a human being? What about an animal?

  • Is it acceptable for AI to control the running of an abattoir?

  • Should we entrust AI with the management of a lake, a forest or the Earth’s atmosphere?

  • Should we develop AI technology which is able to sense a person's well-being?

Proposed principle:

The development of AI should ultimately promote the well-being of all sentient creatures.


Autonomy

  • How can AI contribute to greater autonomy for human beings?

  • Must we fight against the phenomenon of attention capture that has accompanied advances in AI?

  • Should we be worried that humans prefer the company of AI to that of other humans or animals?

  • Can someone give informed consent when faced with increasingly complex autonomous technologies?

  • Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision?

Proposed principle:

The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.



Justice

  • How do we ensure that the benefits of AI are available to everyone?

  • Must we fight against the concentration of power and wealth in the hands of a small number of AI companies?

  • What types of discrimination could AI create or exacerbate?

  • Should the development of AI be neutral or should it seek to reduce social and economic inequalities?

  • What types of legal decisions can we delegate to AI?

Proposed principle:

The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental or physical ability, sexual orientation, ethnic or social origin and religious belief.


Personal privacy

  • How can AI guarantee respect for personal privacy?

  • Do our personal data belong to us and should we have the right to delete them?

  • Should we know with whom our personal data are shared and, more generally, who is using these data?

  • Does it contravene ethical guidelines or social etiquette for AI to answer our e-mails for us?

  • What else could AI do in our name?

Proposed principle:

The development of AI should offer guarantees that respect personal privacy and allow people who use it to access their personal data, as well as the kinds of information that any algorithm might use.


Knowledge

  • Does the development of AI put critical thinking at risk?

  • How do we minimize the dissemination of fake news or misleading information?

  • Should research results on AI, whether positive or negative, be made available and accessible?

  • Is it acceptable not to be informed that medical or legal advice has been given by a chatbot?

  • How transparent should the internal decision-making processes of algorithms be? 

Proposed principle:

The development of AI should promote critical thinking and protect us from propaganda and manipulation.


Democracy

  • How should AI research and its applications, at the institutional level, be controlled?

  • In what areas would this be most pertinent?

  • Who should decide, and according to which modalities, the norms and moral values determining this control?

  • Who should establish ethical guidelines for self-driving cars?

  • Should ethical labeling that respects certain standards be developed for AI, websites and businesses?

Proposed principle:

The development of AI should promote informed participation in public life, cooperation and democratic debate.


Responsibility

  • Who is responsible for the consequences of the development of AI?

  • How should we define progressive or conservative development of AI?

  • How should we react when faced with AI’s foreseeable consequences for the labour market?

  • Is it acceptable to entrust a vulnerable person to the care of AI (for example, a “robot nanny”)?

  • Can an artificial agent, such as Tay, Microsoft’s “racist” chatbot, be morally culpable and responsible?

Proposed principle:

The various stakeholders in the development of AI should assume their responsibility by guarding against the risks arising from their technological innovations.





Sentient being

Any being able to feel pleasure, pain, emotions; in short, to feel. In the current state of scientific knowledge, all vertebrates, and some invertebrates such as octopuses, are considered sentient beings. In biology, the development of this characteristic is explained by the theory of evolution.


Ethics (or Morals)

This is the discipline that examines the proper ways to behave, individually or collectively, by seeking to adopt an impartial point of view. It is based on moral norms and values.


Moral values

Moral values are related to good and evil: they allow us, for example, to qualify an action as just or unjust, honest or dishonest, commendable or blameworthy.


Epistemic value

Epistemic values are related to knowledge: they allow us, for example, to qualify an argument as valid or invalid, clear or unclear, pertinent or trivial.


Intrinsic value

A value is intrinsic when it serves as an ultimate justification, when it is sought in and of itself. For example, well-being, autonomy and justice can be sought in and of themselves; thus they are intrinsic values.


Instrumental value

A value is instrumental when it is in the service of something else, for example when it helps promote an intrinsic value. Money and intelligence are examples of instrumental values that can be put to the service of well-being, autonomy or justice.



Utopia

A possible world that embodies a collection of positive values. Thus, it can be said that a society in which AI frees people from all unpleasant work, allowing them to take care of each other while fully developing their personal potential, would be a utopian society.



Dystopia

The opposite of a utopia: a possible world that embodies a collection of negative values. Thus, it can be said that a society in which several corporations (or a single corporation) become extremely powerful thanks to AI, allowing them to control and exploit people, would be a dystopian society.

© 2017 Montreal Declaration for a Responsible Development of AI

Université de Montréal
