November 23, 2017
I have general comments that don’t particularly fit the way you are asking for feedback, so I am sending this initial “comment” for now. It may sound harsh, though :). Here is what I had written to an internal AI team here at the NRC in response to your declaration.
I believe that all of the proposed principles need “semantic tweaking”. Consider this principle as an example:
“The development of AI should ultimately promote the well-being of all sentient creatures”.
First, the principle starts with “the development of AI”. It is not only the development that needs to be addressed: the application and resulting outputs of AI are more crucial than the development itself. I understand that “the development” was probably meant as an overarching term encompassing the application and the resulting outputs, but that reading could be questioned.
More importantly, even if the term “should” is interpreted loosely, this principle can never be satisfied. The application of AI will NEVER promote the well-being of ALL sentient creatures. Just like human decision making, specific decisions (i.e. “outputs”) will almost always negatively impact SOME sentient beings in one way or another even as they positively impact others.
Overall, this makes the declaration too utopian, as currently formulated, to be pragmatic or useful. Here is an example of a more pragmatic set of “principles”:
I believe it is crucial to have “principles” that can be measured against meaningfully. As they stand, I believe these principles will make it difficult to provide true guidance.