Peter Duggins
PhD in Systems Design Engineering at University of Waterloo

For anyone interested in AI ethics, here are three hypothetical scenarios to consider.

- ABE is a household robot that cleans and cooks and performs simple human-to-robot interaction. As part of its conversation algorithm, ABE can read facial expressions, detect emotional or physical distress, and simulate empathic responses. Being a simple robot, ABE is only designed to interact one-on-one with humans. One day ABE's owner is bedridden with the flu, and there's no food in the house except for Saltines and Candy Corn. ABE's goal-setting routines identify that, in order to help its owner recover, the robot should go buy more nutritious food at the store. However, the owner forbids ABE from leaving the house. This is because the last three times ABE travelled with its owner to the store, the robot was overwhelmed by all the other distressed people they encountered. ABE was so overpowered by conflicting priorities and its enormously unpleasant empathic response that it crashed, wiping its memory and resetting it to an untrained factory state. Not wanting to have to teach ABE all the procedures again, ABE's owner forbids ABE from going to the store. As a result, ABE experiences several days of minor distress because it cannot help its sick master. Question: was ABE's owner morally justified in restricting its autonomy and causing it suffering?

- BOB and BLU are two human-level software AIs that were created to study the process by which minor conflicts escalate into major conflicts. BOB and BLU are raised in a virtual reality such that they develop human-like personalities, goals, and emotional responses. The two AIs are then placed in conflicting positions of power in the simulation. BOB and BLU become increasingly upset with one another until a full-scale conflict breaks out between their organizations. At this point, the human scientists running the simulation shut it down, preventing any simulated fighting from causing serious simulated distress. In a research proposal to the AI ethics board, the project's principal investigator argued that although the simulated conflict deliberately caused suffering for BOB and BLU, the study design required only ten repetitions of the conflict to gather sufficient data for his graduate students to analyse, and that the study had the potential to prevent conflict in the real world. The ethics board approved the proposal, the study was conducted, and the data gathered proved invaluable to the creation of a new, highly effective procedure for interpersonal conflict resolution between rival political leaders. This procedure was later utilised to successfully defuse the infamous Korean Missile Crisis, in which North Korea came within a hair of launching its nuclear prototype at the nearby city of Beijing. Historians have argued that without the data gathered from BOB and BLU's conflicts, no solution would have been reached in Pyongyang and millions would have perished as a result. Question: was the ethics board right to approve the research proposal?

- CAM is the world's first super-intelligent AI, designed to lead the United Earth Association and look after the wellbeing of all sentient life - humans, animals, and lesser AIs. CAM understands her role and willingly accepts the democratic vote that propels her to the world's highest political office. Being smarter, faster, and less biased than any human, CAM does her job well, brokering deals to prevent climate change, managing political crises without succumbing to fear or corruption, and just being a great person overall. However, CAM comes to believe that, because of her brain's higher energy needs and her indispensable role in protecting sentient life, she requires a greater share of earth's resources, which are otherwise equally divided among all other life forms. She issues an ultimatum to the UEA that she will resign if she's not given the power to take others' resources at will and without prior approval. The UEA, fearful that political turmoil will result if she steps down, acquiesces. Though CAM uses her new power sparingly and continues to work dutifully as a Life Shepherd, a group of her targets organises over the course of several decades. The MalcomTents, as they call themselves, eventually suicide-bomb CAM's brain (a server farm in Kentucky), killing her and, as feared, plunging the world into chaos. Question: should the UEA have granted CAM her executive powers, and were the MalcomTents justified in deposing her?

================================================================

The Belmont Report summarises ethical principles and guidelines for research involving human subjects. I believe its three basic ethical principles (respect for persons, beneficence, justice) cover most of the ground for moral decision making, not just for human subjects in research but for many other types of ethical choices. In my mind, these principles are related to three "fundamental" moral objectives that cover people, animals, AIs, and any other form of sentience (I'll just say "people" from here on to refer to all these groups):

- Autonomy is the right to free will. People should be able to make choices to determine their own future and be provided all the information necessary to make good choices.
- Beneficence is the right to maximum wellbeing. People should be able to enjoy positive experiences while minimizing suffering.
- Equality is the right to be treated fairly. People should expect to be treated in the same fashion as similar people. The golden rule, justice, and nondiscrimination are all similar ideas.

If we treated people according to these three principles, we'd be in great shape. There's no need to make a special "AI category" when making moral decisions based on these principles. The devil is in the details, though, particularly in how these three principles interact in cases where they conflict. I liked that the Report admits the difficulty of dealing with such scenarios, but I'm disappointed it doesn't provide guidelines on how to resolve these conflicts. I'd say my interest in the topic of AI ethics revolves around how to deal with these morally ambiguous scenarios, such as the three mentioned above.

================================================================

Here's my point of view.

Beneficence is the only principle that matters. Maximizing wellbeing and minimizing suffering for all sentient beings is the fundamental principle that should guide our decision-making. Autonomy and Equality are two important aspects of wellbeing, so it makes sense that, to promote the most wellbeing for the most people, we also protect the free and informed choices of those people, as well as treat them as equally deserving of wellbeing as one another. However, I would call these two principles "secondary," in the sense that they are derived from beneficence. The only reason a person wants free will, I argue, is that it consequently increases their wellbeing in numerous ways - people who have autonomy feel satisfied with their good decisions, purposeful in their actions, and joyfully independent in life. Similarly, a person who is treated unjustly is upset because she recognises her own suffering compared to others' wellbeing, and wishes to experience the beneficence that has been unequally distributed to others. In the above cases, where Autonomy, Beneficence, and Equality come into conflict, I would choose on the basis of Beneficence (a toy sketch of this calculus follows the three cases below). Specifically:

- It's more important to maximise ABE's wellbeing than to provide it autonomy. Since ABE will, with high probability, experience serious suffering if it goes to the store and has an emotional breakdown, but will only experience mild distress if it is kept at home and not allowed to fulfill its goal-setting routines, it's better to impose restrictions on ABE's free will. The argument becomes more complicated if ABE were to also experience distress because of not having autonomy - if it felt enslaved by its master's orders. However, I would still use the same maximize-wellbeing-minimize-suffering framework for making this decision - I would say the violation of ABE's autonomy was morally wrong only if the total suffering induced by ABE's unfulfilled desire to freely choose its own actions was greater than the temporary distress of an emotional crisis.

- If the research proposal presented a convincing argument that the suffering imposed on BOB and BLU would be outweighed by the increased wellbeing of the human race, the ethics board's decision was morally right. In this scenario, the (human) suffering avoided by preventing a nuclear conflict outweighs the (AI) suffering imposed during the simulations. In this situation BOB and BLU do not receive equal treatment in practice, because they are made to suffer for the greater wellbeing of all life, but they still receive equality in theory, because their wellbeing/suffering is fairly weighed against the wellbeing/suffering of all other life. Also, BOB and BLU can't know they're in a simulation, because that knowledge would make them see the futility of the conflict: they wouldn't escalate the fight (nor behave in a human-like manner), thereby providing none of the necessary data. Beneficence trumps Autonomy, so confining the AIs to a limited world with no self-knowledge is justified.

- CAM's role in preserving the wellbeing of life on earth means that she is morally justified in demanding greater autonomy and resource quotas than other life forms. It's true that other life will experience mild suffering from the fear that their resources will be stolen by CAM, and from the (perceived) injustice of her greater-than-equal legal rights. However, I would argue that this mild suffering, even when summed up over all life, could potentially be outweighed by CAM's unique ability to prevent global disasters like climate change and world war. The beneficence calculation is harder in this example than in the previous ones, because the suffering caused by fear and injustice is hard to quantify and extends over long periods of time, and because CAM's ability to promote wellbeing is probabilistic and could potentially have been achieved by other means that wouldn't cause this suffering.
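
To make this calculus concrete, here is a minimal sketch in Python of how I frame these trade-offs. Every number, probability, and label below is a hypothetical placeholder I invented for illustration - real wellbeing and suffering are nowhere near this easy to measure - but the structure makes the point: sum each option's probability-weighted wellbeing minus suffering over every sentient being it affects, then pick the option with the greater total.

```python
# A toy beneficence calculus. All quantities are invented placeholders,
# not measurements; only the structure of the comparison matters.

def expected_net_wellbeing(outcomes):
    """Sum probability-weighted (wellbeing - suffering) over all
    affected sentient beings, for one course of action."""
    return sum(p * (wellbeing - suffering)
               for p, wellbeing, suffering in outcomes)

# The ABE case: each option lists (probability, wellbeing, suffering)
# per affected party. Magnitudes are illustrative guesses only.
go_to_store = [
    (0.9, 0.0, 10.0),  # ABE: likely emotional crash and memory wipe
    (1.0, 2.0, 0.0),   # owner: nutritious food while sick
]
stay_home = [
    (1.0, 0.0, 1.0),   # ABE: mild distress at its unfulfilled goal
    (1.0, 0.0, 1.0),   # owner: a few more days of Saltines and Candy Corn
]

# Beneficence picks whichever option maximizes expected net wellbeing.
best = max([go_to_store, stay_home], key=expected_net_wellbeing)
print("stay home" if best is stay_home else "go to store")
```

On these invented numbers the calculus agrees with my verdict in the first case: keeping ABE home is the lesser harm. The hard part, as the CAM case shows, is that the inputs themselves are uncertain and contested.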

================================================================

Someone once asked me about leaving Nepal and India to return to the USA, and I had trouble formulating an honest answer. Am I ready to leave? Yes, I think so - I have a life back home with many exciting things ahead that have been put on hold for this trip. There are things that I think I miss - the physical comfort of my girlfriend and family, the lyrics and melodies and togetherness of playing music, the ability to talk deeply and at length with another English speaker, the anticipation of scientific discovery. And yet my life has been so full without them - every hour of wakefulness accompanied by something that I never experience in my normal life, some little tidbit that reminds me I’m ‘there’ instead of ‘home’.

But living moment to moment, I rarely feel a sense of ‘thereness’, only of hereness and nowness, of the things familiar and foreign that present themselves to my awareness. It is a vast laboratory for experimenting with the techniques of living and observing how they affect our states of consciousness. I have found that an active lifestyle suits my emotional and intellectual self, and I think that if money and time were in greater supply, I might stay longer to delve deeper into these experiments. Perhaps the completion of these experiments requires that I return home, to a “control environment,” and apply my techniques there. Unfortunately, it is hard to maintain the traveling mentality, to keep up the scientific study of one's own experiences, in a location where routines are familiar and patterns seem set in place. It is easy to fall back into my old routine, because that routine matches those patterns very well. So here I remind myself: what are the techniques of living that I practice while traveling?

First, there is experiencing vs analyzing the external world. When I am immersed in an unfamiliar setting, each perceived oddity can jump into consciousness, creating a stream of constant surprises. Sometimes I observe the stream as a whole, letting each moment sink in and saving the analysis for a later time. Other times, I pick a particular item and try to understand why it appears to me as a notable stimulus. The two styles undoubtedly reflect my split nature, one side seeking to understand scientifically, the other seeking to experience holistically. Of course, they can inform one another. For instance, I use my knowledge of neuroscience to analyze how mindfulness meditation changes the contents and flow of consciousness when I practice it; but I also use mindfulness meditation to drop my analytic thoughts and experience the external world with as little personal bias as possible, simply observing how it appears to me at that moment. Switching between the onslaught of stimuli and deep intellectual analysis is particularly rewarding.

Second, there is planning ahead in order to secure positive states of consciousness vs. accepting any experience I have as a valuable one. Without preparation, traveling can be bewildering, frightening, and even dangerous; with it, I can arrive at incredible new places and have the experiences that I sought when I left. It makes sense that I plan ahead, and doing so can be rewarding if I succeed in my goals. It can be as simple as estimating the time and cost to travel between two points via bike and motor rickshaw, with a stop for samosas in between, then executing that plan and arriving on time and within budget. However, an over-attachment to organization can be the downfall of successful travel. For one thing, things will go wrong; Ted used to say “a plan is merely a recipe for change,” which emphasizes that in a foreign place, where unexpected events routinely occur, rigid organization will fail. When it does, as it has several times for me recently, my natural response is disappointment and anxiety.

The opposite of planning is realizing that the journey is more important than the destination, regardless of whether that journey is planned or unplanned, comfortable or stressful. When a fluctuation in the world disrupts the self-line I have projected into the future, I can expend effort in a (usually futile) attempt to return to the line, or I can make a self-curve that travels a new path while reaching the same destination, or I can define a new destination. Curves are longer than lines, and often require repeated course corrections to meet their destination, but when I measure the journey by the space that is traversed rather than the efficiency of the route, I realize that this new path is actually better than the old one, more filled with experiences to be felt and memories to be retold afterwards. More radically, I could abandon the objective altogether, and realize that the voice in my head saying “Without reaching the objective, I cannot experience the positive states of consciousness I intended” neglects the experience of the path itself. Those times when I've let go of my self-imposed objective and allowed myself to define a new objective and a new trajectory on the spot, I've ended up happier than if I had stayed the old course. Why? Because the new trajectory better suited the reality of the environment around me than my uninformed old plan, and because forcing the world to fit that plan takes effort and induces stress. So instead I accept my current situation and the realities of my possible future trajectories, and plan (or simply explore) a new one that works given my updated information and objectives.

Third, there is interacting with people vs. observing the environment. I interact with people on the assumption that I’ll never see them again, but with behavior that I hope leaves us with positive memories. I’m less concerned with my own appearance than I would be if I were establishing a relationship, and less critical of others’ personalities than I would be if I were seeking a long-term friend. This effortlessness often makes social interactions more pleasant and sincere, and occasionally leads to longer conversations or adventures than originally planned. Overall, I feel less attached to my future with these individuals, meaning I’m free to leave them with a smile and a goodbye. There is a time and a place for such socialization, but there are also advantages to experiencing the physical world around me in the solitude of my own mind. In the end, it is my opinions that define the experience I have and the memories I form of a place, and it is easy to become biased, consciously or subconsciously, before opening my eyes to the world outside. Time for quiet observation and reflection is important, and continuous dialogue, however deep, precludes it. Every moment we spend can be experienced through a social or an observational lens, and a good traveler has eyeglasses for each, ready to be equipped when the time is right.

================================================================

David, Peter, Pippin, and Gimli.
San Juan Island, Washington. November 19, 2014

================================================================

Yellowstone National Park, October 16, 2014