The Moral Machine Experiment

I’ve dealt with scenarios that pose ethical quandaries to individuals before, having participated in the trolley problem, for instance. My usual stance on these issues is to side with the utilitarian point of view, which often allows me to make quick decisions (and if one of these scenarios ever truly applied to me personally, a quick decision would likely be necessary). Surprisingly, however, there were situations where the ethical system I have in place failed me, and it was a fair coin toss which option I ended up picking.

First, I want to share an overview of my results:

[My Results, Part I]
[My Results, Part II]
[My Results, Part III]

I think my most-saved and most-killed characters can be justified to some extent by the ethical system of utilitarianism: the young are preferred over the middle-aged because of the years left in their lifespans (and thus the potential contribution they can still offer), combined with the idea that an individual whom I assume to be homeless does not offer much value to the collective.

The results that surprised me the most were my scores for species preference and for saving more lives, where I was below average (most people held a more extreme position). I think this is because, in certain situations, I’ve demonstrated a tendency to side with deontological ethics. For instance, whenever there was an option in which the individuals in the car die themselves rather than kill others, I took it. One can argue that this choice is utilitarian in nature because it simplifies dealing with the aftermath (imagine the court cases and legal troubles that would arise over the ethics and legality of self-driving cars killing individuals in the pursuit of passenger safety), but that is a rather weak point in my opinion; the real reason I chose that option is so that the individuals in the car do not end up murdering anyone.

Human beings are selfish, and if the unknown individuals in any of the thirteen scenarios had been replaced with our loved ones, people’s ethical systems would likely break down in order to prioritize those loved ones (my utilitarian perspective would surely no longer be considered). My main takeaway is that following a single ethical system, whether it focuses on the consequences of an action or on whether an action is inherently right or wrong, is something most humans are not capable of (and is simply not reasonable to expect). Developing a moral code that adapts to the situation is what most people end up employing, and it is likely the most practical approach, despite the questionable ethics of changing your morals to match a particular situation.
