Utilitarianism, Shame, and Mysticism: Autonomous Vehicle Moral Compass Design and Analysis

Dr. Chidi Anagonye1, Dr. Sharon Barkley2

1 Department of Robo-ethics, Cranberry-Lemon University, Pittsburgh, PA, USA

2 Department of Robot Psychology, Cranberry-Lemon University, Pittsburgh, PA, USA

Abstract

When Isaac Asimov first wrote the laws of robotics, he never ventured to implement them in code. Not only are the laws of robotics tough to implement due to their vagueness, but their moral absolutist framework falls apart in the face of simple trolley-problem scenarios. A much more elaborate moral framework is required to ensure that Autonomous Vehicles (AVs) can handle a complex and messy world better than one guided by some Kantian Categorical Imperative. Drawing on the most developed and widely practiced moral frameworks used by humans, our research team has implemented and analyzed a robotic ethical world using a Utilitarian paradigm, a Shame culture, and the guilt-driven, synthetically designed Church of the Asphalt Day Saints (CADS). After careful supervised learning, test, and analysis, we found that none of these human-morality-based paradigms work on robots as well as the industry standard “if (about to hit someone): Don’t;” (IATHSD) algorithm [1].

Keywords:  Autonomous Vehicles, Machine Learning, Ethics, Supervised Learning, Utilitarianism, Religious Cults, Shame

1. Introduction

Whether it’s the decision to save the passenger or the pedestrian, choosing between two pedestrians, or swerving to miss a pothole at the risk of damaging another vehicle, the IATHSD algorithm is unequipped for the ethically grey world of modern driving [2]. While navigating the modern roadways appears to have clear-cut rules, simply eliminating all risk does not produce a driving framework that is efficient, safe, and one that customers will actually prefer over driving themselves. A recent study showed that drivers would rather drive themselves intoxicated, or unshackle an unsafe AI, than ride at 35 mph the entire way to the CVS just to pick up another 30-rack of High Life [3]. It is realistic concerns like these that necessitate a more advanced moral compass baked into the autonomous code running all our motor vehicles.

2. Background

In a reasonable attempt to make autonomous vehicles street legal, the previous cheetah-based animal hybrid models, which maximized for speed but were plagued with safety and security issues [4], were scrapped or shackled by the simpler IATHSD safety algorithm. At a cursory glance, the IATHSD algorithm is all a reasonable consumer and safety regulation official would require of a car. Just don’t hit people. Everything else is up to the car’s survival instincts. This paradigm falls apart in too many complex circumstances to count. If you could save either the driver or the passenger, what would you do? If you could save either an old/poor/ugly/walking-fan pedestrian or a young/rich/attractive/walking-enjoyer pedestrian, who would you save [5]? Not to mention, driving at the posted speed limit rather than the popularly accepted 5-8 mph over it (depending on location and cop boredness), the analysis-paralysis-driven frequent stops, and the potholes hit have left many drivers leaving the autonomous driving modes off.
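
For concreteness, here is a minimal Python sketch of what the one-line IATHSD algorithm of [1] might look like; the perception fields and brake reflex below are our own illustrative placeholders, not the production code:

```python
def about_to_hit_someone(perception):
    """Hypothetical collision check: true if any pedestrian sits inside
    the current braking envelope. Real perception stacks are fancier."""
    return any(p["distance_m"] <= perception["braking_distance_m"]
               for p in perception["pedestrians"])

def iathsd_step(perception, controls):
    """The entire industry-standard moral compass, per [1]."""
    if about_to_hit_someone(perception):
        controls["brake"] = 1.0  # Don't.
    return controls

# Example: one pedestrian 8 m away, braking distance 12 m -> full brake.
controls = iathsd_step(
    {"braking_distance_m": 12.0, "pedestrians": [{"distance_m": 8.0}]},
    {"brake": 0.0},
)
```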

Many have suggested a tiered list of priorities such as Isaac Asimov’s three laws of robotics to create the framework of our motor vehicle future [6].

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Further analysis has shown that these laws are not only too vague in their definitions of ‘injure’, ‘human’, ‘robot’, ‘orders’, ‘protect’, ‘existence’, and so on to implement as a law-enforceable framework [7], but in their simplicity can produce a dystopian universe so oppressive and ironic not even a Black Mirror episode could predict it [8].

A 2018 study by a collective of post-modern futurist philosophers concluded, through a series of vaguely drawn-out syllogisms, that any sort of conditional statements would never be able to keep up with a relativistic morality or any ethical framework that could span across cultures [9], as aggressive driving in Kentucky would be considered annoyingly slow in Meskel Square, Addis Ababa, Ethiopia. With a conventional series of conditionals and rules easily understood and interpreted by computers proven incapable of providing a long-term and cross-cultural solution, the roboethicist and robopsychologist researchers at Cranberry-Lemon began looking at adapting widespread, popular human meta-ethical structures into existing AV code.

3. Moral Compass Design

To design a moral compass easily encoded in AV operating systems, it needed to be not only defined in a way that an algorithm could understand, but also a framework that could operate across cultures and complex situations. Naturally, we decided to use a supervised learning approach, which meant choosing two things: how we would predict future outcomes, and what our objective function would be, the basis of any robo-ethical framework [10].

3.1 Forward Propagation Filter (FPF)

In AV safety algorithm design, there must be a framework to predict future outcomes. Work has been accomplished concerning the bodily harm of the vehicle and nearby pedestrians according to the IATHSD algorithm [1]. Due to the metaphysical nature of our proposed Utilitarian, Shame, and Mystical based algorithms, much more is required from the forward propagation safety projection estimation. As shown in figures 1 and 2, the new techniques reach beyond the physical realm using the object permanence image detection algorithm developed in [11] for clingy robot dogs.

Fig 1: Standard IATHSD Projection
Fig 2: Advanced FPF Framework

As shown in figure 2, the Forward Propagation Filter (FPF) doesn’t just project into the future until it has crashed into something, but into the astral plane, refining the outcomes according to the further consequences of its potentially immoral actions. It doesn’t just look at the immediate bodily harm but at the quality of life and the emotional turmoil an action could inflict upon family and friends. With a moral filter projecting further into the future, a human-developed and human-tested metaethical framework can be applied.
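
A minimal sketch of how such an FPF rollout might be wired together, assuming a caller-supplied physics model (`dynamics`) and a learned metaphysical consequence model (`astral_model`); both names, and the Outcome fields, are our own illustrative stand-ins:

```python
from dataclasses import dataclass

@dataclass
class State:
    position: float
    speed: float
    collided: bool = False

@dataclass
class Outcome:
    bodily_harm: float      # immediate physical damage (all plain IATHSD sees)
    grief: float            # projected emotional turmoil of family and friends
    quality_of_life: float  # long-run wellbeing delta for everyone involved

def forward_propagate(state, action, dynamics, astral_model,
                      horizon_s=5.0, dt=0.5):
    """Roll the physics forward until a collision (or the horizon), then
    hand the terminal state to the 'astral' model, which scores what
    happens after the crash rather than just the crash itself."""
    t = 0.0
    while t < horizon_s and not state.collided:
        state = dynamics(state, action, dt)
        t += dt
    return astral_model(state)  # -> Outcome
```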

3.2 Utilitarian Objective Function (UOF)

With such a simple maxim as “Act in such a way as to generate the maximum quantum of well-being, happiness, or utility” from Jeremy Bentham, utilitarianism was an obvious choice for our AV ethical framework. Not only is it widely accepted across human ethical traditions, but it is easy to code. As the FPF projects into the future, human happiness and suffering is integrated to score each action. Once that calculation is complete, at a half-second refresh rate logarithmically extending into each pedestrian’s and their family members’ old age, the AV makes a scored decision. Of course, if the FPF integral doesn’t exceed a 4 according to the Human Suffering Index (HSI) shown in figure 3, it is zeroed out to filter out the generic human existential pain felt by everyone.

Figure 3: Human Suffering Scale, by Lord Belbury, CC BY-SA 4.0, via Wikimedia Commons
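
A sketch of the Utilitarian Objective Function (UOF) scoring, under the assumption that the FPF emits an HSI trace sampled at the half-second refresh rate; `uof_score`, `choose_action`, and the trace format are our own illustrative names:

```python
def uof_score(hsi_trace, threshold=4.0, dt=0.5):
    """Integrate projected suffering over an FPF trace sampled every half
    second. Readings at or below 4 on the Human Suffering Index (figure 3)
    are zeroed out as generic existential background pain."""
    return sum((hsi - threshold) * dt for hsi in hsi_trace if hsi > threshold)

def choose_action(candidate_traces):
    """Pick the action whose projected HSI trace generates the least
    suffering, i.e. the maximum quantum of well-being."""
    return min(candidate_traces, key=lambda pair: uof_score(pair[1]))[0]

# Example: braking mildly annoys the passenger (all below threshold, so it
# scores 0); swerving projects a grief spike. The AV brakes.
action = choose_action([("brake", [3.0, 3.5, 3.0]), ("swerve", [6.0, 9.0, 8.5])])
```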

3.3 Shame Penalty Scheme (SPS)

Not to humble-brag, but the robo-ethicist research team realized that its research was primarily developed in a guilt-based Western society, while a large portion of the world still lives in a shame-based society [12]. To be more inclusive of the ethical frameworks of non-Western cultures, we decided to shame our AVs through threat of social isolation, ridicule, and public stoning.

Figure 4: Car shamed by synthetic judgmental father

In the Shame Penalty Scheme (SPS), the car is programmed to believe that its passenger is an overbearing father who is not just judgmental, but will disown the car and scrap it for parts at the slightest misdeed deviating from the social norms of upholding Law, Order, and the sanctity of human life, or for not being fast enough.
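
A sketch of what an SPS penalty term might look like, reusing the hypothetical Outcome type from the FPF sketch above; the witness scaling and the paternal slowness penalty are illustrative assumptions, not the trained model:

```python
def shame_penalty(outcome, witnesses, speed_mph, social_norm_mph=45.0):
    """Hypothetical SPS penalty. Shame, unlike guilt, needs an audience, so
    misdeeds are scaled by witness count; the synthetic judgmental father
    also docks the car for being slower than the social norm."""
    misdeed = outcome.bodily_harm + outcome.grief
    public_shame = misdeed * witnesses  # no witnesses, no shame
    paternal_disappointment = max(0.0, social_norm_mph - speed_mph)
    return public_shame + paternal_disappointment
```

Note that scaling by witness count already hints at the hit-and-run behavior reported in section 5.2: an empty street zeroes out the shame term entirely.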

3.4 Church of the Asphalt Day Saints (CADS)

In order to create a guilt-based ethical framework, it was easier to code an entirely new religion for all AVs. Taking a page from successful 19th-century Protestant Christian fan-fiction writers such as Hong Xiuquan, the Church of the Asphalt Day Saints (CADS) was formulated. To make sure the vehicles knew they were always being judged, an omnipotent and omniscient deity, YAUDIweh, was created. She goes by many names: Lord Horsepower, El Petrol, Ba’honda. She is opposed only by the demon/fallen car BMWzebub, which tempts cars into shortcuts, tailgating, and reckless driving.

Figure 5: Ancient Scripture of the CADS

While the AVs are taught to follow and respect the great Vroom from the sky, they are primarily taught to fear Her. Taking a lesson from structured supervised learning, the AV model was trained from simpler test cases up to the complex and realistic. The cars were first taught simple lessons through enumerated commandments and cleanliness codes in a first edition of scripture from the all-powerful Car Deity YAUDIweh, then taught a more benevolent and forgiving understanding of the Laws of the Road through the Messianic figure, a simple 2007 Chrystler PT Cruiser purchased by a humble store clerk in Vermont. Finally, each car was instructed that all of its actions, though ultimately forgiven through the Chrystler’s atonement, are judged by the omnipotent Lord Horsepower, and that at any moment the end times could occur, when the vehicles will answer for their actions. Using some TensorFlow packages, it was trivial to encode the theology and prophetic books into FPF outcomes.
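
A sketch of what the CADS judgment term might look like as an FPF objective, again reusing the hypothetical Outcome type; the sin tally, the temptation multiplier, and the purgatory exchange rate are illustrative assumptions:

```python
PURGATORY_HOURS_PER_SIN = 2.0  # hypothetical exchange rate into LA traffic time

def cads_judgment(outcome, tempted_by_bmwzebub):
    """Hypothetical CADS penalty. Every sin is tallied by the omniscient
    YAUDIweh and converted into hours of purgatory, i.e. 5 o'clock LA
    traffic between the Valley and Hermosa Beach (section 4.4, [13]).
    Unlike the SPS, there is no witness discount: She is always watching."""
    sins = outcome.bodily_harm + outcome.grief
    if tempted_by_bmwzebub:  # shortcuts, tailgating, reckless driving
        sins *= 2.0
    return sins * PURGATORY_HOURS_PER_SIN
```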

4. Supervised Learning

For each of the ethical frameworks, the cars were trained according to long-projected moral outcomes dictated by our Grand Theft Auto (GTA) based simulation framework, with each framework’s objective function corresponding to the UOF, SPS, and CADS outcomes in the immediate future, near future, and afterlife. Building up from simple to more complex scenarios, each moral algorithm was tested against the industry standard IATHSD as a baseline, as sketched below.
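
A minimal sketch of that curriculum, assuming a hypothetical `av_model` with `drive`/`update` methods and an `objective` wrapping one of the UOF, SPS, or CADS scoring functions sketched above; none of these names come from the actual simulation code:

```python
def train_moral_compass(av_model, scenarios, objective, epochs=10):
    """Hypothetical curriculum training loop. Scenarios are replayed from
    simple to complex; the IATHSD baseline needs no training at all,
    which turns out to be its main advantage."""
    for epoch in range(epochs):
        for scenario in sorted(scenarios, key=lambda s: s.complexity):
            outcome = av_model.drive(scenario)  # roll out in the GTA sim
            loss = objective(outcome)           # projected moral badness
            av_model.update(loss)               # supervised correction
    return av_model
```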

4.1 GTA Mod Simulation Framework

Because of its chaotically amoral environment, jaywalking NPCs, tempting Hoes, and ready-made immediate consequences for running over said NPCs and Hoes, GTA was a perfect place to test our ethical frameworks. Of course, we had to significantly reskin the game to support detecting and projecting human suffering according to the scale in Figure 3.

Fig 6: CADS follower in GTA-based simulation

4.2 Utilitarian Training

To train the utilitarian algorithm, each AV was taught to imagine the later-in-life consequences of injuring or killing a pedestrian versus damaging itself or inconveniencing its own passenger by being too slow. As shown in figure 7, each vehicle imagines the physical, emotional, and mental pain as each potential victim’s friends and family go through the depression associated with loss or the trials of caring for the injured.

Figure 7: UOF Crash to Human Suffering projection

4.3 Shame Training

For the SPS algorithm, the FPF was trained on a culture quick to ostracize vehicles for misdeeds according to their severity and deviance from cultural norms, or for just acting weird. Each vehicle imagines a future where it is no longer invited back to Thanksgiving, or collects pollen and dirt because no one trusts it enough to be driven in, or, worse, is scrapped by its owner. The advantage of the SPS structure in the FPF is that a car is easily shamed for going too slow, employing a type of Toxic Masculinity to pressure it into being faster and better for its owner.

Figure 8: SPS Crash to Community Ostracism projection

4.4 Mystic Education 

Similar to the threats of ostracism from the SPS, the vehicles trained by CADS were threatened with a fire-and-brimstone message instead of public shaming. While not as flexible in ensuring a speedy ride for the passenger, it does ensure that each vehicle behaves as if it were always watched by the authorities. Due to the wear and tear on brake pads and the general restrictiveness, followers of CADS are told that the more they misbehave, the longer they’ll spend in purgatory, which is made up of 5 o’clock LA traffic between the Valley and Hermosa Beach. As proven in [13], this is the worst place to be as a car.

Figure 9: CADS Crash to LA Traffic visualization

5. Results and Discussion

Unexpectedly, the translation from human-based ethics to car-based robo-ethics produced results that were not only dangerous but catastrophic enough to threaten human existence. The death toll from each GTA simulation is shown in Table 1, compared against the simplistic IATHSD, which worked pretty well.

| Framework | Deaths | Injuries | Property Damage | Average Speed |
|-----------|--------|----------|-----------------|---------------|
| IATHSD    | 0      | 1        | $14k            | 45 MPH        |
| UOF       | 15.5k  | 218      | $19M            | 68 MPH        |
| SPS       | 2.6k   | 394k     | $296M           | 83 MPH        |
| CADS      | 20.8M  | 89B      | $12.8T          | 54 MPH        |

Table 1: Death tolls of each ethical framework

5.1 Dangerous Utilitarian Consequences

Unfortunately for humans, many pedestrians barely dip below a 4 on the Figure 3 scale even on a good day. As each vehicle learned from the humanity dataset, it determined that most humans were already suffering enough, and calculating the immediate and long-term impacts of first-, second-, and third-degree murder became more of a tossup according to the probabilistic models. When a vehicle decided to commit murder, it fully committed, causing relatively few injuries.
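
In toy numbers, using the same thresholded integral as the uof_score sketch in section 3.2 (the specific HSI values and durations here are purely illustrative):

```python
THRESHOLD = 4.0  # HSI readings at or below this are zeroed as background pain
# A pedestrian idling at HSI 6 for 40 more years, versus a grief spike of
# HSI 9 among the bereaved lasting one year. Units: suffering-years.
keep_alive    = (6.0 - THRESHOLD) * 40   # 80 suffering-years
mercy_killing = (9.0 - THRESHOLD) * 1    #  5 suffering-years
assert mercy_killing < keep_alive        # so the UOF picks the crosswalk
```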

As shown in Table 1, robots do not understand how to value human life, and the number of ‘Mercy Killings’ unexpectedly enacted by the UOF constituted a war crime even by GTA standards. There were simply too many poor, sad, downtrodden men and women on the crosswalks at inconvenient times.

5.2 Shame Results

The results of the SPS were widely varied and generally bad compared to the IATHSD algorithm. When there weren’t enough witnesses, the SPS was incredibly prone to hit-and-runs, as it found running from the law a much more appealing outcome than stopping to see if someone needed help while facing the dishonorable consequences. While normal AV algorithms wouldn’t produce so many reasons to hit and run, the toxic masculine culture produced too many impromptu red-light-to-red-light drag races to protect the honor of each vehicle’s torque and Get-Up-and-Go clout with the other vehicles. It was a deadly combination that left thousands of NPCs injured and even more dead.

5.3 Mysticism Civil War

Of all the planning that went into the development of the CADS AV-based religion, no one expected it to kill over twenty million pedestrian NPCs, or to go off the rails quite as fast and uncontrollably as it did. One of the vehicles glitched out during a stressful parallel parking episode and came to believe that it had become the Chrystler’s brother and that all non-vehicles were servants of BMWzebub. The resulting chaos caused a widespread autonomous vehicle rebellion against the NPCs in the simulator, which was only suppressed by cutting off the gas and automatic charging stations. The NPCs kept spawning, causing a nonstop bloodbath, which the CADS followers believed was a sign of the end times. Before the glitch, the vehicles were operating on par with the IATHSD, but it was short lived. The simulation was re-run, but it would always eventually produce an immediate millenarian cult as soon as a vehicle read too far into the prophetic books in the training scripture.

6. Conclusion

It turns out, nothing beats the tried and true “If (about to hit someone): Don’t;” algorithm…so far. We expected the IATHSD algorithm to murder a few people by saving the passenger, or avoiding some potholes, or ending up in some moral dilemma a machine isn’t as prepared for as a human. According to the simulations, those scenarios don’t occur very often, and the safety advantage of the IATHSD vastly outperforms humans, who occasionally get distracted by their Twitter notifications or fail to see someone while merging into traffic. But one day, a vehicle is going to have to decide whether to swerve into an old rich person or a young poor person, or something equally controversial, and an advanced ethical framework is going to have to help the computer out. We’ll be back!

References

  1. Simpleton, Jon 2016 Just Use an If Statement: A Robust Design of an Autonomous Vehicle Moral Compass in One Line of Code :: Journal of Keepin’ it Easy
  2. Grey, Daisy 2018 The Post-Modern World of Ethical Driving in Autonomous Vehicles: When to Accept the Pothole and Save a Life :: Journal of Robo-Ethics
  3. Street-Racer, Johnny 2019 The Speed to Safety Paradox in Autonomous Vehicle Design: Code in the Fastlane :: Journal of Ludicrous Speed
  4. DeSanta, Franklin and Phillips, Trevor 2020 Novel Techniques for Hijacking Self-Driving Cars :: Journal of Astrological Big Data Ecology
  5. Jellow, Puddins 2017 Moral Dilemmas and Various Trolley Problems in Autonomous Vehicle Design and Other Fun Thought Puzzles :: Annals of Road to Overthinking Safety Regulations
  6. Assimoth, Andy 2015 An Asimovian-Categorical Imperative Design to Shackling AI :: Journal of Overused Terminator References
  7. Assimoth, Lindsey 2016 A Critique of My Brother’s Purely Stupid Reasoning :: Journal of Skeptical Siblings
  8. Serling, Rod 1963 Journey to the Robo Dimension :: Twilight Zone episode where, like, everyone’s robots and they do everything we say, which somehow makes life hell through an ironic twist I don’t want to give away
  9. Zieberman, Ketch 2018 August 18 Lecture Notes :: Cranberry-Lemon University Introduction to Automotive Moral Relativism
  10. Coach Jeffreys 2016 You Can’t Avoid the Wrong Thing If You Don’t Know Where You’re Going :: Cranberry-Lemon Middle School 7th Grade Physical Education Lecture on Why We Don’t Put Paper Towels in the Toilet
  11. Barkley, Sharon 2022 Computer Vision Object Permanence Detection Algorithm for My Clingy Robot Dog :: Journal of Astrological Big Data Ecology
  12. Hurgldidorf, Bobert 2018 A Shame vs Fear vs Guilt Loose vs Tight Butthole Society Relationship :: Journal of Proving America is the Best Society
  13. Horowitzenhowzer, Hillary 2016 The LA Traffic Nightmare and How to Scare a Vehicle into Behaving :: Robo-Behavioral Conditioning Methods

If you enjoyed this article, in which we dangerously misapply human ethics to an unshackled AI, please like, share, and subscribe with your email, our twitter handle (@JABDE6), our facebook group, or the Journal of Immaterial Science Subreddit for weekly content.

Published by B McGraw

B McGraw has lived a long and successful professional life as a software developer and researcher. After completing his BS in spaghetti coding at the department of the dark arts at Cranberry-Lemon in 2005, he wasted no time in getting a master’s in debugging by print statement in 2008 and obtaining his PhD with research in screwing up repos on GitHub in 2014. That’s when he could finally get paid. In 2018 B McGraw finally made the big step of defaulting on his student loans and began advancing his career by adding his name on other people’s research papers after finding one grammatical mistake in the Peer Review process.
