Moral dilemma and technology

Moral dilemmas used to be safely pinned to philosophy for more than 6,000 years. What do I mean by the expression »safely pinned«? As an issue of philosophy, and more specifically of moral philosophy, a moral dilemma stayed valid and crucial precisely because it was never resolved. For if it were resolved, it would cease to persist as a topic of philosophy. Philosophy's primary value is to keep us engaged with questions that each of us has to decide upon in life, in moments of truth. The narratives that we develop in dialogue with philosophy help us choose better in those moments.
With “thinking machines”, at least one moral dilemma has escaped from the safe domain of philosophy. At the moment, every producer of self-driving cars faces the moral question that underlies almost every moral dilemma in general:
Would you decide to kill one person (a child, for instance) for sure, if the only other option were to most probably (but not necessarily) kill dozens?
The shift of this question from personal experience and philosophy to technology places a solution algorithm in a machine as the agent, so there is no place for questioning any more. A human driving a self-driving car ceases to be an agent concerning the crucial moral dilemma. What does that bring?
Morality and moments of truth
First of all, how does human agency happen? With a little help from introspection, but with much more significant help from neuroscience, cognitive philosophy, evolutionary theory and other disciplines that investigate human behaviour in relation to neural activity, we are now pretty sure that moral dilemmas are solved a priori, pre-rationally. In moments of truth, we as agents act driven solely by our limbic system. We decide based on the vast residuum of 4 billion years of evolution on Earth, on our own experiences and, last but not least, on those cognitive solutions that were repeated often enough to be imprinted in our limbic system. That is why we can procrastinate up until the last moment, when we somehow trust that our intuition will “decide”. We »know« that our ratio is much too slow for making decisions in moments of truth.
Please note that I use the concept of moments of truth as I use it in my branding theory, for a reason. For it is the subconscious, the operation of the limbic system, that takes over agency in all moments of truth. Facing a critical situation while driving a car is no different from deciding which brand to buy in a supermarket.
With self-driving cars, a strange shift happens. A rational being constructs the agency of the car. The car has to »decide« according to a pre-designed algorithm. No place for the limbic system any more. No place for something so crucial to evolution, not only for humans but for the better part of the animal kingdom.
Even without knowing anything about moral algorithms, it should already be clear that they are impossible. Or better: until machines develop a subconscious and all the procedures that form the limbic system, until they engage in moral philosophy, they will be unable to act as moral agents, or even to pretend to act as moral agents.
But let us check how some producers of self-driving cars are addressing the moral dilemma.
Thinking driving machines
Mercedes has already decided that the lives of its drivers and passengers will be at the top of the priority list, which of course makes sense. But it makes sense only for situations in which “the machine” can weigh the safety of its passengers against the safety of everyone else. Such a decision is futile in cases where the situation clearly poses no threat to the passengers but only to two pedestrians, or to one pedestrian and one cyclist, or to one driver in another car and one pedestrian, and so on.
What is quite clear from the present state of algorithm development is that the algorithm would have to cover all possible situations. “The machine” would have to decide upon a weighted evaluation of an enormously large number of different situations. Should it kill a 50-year-old man before a 50-year-old woman? Is an 80-year-old woman worth less than a 20-year-old student? What if that student is seriously ill and will die within the next six months? Is an 11-year-old child worth more or less than a 12-year-old child? What about a 30-year-old couple in comparison to an 18-year-old junkie?
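To make the point tangible, here is a minimal sketch of what such a consciously devised weighting would have to look like. Everything in it (the Person class, the roles, the numeric weights) is an invented placeholder, not a description of any real manufacturer's system:

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    role: str   # "passenger", "pedestrian", "cyclist", ...

def moral_weight(person: Person) -> float:
    """Assign a numeric 'value' to a life.

    Every rule placed here is a human moral judgment smuggled into code;
    the concrete number below is a deliberately arbitrary placeholder."""
    weight = 1.0
    if person.role == "passenger":
        weight *= 1.5            # e.g. a Mercedes-style passenger priority
    return weight

def choose_outcome(outcomes: list) -> int:
    """Pick the outcome whose victims carry the lowest total weight."""
    costs = [sum(moral_weight(p) for p in victims) for victims in outcomes]
    return costs.index(min(costs))

# Two hypothetical collision outcomes: harm one child, or harm two adults.
outcome_a = [Person(age=8, role="pedestrian")]
outcome_b = [Person(age=50, role="pedestrian"), Person(age=50, role="cyclist")]
print(choose_outcome([outcome_a, outcome_b]))   # 0: the algorithm has 'decided'
```

However the numbers are chosen, every question from the paragraph above reappears as a line inside moral_weight; the algorithm merely executes a moral position that somebody has already taken.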
And there are other dilemmas, like the one recently staged on German TV. Is Lars Koch guilty (or not) of killing 164 passengers in an aircraft when the only other option was to let 70,000 people in the Allianz Arena, watching a football match between Germany and England, be killed? Germany decided that he was not guilty. But this dilemma was quite easy, since the passengers would have died anyway.
What is quite apparent is that a moral dilemma cannot be solved by technology as long as the technology runs on consciously devised algorithms. Even if such machines can learn, this does not resolve the issue, for learning would only enlarge the base of possible situations; decisions would still rest on weights derived from one or another solution of the moral dilemma devised by humans. Even if the weights were self-adjusted, some further code, by which the weights get adjusted, would have to be consciously developed beforehand and implanted in such machines.
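A toy example can make that visible. The reward function and the update rule below are illustrative assumptions, not a real driving system; the point is only that both must be written by humans before any »learning« can begin:

```python
def run_episode(weight: float) -> float:
    """Placeholder for driving experience. The reward returned here encodes
    a human moral judgment; the machine never produces its own yardstick."""
    target = 0.7                    # the 'right' weight, as chosen by humans
    return -abs(weight - target)    # higher reward the closer we get to it

def self_adjust(weight: float, steps: int = 100, lr: float = 0.05) -> float:
    """The machine 'learns': it tunes a number, but only inside the moral
    frame that the human-written reward above has fixed in advance."""
    for _ in range(steps):
        # finite-difference estimate of which direction improves the reward
        gradient = (run_episode(weight + 1e-3) - run_episode(weight - 1e-3)) / 2e-3
        weight += lr * gradient
    return weight

print(round(self_adjust(0.0), 2))   # converges to the human-chosen 0.7
```

No matter how long self_adjust runs, the moral content never originates in the machine; it sits, consciously implanted beforehand, in run_episode.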
Moral machines?
To overcome this gap, the “thinking machine”, the “rational machine”, would have to evolve into a moral machine. That means such a machine would have to be able not only to understand a moral dilemma but to feel it. And we all know what makes us humans really feel a moral dilemma: the fear of death. So a machine fit to run a self-driving car would not only have to be able to die, but would have to be conscious of that possibility and afraid of it. Such a machine would be a negative zombie: it would not look like a human, but it would feel like a human.
It is thus quite clear that such machines could evolve only if matter (a machine) can really become alive and conscious through emergence, the underlying principle of evolution as we understand it at the present moment.
Should that not be possible, then we have an unsolvable problem not only in the development of self-driving cars but also in the present concept of evolution and in current theories of how, for instance, consciousness emerges from brain activity. For if the emergence of consciousness and life rests on sufficient complexity alone (the materialist solution of Darwin, Dawkins, Dennett and others), then such thinking and feeling machines are possible. If not, then not only the car industry but also materialism and evolutionary theory face an unsolvable problem. Then only a God could implant the solution into a machine, as He did according to creationist cosmogony.