Self-Driving Cars And Deciding Who Lives And Dies - Sacrificing Your Family For The "Greater Good"

Authored by Alex Thomas via,

As the establishment continues to declare that the era of “self-driving” cars is upon us, many Americans have been left wondering what the privacy implications for such a tremendous change in society will end up being.

Now, with the very real possibility that, in the event of an accident, these cars would literally make the decision between who dies and who lives, Americans have even more to worry about when it comes to handing over control of their vehicle to a supercomputer.

According to multiple reports, the cars themselves are being designed to make so-called “moral” decisions; in other words, the programming could, for example, allow a car full of people to crash rather than hit a school bus.

USA Today reports:

Consider this hypothetical:


It’s a bright, sunny day and you’re alone in your spanking new self-driving vehicle, sprinting along the two-lane Tunnel of Trees on M-119 high above Lake Michigan north of Harbor Springs. You’re sitting back, enjoying the view. You’re looking out through the trees, trying to get a glimpse of the crystal blue water below you, moving along at the 45-mile-an-hour speed limit.


As you approach a rise in the road, heading south, a school bus appears, driving north, one driven by a human, and it veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.


Does the car:

A. Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?

B. Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you, but sending the bus and its driver swerving into the trees, killing her and some of the children on board?

C. Hit the bus, possibly killing you as well as the driver and kids on the bus?

The article goes on to say that the above question is one that is no longer theoretical as upwards of $80 billion have been invested in the sector with companies such as Google, Uber, and Tesla all racing to get their self-driving cars onto a road near you.
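To make the dilemma concrete: no manufacturer has published such code, but the hypothetical above ultimately reduces to minimizing a programmer-chosen cost function over the three options. The sketch below is purely illustrative; the option names and casualty numbers are assumptions, not anything a real car could know.

```python
# Hypothetical sketch only: shows how a "moral" choice collapses into
# picking the option with the lowest programmer-assigned expected cost.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    expected_deaths: float  # assumed estimate; no real sensor provides this


def choose(options):
    """Return the option with the lowest expected casualty count."""
    return min(options, key=lambda o: o.expected_deaths)


options = [
    Option("A: swerve into trees", 1.0),         # possibly kills the occupant
    Option("B: evade into oncoming lane", 3.0),  # bus driver and some children
    Option("C: hit the bus", 5.0),               # occupant, driver, and kids
]

print(choose(options).name)  # → A: swerve into trees
```

The point of the sketch is that the "moral" outcome is entirely determined by the numbers the programmer (or regulator) puts in, which is exactly what the article's critics object to.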

Interestingly, the USA Today report, as well as those quoted within it, seems to outwardly worry that the questions surrounding self-driving cars will cause Americans to shun them and continue to drive cars operated by actual humans who can choose whether or not they wish to save their own family or someone else’s if an accident were to take place. Clearly there is an agenda at play here.

The article continued:

Whether the technology in self-driving cars is superhuman or not, there is evidence that people are worried about the choices self-driving cars will be programmed to take.




Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “If it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”


But what if the smaller thing is a child?


How that question gets answered may be important to the development and acceptance of self-driving cars.


Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

All in all, the entire report is a must-read, especially when you consider that we are now directly talking about allowing either private companies or the government to decide whether we or our families are worthy of being saved in a car accident.

This truly is a scary situation.


MagicHandPuppet . . . _ _ _ . . . Sun, 11/26/2017 - 20:43 Permalink

On the trajectory we're on, there will come a time when the department of transportation network will know exactly who is onboard every vehicle in your vicinity, each passenger's age, medical condition, medical history, life expectancy (based on their DNA and relevant stats), etc., in order to make the quick calculation as to how much productive life can be saved by choosing which property... ahem, I mean humans onboard which vehicle to sacrifice.

Obviously, as with ObamaCare, congress critters and their kin, along with their entourage and high-stakes campaign contributors, will be opted out of the sacrificial equations since their health-defect demerits won't be in the system.

In reply to by . . . _ _ _ . . .

Slomotrainwreck the artist Mon, 11/27/2017 - 08:22 Permalink

In all situations, the safety of the passengers of the autonomous car should have the highest priority, regardless. If an algorithm is secure enough, then the computer should be able to decide if the impending danger was caused by a wrong decision on its part, then stay intact so the decisions can be reviewed and improved. Buying a self-driving car is like purchasing better insurance.

Of course, the story is pure bullshit, and who cares when people die in a car crash after it's over and done with. Just call it fate and get on with your so-called life. There will be daily program patches. Will the coder be responsible for outcomes?

In reply to by the artist

. . . _ _ _ . . . Stuck on Zero Mon, 11/27/2017 - 00:17 Permalink

It's a good point. The difference is that these people all have their own self-interest in mind and will do what they can to avoid accidents. When we talk about factoring the death of innocents into algos, those deaths are chosen to happen. Distinctions without a difference, you may say, but I would rather not turn over my safety to an uncaring machine. We are more than numbers. I don't even let my browser remember my passwords. We are taking humanity out of the equation. Machines are supposed to work for us.

Of course, all air travel has been algo-driven for quite some time. Getting into a car which is self-driving is no different than sharing a highway with cars that are self-driven. As somebody has mentioned, defensive driving is the best preventive measure. I will take responsibility for my own life, thank you very much. Having said that, I often take public transport, where someone else is driving.

Ultimately, the choice to board a bus, a plane, or a car being driven by an epileptic, or taking the wheel oneself, is a personal one. Perhaps if reserved lanes are set aside for self-driving vehicles, it would solve this dilemma. Being forced to put ourselves at risk by making all lanes open to this new tech is a different matter. I guess my point is that I should get to choose.

In reply to by Stuck on Zero

Jeffersonian Liberal . . . _ _ _ . . . Mon, 11/27/2017 - 07:42 Permalink

It wouldn't. It will be programmed with assumptions.

Some other assumptions will be the cost of one car versus the cost of another and that the life of the owner/occupant of the more expensive car is more valuable (as an earner, productive member of the economy, and taxpayer) than the other.

Also, it won't be long before age, financial records, and health-records are tied into the on-board computers so that those parameters will help the computer to "decide" what moves to make in order to best determine who should live and who should die in any given pending-accident situation.

Yeah, this will be programmed by some millennial who is following the regulations that come down from Big Brother.

In reply to by . . . _ _ _ . . .

TheRideNeverEnds SWRichmond Mon, 11/27/2017 - 14:13 Permalink

You’ll never get to them.

To the original question: Google will use its sophisticated algorithms to instantaneously analyze each occupant's worth using data collected from their personal tracking devices and internet history according to the current victim pyramid and progressive social stack. Needless to say, if your family is white they are at the bottom and will be sacrificed for the greater good of “””diversity”””. A straight white male will be sacrificed in all cases.

In reply to by SWRichmond

chubbar tmosley Sun, 11/26/2017 - 19:39 Permalink

First, the fucking car computer doesn't know it's a bus full of kids. Second, it doesn't know what action the human driver will take. The easy solution is to program the computer to take the action most likely to result in the least injury to its occupants, period.

In reply to by tmosley
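The occupant-first policy chubbar describes could be sketched as follows. This is a hypothetical illustration, not any vendor's actual code: the car ignores who is in other vehicles (which, as the comment notes, it cannot know) and simply ranks its own candidate maneuvers by estimated risk to its own occupants. The maneuver names and risk numbers are assumed inputs; producing them from sensors and a physics model is the genuinely hard part.

```python
# Hypothetical sketch of an occupant-first evasion policy: rank candidate
# maneuvers solely by the estimated injury risk to the car's own occupants.

def safest_for_occupants(maneuvers):
    """maneuvers: list of (name, occupant_injury_risk) pairs, risk in [0, 1].

    Returns the maneuver with the lowest risk to the occupants. The risk
    values are assumed to come from the car's sensor/physics pipeline.
    """
    return min(maneuvers, key=lambda m: m[1])


candidates = [
    ("brake hard in lane", 0.30),
    ("swerve into trees", 0.70),
    ("swerve into oncoming lane", 0.45),
]

print(safest_for_occupants(candidates)[0])  # → brake hard in lane
```

Note the design consequence: under this rule the car never weighs harm to others at all, which is precisely the trade-off the surrounding thread is arguing about.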

nmewn WTFRLY Sun, 11/26/2017 - 20:10 Permalink

Not until the auto maker pays my insurance. I mean, if I have control taken away, I can't be charged with an at-fault accident and I certainly can't be sued for the car's actions while I'm simply its passenger.

/////

Well just damn, I've done it again by posing moral, ethical & legal dilemmas by putting the cart before the horsepower. Please, carry on ;-)

In reply to by WTFRLY

Shemp 4 Victory nmewn Sun, 11/26/2017 - 21:19 Permalink


Not until the auto maker pays my insurance. I mean, if I have control taken away, I can't be charged with an at-fault accident and I certainly can't be sued for the car's actions while I'm simply its passenger.

Well shitmanfuck, then who needs car insurance?! (Except, of course, the auto maker.) I mean, you don't need bus insurance to ride on a bus, right?

In reply to by nmewn

Jeffersonian Liberal nmewn Mon, 11/27/2017 - 07:47 Permalink

Come on, you know how the system works.

Your premiums will rise if you do not have an auto-drive vehicle. "They" will determine that the auto-drive is the safest system and not to have one makes you a greater risk.

This is the same reason EHR software (especially the software that practices use to attest for Meaningful Use kickbacks from the State) is becoming increasingly intrusive into our private lives. The plan is to ensure the government has all the variables it needs to determine who gets healthcare and who does not after it takes healthcare over completely.

In reply to by nmewn

tmosley chubbar Sun, 11/26/2017 - 19:48 Permalink

The easy solution is the car just hits the brakes with superhuman reflexes, which is why there are almost no examples of accidents caused by autonomous cars. The only one I can think of is where the car ran under the 18-wheeler that was crossing the road because it couldn't tell the silver color of the trailer from the grey sky, and Musk cheaped out on putting secondary LIDAR on the roof of the car to detect low-clearance obstacles (if the trailer had extended further down, the bumper LIDAR would have seen it and stopped the car).

If the bus driver causes an accident, that is on the bus driver, not the other car, whether it is driven by a human or an autonomous system.

These "theoreticals" are fucking retarded and will almost certainly NEVER happen. Meanwhile, three fucking people died in my town yesterday because they were shitty drivers.

In reply to by chubbar

CNONC tmosley Sun, 11/26/2017 - 20:32 Permalink

These "theoreticals" will happen every day. In a big, complex world, one-in-a-million chances are everyday events. There will not, in the foreseeable future, be a time when a large majority of cars will be self-driving. The ubiquitous white service van and the contractor's pickup, those things which transport both a man and his tools, are a large percentage of the vehicles on the road, and will continue to be. Even if the technology is perfect, accidents will still happen, and decisions of this sort must be made.

Overreliance on technology is as foolish as Luddism. Technology can never be relied upon completely. Human intervention will always be required at some point in the operation of all automated systems, no matter how sophisticated. Airlines are already facing this. The manufacturers have attempted to design human error out of the system. The consequence has been the deterioration of pilot skills and an increase in accidents caused by pilots being unprepared to take control when automated systems failed or didn't react as expected.

Faith in AI and the advance of technology to replace the human is misplaced. The proper role of automation is to enhance the human ability to monitor and control the power and processes we use, never to replace us completely. We seek to control these powers and processes to achieve human ends. It is ultimately up to a human to decide if the machine is performing the task consistent with these goals. Responsibility cannot be delegated to the machine. It must reside in the human, who must, therefore, be in control of the process. Authority is inseparable from responsibility. There is nothing new in this. Automation is not new. It is, and always has been, irresponsible and unethical to design a machine which removes the human from control.

In reply to by tmosley

tmosley CNONC Sun, 11/26/2017 - 22:51 Permalink

Ok, you have two choices.

First, you can let the status quo continue, and have retards killing people with cars continue to be such a normal event that it's not even reported on the news in big cities unless it is a major accident.

Or second, you can have autonomous cars with superhuman reaction times which are so rarely in accidents that it makes the NATIONAL FUCKING NEWS EVERY TIME IT HAPPENS.

Thank god it isn't up to the fucking peanut gallery.

In reply to by CNONC

CNONC tmosley Mon, 11/27/2017 - 12:56 Permalink

Error of false choices. It is not a matter of either/or. As I said, the future will be one of a mix of autonomous, semi-autonomous, and human-controlled vehicles intermingled. Already, cars with autonomous features such as automatic braking are in use. We are not seeing a large reduction in accidents. What we are seeing is an increase in cars with autonomous features being extensively damaged by being rear-ended. Some evidence exists (anecdotal evidence from body shop owners; not scientific, but it should be considered) that accident severity may actually be increasing.

I reiterate that it is unethical to design systems which reduce human control when the consequences of that reduced control are ambiguous or unknown. It is not acceptable or ethical to design systems which transfer risk from one party to another. It is not sufficient to simply reduce overall risk. I may NOT increase the risks to another party, no matter how much I reduce total risk. This is the problem with autonomous cars. It is not a technological problem, but one of ethics.

Technology serves human ends. It can never supplant the human, as the human defines the purpose of technology. It is, therefore, up to the peanut gallery, unless you propose to force the acceptance of technology on your fellow citizens which explicitly increases the risks, in absolute terms, of some people in favor of a reduction in risk, in both absolute and relative terms, of others. Given the nature of the costs of technology, that necessarily means an uncompensated transfer of risk from the wealthy to the less wealthy. That is not exactly a position consistent with the advance of individual liberty, which you often seem to advocate. It is, instead, a position which perfectly matches that of the elites who wish to subjugate the rest of us.

In reply to by tmosley

Troy Ounce chubbar Sun, 11/26/2017 - 19:58 Permalink

What if... what if... the software must decide between a tree and Hillary Clinton in a wheelchair, pushed by Bill?

Or will we see a facial recognition algorithm together with GPS route forecasts for (families of) bankers, corporatists and politicians, excluding them from ever being hit?

Can I buy into such software? By how much would that decrease my life insurance premium?

Great questions for the positive thinkers among Zero Hedgers.

In reply to by chubbar

OverTheHedge Troy Ounce Sun, 11/26/2017 - 23:58 Permalink

Could you not personalise the list? Have one of those little "move up", "move down" buttons, and create your own preference list? Obviously "Me" goes at the top, followed by children, cats, little old ladies, etc. At the bottom, you would put "Audi drivers", Lada, Skoda and Trabant, caravans, hedgehogs.

Would there be an "Actively attack" option? Have fun filling that list in.

In reply to by Troy Ounce