Las Vegas' New Self-Driving Shuttle Involved In Accident On Its First Day

That didn’t take long.

Two days after Alphabet’s Waymo subsidiary announced that it would begin testing a fully autonomous taxi service on the streets of suburban Phoenix, Ariz. - prompting warnings from safety groups who argue that Waymo’s own data show its cars are not yet ready to operate safely on public roads - a self-driving shuttle service being tested in Las Vegas, Nev., created a local controversy when one of its shuttles collided with a human-driven truck. What's more, the accident occurred on the project's first day in operation.

As the Verge notes, within an hour of beginning its expanded operation today - following a two-week pilot test back in January - the shuttle collided with a large delivery truck whose human driver pulled out of a loading bay into the street. Luckily, all eight people aboard the shuttle were wearing their seatbelts.

However, a spokesperson for AAA confirmed that the accident was actually the truck driver’s fault. AAA is working with the city of Las Vegas and Keolis - the private French transportation company responsible for testing the driverless shuttles - to sponsor the program and survey rider attitudes toward autonomous vehicles. The shuttles offer free rides on a 0.6-mile loop around downtown Las Vegas.

Luckily, only the front bumper of the shuttle was damaged, and none of the eight passengers or the truck driver were injured.

A representative of the Las Vegas city government provided more details about the incident in a Tumblr post published by the city:

UPDATE: Minor incident downtown Wednesday afternoon


The autonomous shuttle was testing today when it was grazed by a delivery truck downtown. The shuttle did what it was supposed to do, in that its sensors registered the truck and the shuttle stopped to avoid the accident. Unfortunately, the delivery truck did not stop and grazed the front fender of the shuttle. Had the truck had the same sensing equipment that the shuttle has, the accident would have been avoided. Testing of the shuttle will continue during the 12-month pilot in the downtown Innovation District. The shuttle will remain out of service for the rest of the day. The driver of the truck was cited by Metro.

The shuttle.


AAA, in partnership with Keolis, just brought the future of transportation to America, and now the century-old auto club wants to hear from you.


AAA Northern California, Nevada & Utah (AAA) is sponsoring the nation’s first self-driving shuttle pilot project geared specifically toward the public. Over the course of a year, the self-driving shuttle aims to provide a quarter-million residents and visitors of Las Vegas with first-hand experience using autonomous vehicle (AV) technology, exposing most riders to the technology for the first time. This pilot builds on Keolis’ limited shuttle launch in downtown Las Vegas in early 2017; today’s launch marks the first self-driving vehicle to be fully integrated with a city’s traffic infrastructure.


In addition to studying how the shuttle interacts in a live traffic environment in downtown Las Vegas’ busy Innovation District, AAA will survey riders on their experience in order to understand why a large percentage of consumers remain wary of driverless technology, and whether a personal experience changes their perception. AAA partnered with the city of Las Vegas, the Regional Transportation Commission of Southern Nevada (RTC) and Keolis North America (Keolis), which will operate and maintain the NAVYA Arma fully electric shuttle.


The shuttle, manufactured by NAVYA, comes equipped with LiDAR, GPS, and cameras, and seats eight passengers with seatbelts. Safety features include the ability to brake automatically and immediately if a pedestrian crosses into the path of the vehicle. In addition to surveying the shuttle’s riders, AAA will examine how others sharing the streets react to it - including pedestrians and cyclists. AAA chose Las Vegas for the launch because of the state’s progressive regulations on autonomous vehicles, its heavy investment in innovation, its high volume of visitors, and a sunny, dry climate that’s favorable for testing new driving technology.


How the Self-Driving Shuttle Pilot Program Works


Covering a 0.6-mile loop in the Fremont East “Innovation District” of downtown Las Vegas, the all-electric, self-driving shuttle offers free rides for people to experience autonomous transportation in a real-world environment. The shuttle is the country’s first autonomous shuttle to be fully integrated with “smart-city” infrastructure, communicating with traffic signals to improve safety and traffic flow. The shuttle is operated and maintained by Keolis, which also led the efforts to integrate its vehicle into the smart-city infrastructure, in partnership with the city of Las Vegas and NAVYA.


The shuttle can be boarded at any of its three stops, located on Fremont Street and Carson Street between Las Vegas Boulevard and 8th Street.

* * *

As we pointed out yesterday following Waymo’s big driverless-car announcement, driverless cars are regulated by a patchwork of state laws. Arizona, like many states, has no restrictions against operating an autonomous vehicle without a person in the driver’s seat. On the other hand, California, where Waymo is headquartered, requires any self-driving car to have a safety driver sitting in the front.

However, just because companies can legally test these cars doesn’t necessarily mean they’ve been optimized for safety. In December, Waymo published a report for California’s Department of Motor Vehicles on how frequently its driverless cars “disengaged” because of a system failure or safety risk, forcing a human driver to take over. In the report, Waymo said this happened once every 5,000 miles driven in 2016, compared with once every 1,250 miles in 2015. While that’s certainly an improvement, these types of incidents are hardly rare.
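For readers who want to sanity-check those figures, the two rates and the implied year-over-year improvement work out as follows (using only the numbers quoted above):

```python
# Waymo's reported disengagement rates from the California DMV report:
# one disengagement per 1,250 miles in 2015, one per 5,000 miles in 2016.
miles_per_disengagement = {2015: 1_250, 2016: 5_000}

# The same figures expressed as disengagements per 1,000 miles driven.
per_1000_miles = {year: 1_000 / miles
                  for year, miles in miles_per_disengagement.items()}
print(per_1000_miles)  # {2015: 0.8, 2016: 0.2}

# Year-over-year improvement factor.
improvement = miles_per_disengagement[2016] / miles_per_disengagement[2015]
print(improvement)  # 4.0
```

A fourfold improvement in one year, in other words - though a disengagement every 5,000 miles still means a typical car would see one every few months of normal driving.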


SofaPapa flacorps Thu, 11/09/2017 - 16:30 Permalink

As Tyler says, it didn't take long.

"Fault".  I like that.  By such a narrow definition.  In an interaction between two human drivers, I'm willing to bet this would not have happened.  Should the truck have pulled out?  No, of course not.  BUT drivers who are past their learner's permit have faced this precise situation hundreds of times.  This is what "defensive driving" means.  You are scanning.  And when you are scanning, you are expecting the other drivers to do something stupid.  Because you are expecting it, you are watching that truck driver before you even get to the place he's going to pull out.  If he is even thinking about moving out in front of you, you lean on the horn.  How did you know he was going to do it?  You saw HIM before he even started the move.  These AI algorithms are not close to that level of sophistication.

So yeah, technically, it was the truck's stupidity.  But even with his stupidity, the overwhelming majority of drivers in the place of the van in this scenario would not have made physical contact.  Because we adjust to other drivers' stupidity.

This is precisely what I said just yesterday.  But the techno-obsessed refuse to see that the reason humans have a pretty good track record in operating these machines is NOT because we always stay within some set of rules.  It's precisely because we see so many subtle cues machines can't even identify, not to mention know what to do with, that we are in a constant process of changing the rules, depending on the unique circumstances.  This cannot be programmed.

How long it will take before people force the techies to give this dream up I have no idea.  I hope it's soon, and I hope the human cost is light.

In reply to by flacorps

massbytes SofaPapa Thu, 11/09/2017 - 16:37 Permalink

Could it be worse?  Worldwide there are about 1.3 million road deaths per year, or about 3,300 per day.  This doesn't count the millions of injuries.  The driverless shuttle had stopped and the truck backed into it.  Think you could have "scanned" that, got it into reverse and moved away from the truck?  Not bloody likely.

In reply to by SofaPapa

SofaPapa massbytes Thu, 11/09/2017 - 16:58 Permalink

First, where did you see that the truck was in reverse?  I'm not trying to be obnoxious here.  I just don't see that in this article.  Is there another place where the accident is covered more fully?

Overall, we disagree, obviously.  Without knowing the specifics of the situation, it's impossible to come to a definitive conclusion, but I am not stating anything radical when I say that there is an arc of alternate possibilities in a situation like this other than just "reverse and move away".  Depending on the situation, being willing to cross the lane marker - if you know there's nobody in the other lane - might have provided the needed buffer space.  But that's not the program.  Clearly, the program is to "stop".  That's a very simplistic response.  Human drivers have a much richer universe of options available to us based on our situational awareness and our capacity for on-the-spot creativity.

Regarding the statistics, you know as well as I do that the more valuable information is how many accidents per how many miles driven (which another poster put up yesterday, though I'm not looking at them right this second).  Could it be worse?  We focus on the accidents because that is human nature.  But think about the millions of miles driven with no problems or - even more importantly - with accidents avoided, and you realize that humans avoid collisions FAR more often than we get into them.  We're remarkably good at it, actually.  Are computers really that good yet?  Based on my experience with computer technology, I very strongly doubt it.

I repeat what I said yesterday: time will tell.

In reply to by massbytes

Pool Shark SofaPapa Thu, 11/09/2017 - 18:08 Permalink

Most personal passenger cars are driven approximately 1,000 miles/month = 12,000 miles/year. (That's the reason for the standard 3-year or 36,000-mile warranties.) These automated vehicles are claimed to have accidents every 5,000 miles. That's MORE THAN TWO ACCIDENTS EVERY SINGLE YEAR!

Over the last 40 years, I have driven at least 400,000 miles, and have had only 5 accidents (none serious; all fender-benders; and 3 of them weren't my fault). Over that same period, these self-driving vehicles would have had (by their own statistics) 80 ACCIDENTS!!!

I'm with you SofaPapa; I'll trust my own driving skills over these automated things ANY day...
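The comment's arithmetic can be reproduced directly - keeping in mind that the once-per-5,000-miles figure from the article counts disengagements (a human taking over), not accidents:

```python
# The once-per-5,000-miles figure cited in the article (a disengagement
# rate, treated here as the commenter treats it: one incident per 5,000 mi).
MILES_PER_INCIDENT = 5_000

# A typical passenger car: ~1,000 miles/month, i.e. 12,000 miles/year.
annual_miles = 1_000 * 12
print(annual_miles / MILES_PER_INCIDENT)  # 2.4 -> "more than two every single year"

# The commenter's stated 40-year, 400,000-mile driving history.
lifetime_miles = 400_000
print(lifetime_miles / MILES_PER_INCIDENT)  # 80.0 -> the "80 accidents" figure
```

So the numbers follow from the stated assumptions; whether a disengagement is comparable to a fender-bender is the real point of dispute.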

In reply to by SofaPapa

any_mouse SofaPapa Thu, 11/09/2017 - 16:50 Permalink

Artificial anything is not as good as the real thing.

AI is only as good as the humans who coded it.

There are never (/sarc) situations where computer programs do something unintended. Unforeseen inputs in unforeseen combinations. At the core are 0s and 1s, AND, OR, etc. Simple basic logic. Strung together by humans into complex code.

The control element that AI carries is dangerous in and of itself. The Tech industry is rife with people who see society differently and have a desire to alter society to fit their notions. Prejudices.

Freedom of Movement. Freedom of Choice. Freedom of Association.

Replaced with Restrictions. Constraints.

In reply to by SofaPapa

JuliaS SofaPapa Thu, 11/09/2017 - 17:25 Permalink

Self-driving cars work best in a mesh network where they communicate with each other. Then they can wirelessly transmit intentions to every single vehicle around, even the ones that don't share a line of sight. If a car 10 cars ahead of you notices a hazard and decides to brake, every following car can do the same uniformly, while maintaining a gap, without becoming a rear-end train.

Automation doesn't work when you mix humans in with machines. Fully automated assembly lines are deprived of humans, and when serviced, those things typically have to be shut down completely. Otherwise the operator may end up a few limbs short. A piece from the original Robocop comes to mind where OCP wheels in the robotic hand that is about to get attached to Murphy. Bob Morton shakes the hand and nearly gets his crushed by the machine.

That is a metaphor for machine-human interaction. With the machine, the protocol is all there is. For the human, the default action is inaction. If the human is uncertain, he stops. Biological forms are wrapped in defense mechanisms, all ensuring survival. The default state for a machine is repetition of the last instruction. A machine will keep doing what it was told to do last, until conditions change. That is, until "a change" is detected and interpreted as such. If the machine is unaware of harm it is causing, it is going to continue to do harm. The machine will not detect an upset human. It will not decide to investigate the extent of that human's frustration - his likeliness to act spontaneously. The machine will never be able to understand a human at a subconscious level.

That being said, many such observations illustrate how inferior a human driver is to a machine actually. We are trained to deal with problems we ourselves are generating. A machine that causes no trouble will have no problem cooperating with other machines as long as the human factor is kept out of the equation. There are plenty of autonomous systems governing railroad and air traffic. As long as every other object in the system is accounted for, navigation is easy.

Knowing how governments and the legal apparatus function, these setbacks will not deter automation one bit. In fact, they'll expedite the change Musk joked about years ago, where instead of making autopilot cars illegal, the knee-jerk reaction will be to begin banning human drivers. Much like with one-way streets or HOV lanes, we'll start seeing autonomous-only path reservations, and before you know it, we'll be driving those mesh-networked smart cars, which will not be anywhere near as intelligent as a human, but it won't matter anyway, because they'll exist entirely within their own controlled ecosystem. Once human drivers are prohibited from operating the steering, the rest will be easy. Instead of adjusting to the environment, humans will do the human thing - bend the environment instead.
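The braking advantage of a mesh network can be sketched with a toy model: human drivers each react to the brake lights of the car directly ahead, so delay accumulates down the column, while networked cars all receive one broadcast. The reaction times below are illustrative assumptions, not measurements.

```python
# Toy comparison: how long until the LAST car in a 10-car column begins
# braking after the lead car detects a hazard.
HUMAN_REACTION_S = 1.5    # assumed perception-reaction time per driver
NETWORK_LATENCY_S = 0.05  # assumed one-hop broadcast latency
N_CARS = 10

# Human chain: each car starts braking only after the car ahead does,
# so the delay compounds link by link.
human_delay = (N_CARS - 1) * HUMAN_REACTION_S
print(human_delay)  # 13.5 seconds before car 10 even touches the brake

# Mesh network: every car hears the same broadcast essentially at once.
networked_delay = NETWORK_LATENCY_S
print(networked_delay)  # 0.05 seconds
```

The model ignores braking distance itself; it only shows how signal propagation, the thing the comment is about, differs between the two regimes.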

In reply to by SofaPapa

JuliaS Pool Shark Thu, 11/09/2017 - 18:53 Permalink

Exact opposite. Watch any of the hundreds of videos on YouTube of multi-car pileups or black ice on highways, where you see one car after another slide off into the very same ditch, even while the rest of the vehicles are in plain sight. The next person always thinks he's smarter than the rest, ending up just like them. An autonomous car would be programmed to avoid repeating patterns resulting in error. If the first car in a column executes a maneuver that results in triggering its collision accelerometer, then every subsequent vehicle preparing to execute the procedure will halt.

Additionally, bad driving habits of autonomous vehicles are easy to program around. One mistake made by one vehicle is fixed so that no other car anywhere ever repeats the error. Try doing that with a human driver. Imagine someone getting drunk, getting behind the wheel, killing a person, and then that "lesson" resulting in no other driver ever getting drunk and hitting anyone. With human drivers, lessons are non-cumulative. Each person goes through their own learning process. Self-driving cars will all be learning as one entity. Every software patch or hardware specification arising from experience will result in a permanent fix for every single car on the planet.

We're not there yet. We don't have data standardization yet, but when it goes fully mainstream, the type of operation will be as typical as the look of a standard automobile. All brands and kinds out there, yet most have 4 wheels, 4 doors, 2 headlights, 1 exhaust pipe, etc. When standardization reaches autonomous driving, many problems will go away. People will complain about self-driving cars, but then they'll simply forget about past problems and accept it. Like this conversation that you won't recall in the next 10 minutes. You'll get in the passenger seat of your auto-thing and won't even recall the last day you saw a person behind the wheel. Like the cell phone addiction - it'll come out of nowhere, it'll sweep the globe, and it'll appear as if it has always been that way.

Profits will ensure the transition happens regardless of the state of the technology. "If human this, if human that..." doesn't actually matter, as we aren't the ones calling the shots. The carmakers and lawmakers are the ones making the transition happen. The corporations employing thousands of drivers and trying to increase profits by slashing salaries - they're the ones putting self-driving cars on the road. They won't stop until there are no human drivers left, because to them humans are a liability - not because of how they drive, but because of how much they're getting paid.

This is one of those revolutions that is done for someone else's benefit, but presented to the general public as something they themselves wanted. No arguing that there are people who will indeed benefit - those with no driver's license. Too young or too old to operate a car. Maybe someone's injured. Maybe someone who used to drive got crippled in a crash. Maybe someone wants to have a few extra drinks on Friday and not worry about getting home. Maybe someone wants to blow up a building... not wondering if they wired the suicide vest correctly, and would rather have the Johnny Cab do it for them. Like with any disruptive tech, it'll create its own set of winners and losers, victims and abusers. All we can do now is watch. The transition will happen whether we like it or not. If there's money to be made, sure as hell, it'll happen.
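The "one mistake, fixed fleet-wide" claim amounts to a shared policy store that every vehicle consults before acting. A purely schematic sketch, with made-up situation names and actions:

```python
# Schematic of fleet-wide learning: a correction learned from a single
# incident is written once to a shared policy, and every vehicle that
# consults the policy benefits immediately. All names are illustrative.
shared_policy: dict[str, str] = {}  # situation signature -> corrected action

def report_incident(situation: str, corrected_action: str) -> None:
    """One vehicle's mistake becomes a permanent fleet-wide rule."""
    shared_policy[situation] = corrected_action

def decide(situation: str, default_action: str) -> str:
    """Every vehicle checks the shared policy before falling back
    to its default behavior."""
    return shared_policy.get(situation, default_action)

# Car A slides on a black-ice offramp; the fix is uploaded once.
report_incident("black_ice_offramp", "slow_to_20")

# Car B, anywhere on the planet, never repeats the mistake.
print(decide("black_ice_offramp", "maintain_speed"))  # slow_to_20
```

Humans, as the comment says, have no equivalent of `report_incident` that writes to anyone else's head; each driver learns only from their own close calls.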

In reply to by Pool Shark

ElTerco JuliaS Thu, 11/09/2017 - 18:42 Permalink

"That being said, many such observations illustrate how inferior a human driver is to a machine actually."

This statement is utterly ridiculous. You are saying that thought and judgement are inferior to a system that can only follow narrow prescribed rules.

Rules are fine in a closed system, but in the real world, they are not sufficient.

In reply to by JuliaS

JuliaS ElTerco Thu, 11/09/2017 - 19:11 Permalink

"In the real world". Here we go again! So, trains that run themselves, planes that pilot themselves, Amazon warehouse drones that stack shelves - they're not part of the "real world"? Do different rules apply, or does "real world" mean "my view that doesn't match yours"? Does the presence of rules violate so-called reality?

Am I saying that human judgement is inferior? Yes, I am! A human driver deals with issues produced by other human drivers, which is an analog art never perfected. That's why the concept of a "high collision intersection" even exists. Have you seen the signs? They might as well read: "Humans are stupid and will likely do the thing we're telling them not to do, despite us telling them". That superior human mind!

This type of conversation typically escalates to more and more intricate situations people throw at me along the lines of: "Can a self-driving car do that?" and at some point I respond with: "... can you?"

I was a guidance system developer for early auto-driving cars many years ago. I participated in the DARPA challenge regularly, back in the days when you had to get a minivan just to have enough space for the electronic guts it took to operate a vehicle. We've come a long way and... guess what. Unlike a human driver, who is unlikely to ever evolve beyond the current biological capacity, a self-driving car can progress as far as it has to.

How many eyes and ears does a human have? How many brains? How many arms and legs? You know the answer. How many lidar domes, radars, sonars, accelerometers, gyros, GPS radios or lines of code can a self-driving car have? The answer is: as many as you choose to put in.

There is no limit to how good a self-driving car can and will be. This is what it can do today. This is what it will be able to do tomorrow. This is what it will do 2 days from now. That's how it should be interpreted. A self-driving car can become infinitely better, unlike a human operator who is already as good as he or she will ever be.

Any "situation" a self-driving car finds itself in will become a code template. Sure, there will be plenty of situations, crashes, accidents and growing pains. What will make a huge difference will be the frequency of repeat offenses. A high-crash intersection will remain a high-crash intersection forever when human drivers are involved, unless the rules are changed (the non-real-world thing you're referring to). In an autonomous world, a "high crash" intersection stops being such after the very first crash. To me that is superiority.

In reply to by ElTerco

JuliaS ElTerco Thu, 11/09/2017 - 22:51 Permalink

Roads are controlled environments. That's why we have rules and road signs. That's why we have standard vehicle features. That's why we have rear-view mirrors. That's why we drive on the designated side of the road. Those are all features designed to deal with the fact that although humans could theoretically improvise their way around problems, they can't easily deal with an abundance of inputs. We can't pay attention to everything going on, and crashes 9 times out of 10 happen due to lack of attention or failure to communicate intent, such as not using turn signals in a timely fashion, or driving aggressively.

The 2 biggest causes of car crashes do not apply to machines, as a machine has (like I said) no limit to the number of sensory inputs it can juggle. A car can't be sleep-deprived. It can see infrared, ultraviolet, lidar. It can process ultrasonic bounces. It can detect magnetic fields - it can have any sensor imaginable to see and anticipate things that a human will never be able to witness.

"You don't see children running in warehouses," you say in response to my comment. Well, they shouldn't be running through warehouses, much like they shouldn't be playing in traffic. That applies to dogs as well. If and when they do get out into the road and do get hit/killed, where is your so-called driver superiority? Why are you confident that a machine won't be able to do what a human does? What makes you think a machine cannot do anything and everything a human does? What makes an infinitely expandable system inferior to one that hasn't had a real "processor upgrade" since the time it crawled out of the ocean and learned to walk upright?

In reply to by ElTerco

ElTerco JuliaS Thu, 11/09/2017 - 23:38 Permalink

Because a human being knows that a small object on the side of the road that is not moving is a small child, and not a fire hydrant.

Furthermore, we have human context that can predict what that small child is likely to do.

Finally, we can predict further than 0.5 seconds into the future what sort of human actions are likely to occur in the road up ahead. For instance, cross traffic, like that truck that pulled out as described in this article.

In reply to by JuliaS

JuliaS ElTerco Thu, 11/09/2017 - 20:08 Permalink

I didn't get my education in this country, so that comment doesn't apply, though I do agree with the rest of the statement. I'm not an idealist. I say that accidents will happen on the way there, but we'll get there anyway, because the force behind the transition is bigger than any of our personal opinions.

Autonomous cars will make mistakes. What will make them different from "superior" humans is that every mistake will only be made once, never to be repeated again. As humans, we learn individually. Autonomous tech is "one entity". It is one driver that learns through all the cars on the road and through all of the data accumulated since inception. A human accident half the planet away doesn't make another human smarter. An autonomous car can learn from any other car instantly.

Just the ability to communicate intent alone is worth everything! Why do we have turn signals? Because we can't wirelessly transmit our intention to turn to other drivers. A car can. It can broadcast its intention to turn to every other car on the planet if it wants. We need rear-view mirrors. A self-driving car can watch everything at the same time, in all spectral forms, at any temporal resolution you want. A car that sees an obstacle can transmit the exact definition of the obstacle to the car directly behind.

Let's take your beloved dog, and the close call I've had a few times on the road myself, when an animal would jump in front of me unexpectedly, or when I was following a car which would suddenly sway out of the lane to reveal a rapidly approaching pothole... I could've had bad accidents, but I was lucky to have been paying attention in each case, with just enough time to react. I've witnessed many cases when people in similar situations did not react and did hit either cars or living things. A self-driving car which observed an obstacle would be able to transmit its exact position and velocity down the chain, so that every following car would sway out of the lane in unison. That is how it will be done. That is how it is done in truck fleets that are being tested right now for autonomous driving. They call it "caravan mode". The foundation of autonomous driving ethics is being laid.

We're still years away from full standardization. Big automakers (Mercedes, BMW, Toyota, etc.) have to get on board first and agree on point cloud formats, ping intervals and other technical nuisances. Once self-driving cars hit roads en masse (and they will), major problems will be weeded out within a few short years. It is much easier to write code than to educate people, and driving doesn't take mental sophistication. For proof, all you have to do is look at the kind of people the trucking industry employs. Let's just say those "superior" humans aren't exactly PhD material. Of all the human occupations, this one will be the easiest to automate away.
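The obstacle-broadcast idea amounts to a small shared message format plus a rule every follower applies identically. A hypothetical sketch - the field names and the `plan_evasion` rule are illustrative, not any real V2V message standard:

```python
# Sketch of the kind of obstacle report a lead vehicle in a "caravan"
# might broadcast down the chain. Fields and logic are hypothetical.
from dataclasses import dataclass

@dataclass
class ObstacleReport:
    lane: int            # lane index the obstacle occupies
    position_m: float    # distance along the road from a shared reference
    velocity_mps: float  # obstacle velocity (0.0 for a pothole)
    kind: str            # e.g. "pothole", "animal", "debris"

def plan_evasion(report: ObstacleReport, my_lane: int) -> str:
    """Every following vehicle applies the same rule to the same report,
    so the whole column reacts in unison rather than one-by-one."""
    if report.lane != my_lane:
        return "hold"        # obstacle is not in my lane
    return "change_lane"     # sway out, as the comment describes

report = ObstacleReport(lane=1, position_m=320.0, velocity_mps=0.0, kind="pothole")
print([plan_evasion(report, lane) for lane in (0, 1, 2)])
# ['hold', 'change_lane', 'hold']
```

The point of the sketch is the determinism: unlike a human convoy, no follower has to see the pothole to react to it.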

In reply to by ElTerco

SofaPapa JuliaS Thu, 11/09/2017 - 21:35 Permalink

Okay, so I've read about half the material in your comments, and I see where you are coming from.  I also have thought that this would be the only way a system such as this could even potentially work.  You would have to have a 100% automated fleet.  So let's say that's where they're going with this.

The trouble with that is precisely the same effect that has been discussed ad nauseam on this board regarding centralized trading markets.  Yes, when everything works, it all works.  When something breaks, however, the opposite is also true.  In such a linked system, an error will propagate instantaneously, and when that happens?  I don't even want to think.  We're talking hundreds of thousands of people in their cars at a time.  Even a percentage of that can become one hell of a nightmare really fast.  Every one of us has experience with computers.  We all know they do some really weird shit sometimes, even ignoring what so many have pointed out regarding hacking.  I want something like that to happen to the entire fleet of cars?!  WAY too much concentrated risk for me, thanks.  I like my risk distributed.  It gives me (the individual, not the member of the collective) much better control over my safety.

Keep this automated nirvana for someone else.  I'll still trust individual interaction and play the odds of my share of non-propagated mistakes rather than the odds of a single catastrophic mistake in some remote control center that I trust as much as I trust our loving Silicon Valley friends at FB, SNAP, Google, etc.  This kind of "engineering world" is not only intellectually sterile and represents the living hell of boredom with no individual agency; it also carries the risk of killing many in a single swipe.  I don't see a place for what I consider "humanity" in a world like that.  I'm out.

In reply to by JuliaS

JuliaS SofaPapa Thu, 11/09/2017 - 22:54 Permalink

I agree with everything you say, and I share all the same concerns. This is no overnight transition, simply because we cannot physically switch to a different transportation type all at once, and I don't foresee an unconditional ban on conventional vehicles. What I do see is a gradual conversion, with the easy-to-automate fleets with the greatest cost-saving potential (commercial highway traffic) being replaced first and consumers being reached last. Insurance and the legal structure will determine how fast people jump ship.

At the end of the day, this is not a consumer-driven change. Businesses want this more than we do, and they are the ones making it happen. That means the big-money interests will be willing to overlook or loophole their way through the transition phase, when most problems will emerge and most accidents will happen, but well within our lifetimes they'll get to their end goal - where no one owns a car and everybody rents the right to ride in privacy.

To an average consumer today, a bus is a self-driving car. You get in, it takes you places. You don't step on pedals or turn the wheel to do it. Psychologically, we're fully adjusted to automation already. Centralization of transportation bringing a systemic failure risk? Sure! That's inseparable, as with centralized power, water, gas, internet, telephone, etc. Services still get centralized regardless, due to savings.

Self-driving cars, no doubt, will be easier and cheaper to have on demand than a car you own. If offered with insurance rates to match, people will go for it with no hesitation. People are already trusting their lives to this barely-even-ready tech. Now imagine what the conversion ratio will be when the tech actually gets good, even marginally better.

People are far, far from adequate when it comes to driving. Our brains aren't built for speeds exceeding our natural rate of motion. The only reason we navigate without going crazy is precisely because the road is a controlled and simplified environment. The reason we have traffic lights is that, unlike a computer, we can't pre-calculate trajectories to figure out whether we're going to pass or hit another car going perpendicular. The reason we can drive with other cars close by is that our relative velocity remains within our comfort zone, and cars traveling in the opposite lane don't cross into ours. We may be doing 60+, but the mind treats it like we're walking. Everything about driving illustrates our ineptitude, which is very easy to surpass through mechanized methods. Cutting through the red tape is more difficult than coding a navigation template.

In reply to by SofaPapa

ElTerco JuliaS Fri, 11/10/2017 - 02:53 Permalink

Self driving cars will make mistakes that a human would *never* make.

Why am I sure of this? Because the computational power required to emulate human reason - dozens of the best pre-market GPU cards running in parallel - potentially takes seconds to solve a problem that a human would instinctually "know" the answer to in a split second, in a complex environment.

Don't tell me I'm wrong, because it was my job before I retired to evaluate algorithms vs computational hardware capabilities. I won't deal with your bullshit.

If what you are saying is true, we would have no need for fighter pilots during war. We could let the planes fly themselves, dogfight optimally, and bomb their targets with the lowest possible collateral damage.

In reply to by JuliaS

JuliaS ElTerco Fri, 11/10/2017 - 14:24 Permalink

The reason we have fighter pilots is latency, sensory feedback and cost. Glad you bring that up, because nowadays we use drones for things we used to have pilots for. Precisely the transition to autonomy that we're contemplating. It is happening, whether you like it or not. The question is only how quickly the transition happens in the civilian market. The military is a full generation ahead of the game. The conversion is happening rapidly - as quickly as the adoption of the internal combustion engine when it was invented.

You developed algos? Neat! So did I (and still do)! I developed lidar signal-processing utilities and data visualization systems for sensors in general. I designed electric cars and later entered engineering contests, prototyping self-driving vehicles under the DARPA initiative in the late 90's, back when all the processing was done on P3's and SPARC's. The concept of a GPU didn't even exist, and we had cars go through closed-track obstacle courses like nobody's business with less than 1/10000 of today's processing power.

The processing speed you cite is a non-issue. A GPU takes seconds to come up with results? What are you smoking? Most systems on the road today update at least 240 times a second. ABS brakes, for crying out loud, update 40+ times a second. There is no question that the reaction time and "attention" of an automated system already surpass human limits. All that's left is the software. That is all there is. Just the code. The problems aren't abstract - they're very specific and quantifiable, as is the amount of time it'll take to finalize solutions. Standardization is the next step, following the brief automation arms race in which automakers aren't yet cooperating with each other and use proprietary code.

We already have self-driving things. The only thing we're discussing is what gets automated next and how quickly. Not if. When.
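Those update rates can be put in distance terms. A rough illustration, using an assumed highway speed of about 70 mph and a roughly 1.5-second human perception-reaction time (both round-number assumptions, not measurements):

```python
# How far a vehicle travels between control "decisions" at highway speed.
HIGHWAY_SPEED_MPS = 31.3  # ~70 mph expressed in meters per second

for label, hz in [("240 Hz control loop", 240), ("40 Hz ABS loop", 40)]:
    # Distance covered between consecutive updates at this rate.
    print(label, round(HIGHWAY_SPEED_MPS / hz, 3), "m per update")
    # -> ~0.13 m and ~0.78 m respectively

# A ~1.5 s human perception-reaction time at the same speed:
print("human reaction:", round(HIGHWAY_SPEED_MPS * 1.5, 2), "m")  # ~47 m
```

In other words, the machine re-evaluates every fraction of a meter, while a human covers most of half a football field before beginning to react. That said, this compares raw reaction latency only, not the quality of the decision - which is the other side of the dispute in this thread.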

In reply to by ElTerco

ElTerco JuliaS Fri, 11/10/2017 - 17:32 Permalink

"Most systems on the road today update at least 240 times a second. ABS brakes, for crying out loud, update 40+ times a second."

This quote addresses the reaction based stuff, not the deep learning reasoning capability.

The reasoning capability would address things like I mentioned above such as an immobile small child on the side of the road.

In reply to by JuliaS

JuliaS ElTerco Fri, 11/10/2017 - 18:07 Permalink

Let's put it simply - anything a human can do, a machine can do, and once the machine surpasses human skill, the progression doesn't stop there.

Driving is a simple task, not a complex one. It is largely guided by reflexes, with little reasoning. When people describe a hypothetical situation where the car has to choose between hitting kid A or kid B, what about the human driver? They say that a machine cannot be trusted with moral decisions. Can a human be trusted? When he hits one of two kids, is he automatically right, while the machine is wrong? Of course not! He's throwing dice. Sometimes he makes a small error; sometimes he makes a greater one and goes to jail. There is no such thing as a superior human mind, especially when it comes to drivers. That is, I'd argue, the lowest form of reasoning and the lowest form of skill, which is precisely why we have so many people on the road and daily accident counts measuring in the thousands. Not only do people repeat the mistakes of others - they repeat their own, until the license gets taken away.

People avoid close calls on the road not because they intelligently weigh every possible outcome (as machines actually do), but via knee-jerking. There is no deep reasoning. You are describing things that don't exist. People are shitty drivers. The mind isn't built for highway speeds, and it's not built for multitasking at such velocities, which is, for the third time, why the roads are controlled environments with strict conditions that drivers violate periodically, forcing other drivers to anticipate and adjust. That's proof of inferiority, not superiority.

A machine can and will do a better job, because there is nothing preventing it from doing absolutely everything a human does, and then some. There is no "sense" that cannot be replicated. No instruction that cannot be put into code. There is nothing - absolutely nothing - that cannot be automated away.

That's the scary part - not the fact that cars are going to start driving themselves tomorrow, but the idea that it's not going to stop there. We are slowly being made obsolete, and automation is already here. Vehicles already fly, haul and dive themselves. More automation is expected tomorrow, and the day after that, and the day after that. Self-driving cars are already hitting the roads, and there will be more of them - not fewer - every day, every week, every month, every year. Instead of redundantly discussing the inevitable, the best we can do is contemplate where we will position ourselves personally - what the transition will do to the way we live and work.

In reply to by ElTerco

Deres taketheredpill Fri, 11/10/2017 - 04:53 Permalink

This is the same issue as in one of the accidents involving a Google car. Drivers of big vehicles consider themselves to have priority because they intimidate small vehicles, and they are accustomed to other, intimidated road users letting them pass most of the time if they push in a little. ... In the Google car's case, it was a bus whose driver assumed the Google car would not have the guts to change lanes ahead of him.

In reply to by taketheredpill

Endgame Napoleon Bill of Rights Thu, 11/09/2017 - 16:48 Permalink

Wonder what the underwriting code is. Wonder what the unlicensed mommas - working in a call center when not engaging in excused, back-watching absenteeism - who sell the policy to the robot driver list as his/her/its profession. Is it a safe profession, fetching a discount for the robot driver? .... Is the robot driver married and middle-aged, with a lower rate?

In reply to by Bill of Rights

Mind the GAAP Thu, 11/09/2017 - 16:06 Permalink

I think I'll load up on Waymo short positions, buy a $500 Buick beater, and start driving around Vegas looking to smash into those ugly-ass vans. Should be able to clean up handsomely after the ensuing PR carnage.