A manager at Uber warned bosses at the company of the risks associated with its self-driving cars just days before a cyclist was killed by one of the autonomous vehicles it was testing as she crossed a road in Arizona.
Elaine Herzberg, aged 49, was wheeling her bike across a street in Tempe on the evening of 18 March when the vehicle struck her, causing fatal injuries. The back-up driver in the Uber vehicle, Rafaela Vasquez, was reported to have been watching an episode of The Voice on her phone.
According to The Information, an Uber manager involved in its autonomous vehicle unit, which is developing so-called ‘robotaxis’, warned the company’s senior executives in an email of risks inherent in the programme.
Those included faults in the software driving the Volvo SUV used in the trial, as well as criticism of the human back-up drivers, including the inadequate training they were given and the lack of focus some had on their jobs.
The manager, Robbie Miller, who left Uber shortly afterwards, outlined a number of specific incidents in his email and said: “The cars are routinely in accidents resulting in damage.
“This is usually the result of poor behavior of the operator or the AV technology. A car was damaged nearly every other day in February. We shouldn’t be hitting things every 15,000 miles. Repeated infractions for poor driving rarely results in termination. Several of the drivers appear to not have been properly vetted or trained.”
Prior to joining Uber, Miller had worked for Google’s self-driving car operation, now known as Waymo, which just last week introduced the world’s first robotaxi service in – coincidentally – Arizona. Uber itself shelved plans to roll out a similar service in the state after Ms Herzberg’s death.
He claimed in the email that the response to concerns over safety would have been very different at his previous employer, writing: “At Waymo I would not have been surprised if the entire fleet was immediately grounded for weeks or longer if a vehicle exhibited the same behavior.”
The Information said that a number of past and present Uber employees had vouched for the accuracy of what Miller said in his email, and the publication has previously speculated that improper tuning of safety software may have contributed to the crash.
In a statement, Uber – which has since suspended on-road testing of autonomous vehicles and has shifted its testing from Arizona to Pennsylvania – told The Information: “Right now the entire team is focused on safely and responsibly returning to the road in self-driving mode. We have every confidence in the work that the team is doing to get us there.
“Our team remains committed to implementing key safety improvements, and we intend to resume on-the-road self-driving testing only when these improvements have been implemented and we have received authorization from the Pennsylvania Department of Transportation.”
While Miller did not receive a direct response to his email before leaving Uber, he was reportedly told that the situation would be reviewed and, according to The Information, some of the issues he raised were incorporated in a company review compiled after the fatal crash.
In 2016, we reported how Uber was aware of a flaw in the way its autonomous vehicles crossed bike lanes prior to a trial being launched in San Francisco.
The company said that it had instructed back-up drivers to resume control when they approached intersections with bike lanes.
But San Francisco Bicycle Coalition spokesman Chris Cassidy said: “The fact that they know there’s a dangerous flaw in the technology and persisted in a surprise launch shows a reckless disregard for the safety of people in our streets.”
13 comments
...and Uber are resuming testing on public roads again: https://arstechnica.com/cars/2018/12/uber-resumes-testing-self-driving-c...
So the 'backup driver' (as Uber appear to call the human driver) gets bored. Who wouldn't if all they get to do is watch a car drive itself all day? Even if the human did not resort to watching The Voice to relieve the tedium (I didn't think The Voice would help with that, but hey ho), how is a person bored silly supposed to be able to regain focus and react quickly in an emergency situation in any case? Wouldn't it be better if the human did the driving and the computer just watched? Then if the human has a heart attack, seizure or sneezing fit, it could take over. Computers don't get bored and can react very quickly. Win win. But that would be sensible, wouldn't it? Instead, we're going to get computer-driven cars whether we want them or not.
With any luck, the financial cost to Uber will be so huge that they have to radically reappraise their approach. The US legal system for dealing with corporate responsibility for death seems to result in absolutely gigantic compensation and punishment costs, in the hundreds of millions of dollars.
Hopefully, the incompetent managers who created this failing system will also be prosecuted and punished.
Uber settled the civil side very promptly
https://www.reuters.com/article/us-autos-selfdriving-uber-settlement/ube...
(includes pics of the damage to the car)
As to Uber, corporate responsibility and the possibility of criminal charges: pretty sure Uber senior management understand there is a whole lot of difference between breaking some municipal laws and being corporately responsible under criminal law for a death, and no doubt many lawyer-hours have been sweated over it.
'Disruptive' redefines what laws you choose to obey... if this was non-automotive I'd bet on corporate manslaughter, but not in this case.
And cheers to TedBaker for good input (edit: AKA TedBarnes).
6. Person should also not walk across the road in front of cars? The outcome would probably have been the same with a human driver if the other human had done something daft like wandering into the road when it wasn't safe to do so.
Without knowing any of the details, you immediately assign blame to the pedestrian. Is totally unfounded assumption your specialist subject?
As far as I can tell, the footage from within the car at the time showed the 'driver' dither/panic over taking control.
Looks to me like the added complication of deciding whether to wrest back control from a machine and THEN emergency stop made the driver just completely fuck it up.
And having a machine drive you in the first place surely lulls you into a false sense of security more than driving yourself.
'Automation' with human back-up: worst of both worlds.
The car "saw" the pedestrian, i.e. its sensors did detect her. It was the software that didn't know what to do with the information.
It was a well-lit junction. The initial video put out by Uber was very dark, but several people drove the same road at the same time in the following days, and the lighting levels were much better. To the extent that, IMHO, the video was basically a deliberate lie aimed at suggesting that she appeared in the road out of nowhere.
Except that she hadn't only just stepped out in front of the car. She was in the road for a long time before it hit her. It's a very wide junction - from memory, it's 4 lanes wide at the point where she crossed. She had crossed the first 2 lanes, and it hit her on the passenger side, i.e. she had crossed most of the width of the car too, and hence had walked across nearly 3 lanes before the car hit her.
Uber had chosen to deactivate the Volvo's inbuilt emergency stop feature. That feature alone would probably have prevented this death. Uber did this as they felt the Volvo feature "interfered" with their software.
Uber chose to run cars with only 1 safety driver, who was expected to both monitor the car's software and take over in an emergency.
Uber apparently decided in favour of smooth rides over safety - earlier versions of the software were quite jumpy and frequently applied the brakes. Uber chose to basically dial down the safety limits/cut-offs, so when it detected an object but wasn't sure whether it was real or a false positive, it just ploughed on regardless (I've put a rough sketch of that trade-off at the end of this comment).
Notwithstanding all those decisions, Uber also decided that, even if the software was unsure or confused about an object in the road, there didn't need to be any sort of emergency alarm or other notification to the safety driver saying they needed to take over. (Depressingly, I'm not making this up)
Finally, it would appear that Uber took no action whatsoever to ensure that the safety drivers actually paid attention. That's presumably the case, otherwise it would be a huge coincidence that when the software made a huge error, the safety driver just happened to be watching tv on her phone. (NB - I understand others like Google require safety drivers to store mobiles in a locker and simply don't allow them in the car)
So basically, yes, she was crossing the road with an oncoming vehicle. However, it would have taken minimal action by a driver to alter course, slow down marginally, etc...
Plus, I am a firm believer in not assuming the death penalty is ok for people who make errors of judgment.
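To make that trade-off concrete, here's a purely hypothetical sketch in Python - invented names and numbers, nothing to do with Uber's actual software - of how a single detection-confidence threshold decides whether an ambiguous object triggers emergency braking:

# Hypothetical illustration only - made-up names and numbers, not Uber's code.
# It shows how one confidence threshold trades phantom braking against
# ignoring a real hazard the perception system isn't sure about.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what the perception stack thinks it saw
    confidence: float   # how sure it is, from 0.0 to 1.0

def should_emergency_brake(detection: Detection, threshold: float) -> bool:
    # Brake only if the system is at least `threshold` sure the object is real.
    return detection.confidence >= threshold

# The same ambiguous detection, judged under two different tunings.
ambiguous_object = Detection(label="unknown object", confidence=0.55)

for threshold in (0.4, 0.8):    # cautious tuning vs "smooth ride" tuning
    print(f"threshold={threshold}: brake={should_emergency_brake(ambiguous_object, threshold)}")

# threshold=0.4: brake=True   (more false alarms on plastic bags and shadows)
# threshold=0.8: brake=False  (the car carries on past something it can't classify)

The point is simply that the threshold is a tuning choice, not a law of nature: set it high enough to kill the phantom braking and you have also decided what happens when the 'unknown object' turns out to be a person.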
You've got recent form with your victim blaming; the pregnant woman who was crushed was your latest.
A video taken subsequently by another person who uses that road and drives a similar (maybe the same) Volvo proves that the lighting of the original was either purposely dulled or totally inaccurate. Also, the braking by the car took 5 full seconds after it had recognised there was something in the road. Go drive at 40mph with your eyes closed for 5 seconds (that's roughly 90 metres at that speed) and get back to me, champ!
A human driving to the conditions and obeying the law would still have been able to see her and react in time to stop, or at the very least slow and steer around her. Neither of which happened, and it was also a designated crossing point from a cycle lane!
You need to sort your shit out!
Wasn't that what the police specialist in the Mick Mason/Gail Purcell case said was a reasonable reaction time, IIRC...? Seems maybe it isn't.
Did you pay any attention to this incident when it was first reported? Nothing you say here has much basis. It actually seems a bit like power-worship - the instinctive siding with the more powerful party in any conflict.
People often 'walk across the road in front of cars', it's called "crossing the road". If someone misjudges it, drivers, human or otherwise, should be expected to slow down or just steer around them.
The point about Uber reducing the sensitivity of the collision-detection in order to avoid constant false-positives (like a plastic bag blown across the road) just reinforces my doubts as to whether autonomous vehicles will ever really be ready for the real world.
It seems possible that when it comes to that parameter, it might turn out that no 'sweet spot' exists between 'too inconvenient for the car user' and 'dangerous for anyone outside the vehicle'. Given the power of the corporations involved I'm apprehensive as to how this conflict might be resolved.
Also it yet again emphasises the huge dangers of anything short of full automation. Humans aren't good at 'monitoring' a machine, because it's a very boring activity.
That's why, as someone with little interest in cookery, I burn stuff when I'm using the oven (really must get round to reading the instructions for the timer - turns out that relying on the smoke-detector isn't a great alternative).
I don't think any conclusions can be drawn from how Uber was dealing with this. It's a long story, but a guy called Levandowski was a (the?) head engineer at Google's self-driving team, now a separate branch/company called Waymo. Levandowski (allegedly...) basically stole thousands of Google docs on the self-driving project and left to start up his own company (Otto). Almost immediately, Otto was bought by Uber for something like $650m. There was evidence Levandowski was in communication with Uber execs before he left Google. There are reports Levandowski was frustrated with Google's approach - he thought Google/Waymo was being too cautious with its development and testing.
It seems that he shook off any residual cautiousness when he was working at Otto/Uber. While I think Uber had fired him by the date of this fatality, it seems he had already established a poor safety culture.
None of which says that you're wrong, and Google/Waymo has been at this a long time. But I like to think, perhaps naively, that they are being careful rather than it being an unsolvable problem.
One of the reports that gives me some confidence in the google approach is that apparently very early in their programme, they tested people's level of attention when monitoring a vehicle that was doing most of the driving, but where the human had to be able to take control in an emergency. Apparently the results were so poor they decided that realistically, only full automation would be safe.
Still worrying that we're basically leaving these companies to it though, with very little real oversight...
1. Systemic failures
2. Slap on the wrist, don't do it again!
3. Oh, you did it again? Stern finger wag this time!
4. Complete lack of a safety culture at Uber
5. An episode of a reality TV contest is more important than a person's life
Why am I not surprised?