Uber Tragedy Shows Autonomous Cars Still Have Far To Go
The most significant thing to have happened in the automotive world this week is, without question, the death of a woman at the bumper of an Uber prototype autonomous car; the first such fatality ever recorded. With that, 49-year-old Elaine Herzberg takes her tragic place in history alongside Irish scientist Mary Ward.
Who’s Mary Ward? We’d be willing to bet that 99.9 per cent of people, CTzens included, would have no idea until they Googled her. She was the first person to be killed by an automobile, full stop. On 31 August 1869, near Parsonstown, as the town of Birr was known then, she was riding in a steam-powered car built by her cousins when she was thrown from her seat, fell in front of one of the wheels and was killed almost instantly after sustaining severe head trauma and a broken neck.
Today, as I’ve just outlined, to most people Mary Ward is nobody. She’s not even as famous as the Z-listers TV researchers keep digging up for every new series of I’m a Big Brother Dancing on Celebrity Strictly Bake Off, and yet she holds a unique place in car history.
Elaine Herzberg, too, will soon be forgotten. This poor woman, who was reportedly homeless at the time of her death, will be lost in the commercial tides pushing autonomous cars ever closer to reality. Another Mary Ward, collateral damage in the turbulence of progress.
According to a San Francisco Chronicle report, footage taken from the Uber autonomous car showed that Ms Herzberg pushed a bicycle laden with plastic shopping bags out into the road in front of the car. We’ve since discovered that the car simply didn’t see the obvious obstacle.
Arguably the most wonderful thing about the human brain is its capacity to deal with infinite variables. A focused human brain can analyse a driving situation in ways a computer simply can’t, and probably never will be able to. Despite the darkness, the Uber car (and the human backup driver, who was sadly distracted) absolutely should have seen Herzberg crossing the road, if the technology worked. But it didn’t, so we can only assume it doesn’t.
If a self-driving machine fails to see an impending accident, people will die. If we can’t be completely sure the systems are foolproof, how can we ever trust them? Would you trust a mechanical, automated nanny with your baby son or daughter if you knew it might occasionally, if accidentally, try to kill them?
I’m not suggesting humans are safer. Far from it. The science that predicts a massive drop in road traffic deaths when autonomy becomes normal is no doubt spot-on. What I’m saying is that autonomous technology isn’t good enough, yet. It’s nowhere near. At the moment all it takes to upset the whole system is a pothole, rays of sun at the wrong angle, dirt on a sensor or darkness. Prototypes testing on public roads have suffered all manner of inexplicable faults, like slamming on the brakes at a green light. That in itself could cause a huge accident – and the machine would be to blame.
Just as we accept the risk posed to us by human drivers getting it wrong, we have to accept the risk of imperfectly-programmed machines getting it wrong, too. That said, I will always prefer the task of anticipating what a fellow human might do, as opposed to what a machine will do – or not do – when its software is momentarily compromised.
Maybe a fully-focused human driver could have anticipated Elaine Herzberg’s movements, or maybe not. We may never know for sure. It’s clear that there’s a lot more work needed before all the kinks are ironed out of self-driving cars. Just as it always has been, progress is subjective.
Comments
At this point we might as well slap L plates on the autonomous vehicles and treat them like learner drivers, since there’s no way the car will get it right first time.
Need some A plates haha
Slightly off-topic but related – when the news broke that the last male northern white rhino had died and people suggested science could bring the species back, scientists pointed out that reviving an extinct species isn’t easy or efficient, because we still haven’t fully understood human biology, let alone that of another animal species.
It’s somewhat similar with autonomous cars. Teaching a car to perceive what the human brain perceives is very hard when we ourselves don’t understand how the human brain works well enough.
Next we have the cameras and sensors. Sure, the LIDAR system is very advanced, but even that managed to miscalculate a human with a bicycle and some bags hung on it. So did the sensors. Then we have these cameras, which to me look like a college project approach. A camera of the sort usually found in a mobile phone or DSLR, stuck on to a car? Really? Those things have very limited resolution compared to what the human eye can see (the equivalent of roughly 576 megapixels).
I’m not suggesting that we abandon it, but rather take a less cynical, more logical approach to solving the problem. Rather than marketing it as a breakthrough in the name of competition, I’d rather the researchers fully study what the human brain sees, and how it understands what it sees, and then replicate that in a system. Perhaps that would be a more logical approach.
Why downvotes?
Also, as a guy who writes code and programs, I’ll be honest with all of you. There is no such thing as bug-free software. Every piece of software has glitches and bugs, some smaller, some bigger. But if those bugs can kill somebody, and there IS NO WAY of writing bugless code, there’s one option left: use the human brain, not a computer, to operate our cars.
The resolution is not necessarily the main issue in this situation; it was the camera’s low-light capability. In low light, something like a 1080p or 4K resolution would make no difference to perceived image quality, because in low light the human eye relies on rods, which have low visual acuity (the ability to define objects clearly). This is why teddy bears in a darkish room were so damn scary when you were a kid; the rods are so bad that in extreme cases it isn’t immediately obvious whether an object is moving or not. So to say the camera is insufficient because of its resolution probably isn’t true. While a higher resolution can’t really hurt, it probably wouldn’t help much either. A better solution would be a sensor that detects something other than visible light; that would be a lot better for mapping out the path ahead than relying on a camera.
This is why level 3 and 4 driverless technology has NO place on our roads. We cannot count on a human to react to situations when needed. Until we can perfect full level 5 autonomy (probably at least 20 years away), stick with level 2 please. Use your 86 billion neurons.
Nothing new there lol it’s a long long long long time away yet.
So in the future when all cars become autonomous….. will there be such thing as a driving licence anymore? Will humans have to take a driving test?? #MindBlown
All car mags seem to have taken this opportunity to shit on autonomous vehicles. The woman was an idiot for crossing a dual carriageway in front of oncoming traffic in pitch black wearing nothing bright or reflective that would’ve identified her to either human or machine.
Thank you. It seems like no one sees this fact. She crossed at night in the middle of the road wearing dark clothing. I doubt a human could have seen her, let alone a robot.
Well, to be frank, what you see in a dashcam video isn’t necessarily 100% what you’d see with your own eyes.
I’ve watched it multiple times. In my personal view, if the driver had been looking at the road a few seconds before the collision, there may have been an opportunity to apply heavy braking and steer away from her. Emphasis on the “may”, because many factors are at play.
To me, the collision was likely unavoidable, so I’d look for any way to mitigate the impact as much as possible. Steering away while applying full brakes would probably have helped.
Shhhhhh….. Be quiet. If we let people be stupid, this will be a hurdle and a setback for self-driving cars. We don’t want them to know that a human couldn’t have stopped in time to avoid the collision either.
Of course, it’s inevitable. Any new technology has its flaws. Imagine the first time people tested cars. The flaws do get solved, and that’s why we have them.
I just wish that autonomous cars never happen. I may write a post about it today.
Comparisons between autonomous car crashes and the general population of human drivers are not fair comparisons.
Expert drivers have been involved in the programming of autonomous car computers, yet the software still relies heavily on human-orientated assists like ABS and ESC, just like an average driver. People who have been trained to a higher level, like police officers, are almost never involved in accidents that require stopping and/or steering quickly.
When people are trained to drive properly, they are superior to computers. The answer to reducing accidents is driver training, not a (job-stealing) computer-driven car.