I try to keep up on autonomous vehicle (“AV”) development, as it interests me and because my son Ed is quite involved (he’s Communications Director of PAVE, the Partnership for Autonomous Vehicle Education). This video came up on his Twitter feed this morning, and I decided to check it out.
JJRicks, who has documented a number of Waymo rides on YouTube, had his driverless taxi go rogue, and it’s all documented in this video, the relevant part starting at 12:00. It’s pretty embarrassing, both for the technology as well as the poor rider support, which was not only unable to get the taxi back on course, but actually put it into a much more exposed and potentially dangerous situation, blocking both lanes of a parkway because of some construction zone cones. Why?
There are two very distinct approaches to driver assist and autonomous vehicle systems: everyone else’s, and Tesla’s. Waymo’s approach (like most others’) is based on a “stack” of rather rigidly applied, pre-programmed functions, and is heavily dependent on high-definition maps and geo-fencing (limiting operation to certain geographic areas that are well mapped and documented). Tesla’s very ambitious approach is based on the assumption that AI (artificial intelligence) will steadily improve, and its system is designed to work in as many situations as possible, a set that is expanding rapidly.
What happened here with this Waymo taxi is that it encountered a blocked lane (cones) to the right, and it refused to make a right turn using the available left lane. This happened because it’s still utterly unable to cope with an unexpected event like this. Waymo, which operates in Chandler, Arizona, has to constantly update its system for construction zones like this, but apparently this one was not input for one reason or another (there is a specific reference to that situation by a Waymo person in the video).
Eventually the Waymo is coaxed to make the turn, but then stops at the first cone it encounters. Frankly, this is pretty embarrassing, as dealing with cones (moving into the other lane of open traffic) is considered pretty elementary. I did not realize just how dumb Waymo Driver still is, and that every construction zone has to be entered into its software so that it knows what to do when it encounters it.
And all this is happening in Chandler, Arizona, a suburb of Phoenix noted for its very easy grid of streets and highways. How far would a Waymo taxi get in New York or New Jersey? And Waymo Driver has consistently been held up as the most advanced AV system.
It all points out that AV development has quite a ways to go, and that the “trough of disappointment” is a lot deeper and wider than may have been imagined.
I accept that Tesla’s approach has involved over-promising, as it’s essentially a moon-shot. And the whole AV industry (and some regulators) strongly disagrees with it, and denounces it at every opportunity, as in every crash involving a Tesla, whether Autopilot or FSD (“Full Self Driving”) was actually a factor or even turned on. It’s consistent with Silicon Valley’s “break things and fix them later” approach, which does inevitably involve collateral damage of one sort or another. But in the long, long run, it does make me think that Musk is right in his vision that AV systems can and need to learn to cope with essentially any situation, rather than depend on being told in advance that there’s going to be a closed lane because of construction.
So if you’re worried that they’re going to come and take the keys away from your car and force you to ride in an autonomous pod, relax. It might be quite a while.
How many more people do self driving vehicles have to kill before they’re banned ???
And folks, here is the nonsense of why this will never work. How would it handle a police blockade, with an officer temporarily redirecting drivers into the oncoming lanes of traffic while another officer holds up the oncoming traffic?
Imagine 100 million cars out on the roads doing stuff like this. I recall doing pilot test runs on production processes that never really worked in real time. This is a sobering reminder as to the outcome of our historical pilot tests.
Tesla is in effect doing its R&D on your dime; if something goes wrong while you’re on “auto pilot” it’s your fault and your expense, and they will not want to know you. If somebody like GM or Ford was doing this it would be front page news.
Nobody is forcing any Tesla driver to actively participate in their research project.
It’s made quite clear to anyone purchasing a Tesla that the driver is still required to be aware and ready to take over when on either Autopilot or FSD (beta). It also reminds you of this when you’re moving and engage it. No different than using adaptive cruise control coupled with lane keep assist in any other modern car. Except in those, there is zero warning or reminder to pay attention beyond a bright flashing light on the dash and perhaps a loud beep when a crash is basically imminent because the driver has stopped paying attention.
GM’s Super Cruise (offered mainly in Cadillacs) is very similar to the Autopilot function as well.
I’m not the smartest guy in the world but it doesn’t take too much brainpower to understand that the tech is not ready to let it drive me. Or at least I’m smart enough to realize that when I go out, I don’t want it to be due to a car that had me acknowledge that it isn’t perfect.
It’s always possible to cheat the system, whatever it is. Someone determined to do so will find a way. Whether it’s some idiot trying to drive from the back seat or someone wedging a stick between the seat of their 1972 Beetle and the accelerator pedal, there’s always a way to try to shorten their own and everyone else’s lifespan.
Good. I don’t ever want this to succeed. No matter what they tell you, this will eventually be used by someone to control your ability to get places and to know exactly where you have been.
They already know where you’ve been. You used an Android or Apple phone to navigate there.
…and posted the details of where, when, with whom, what bought, etc on Fecebook and Twitter…
Just like a garden variety yellow cab, to say nothing of Uber or Lyft, who have the ability to track you, see where you’ve been and based on your location as well as the restaurant and shop locations you’ve been to/at, should easily have the ability to “adjust” your personal pricing to a level you’ll be happy to pay.
Nobody using a phone, credit card, laptop or computer via Wifi etc should have had any real expectation of privacy for at least two decades. Who needs cameras (which are in all of the devices already anyway)? It’s the modern cost of not living in a cave or the 1950s.
Automated cars are nothing in the grand scheme of things, one day they may get there but it’ll still be a long while. What’ll tip the scale eventually are insurance rates, when/if the automated cars are good enough to consistently beat humans for safety and lack of incidents, rates will turn lower than drive-your-owns and people will use them. “Discounts” are already being offered for those that allow their insurance company to track their driving habits.
In other words, “autonomous” means the real driver is a committee of humans a thousand miles away, who have to be notified when something is wrong, then have to deliberate and make a decision based on share value and potential litigation. Sort of like passing the steering column through a jury.
There are so many “what ifs?” about the whole self-driving pod concept that I figured it would have died years ago, much to the chagrin of the legal profession. In 10(?) years we could have a culture of texting “drivers” who, if called upon to “Emergency – take over NOW!”, would have the skills of a 16-year-old on day one of driver training school. So, just a few concerns: What happens to all those situations where you would wave “go ahead” to a driver trying to enter a row of traffic that’s waiting for a light to change? Will every destination’s address have to be keyed in or spoken before the pod will move? What happens in the snow? The list goes on, and on. I’m almost happy that I hit the big eight-zero last month, and will no doubt have the car keys removed from my cold, dead fingers long before the pod-people and AI take over the planet.
This system, which requires some kind of notification of every change made to every road is a dead end. Programmers may live in the kind of logical, sequential world where every possibility can be planned for and everything is done in its proper way and sequence, but none of the rest of us do. This system reminds me most of a centrally-planned economy. Those never work because no one person or group of people can ever obtain enough good information to outperform millions of autonomous folks making individual decisions that make sense for themselves and those around them.
I hold out much more hope for the Tesla system. This “command and control” kind of approach may work in specific places but it is never going to be suitable for “out there” in the wild. There is just too much variety in the way we humans live our lives and conduct our affairs for any group of programmers or data entry techs to stay ahead of us.
It seems like Waymo’s system is based on the logic used for driverless trains. On the plus side driverless train operation is a century old, on the minus side it thrives in the mostly static and rigidly controlled world of rail and not in the endlessly varying world of the streets.
Tesla may have the right vision, but at the moment it struggles to meet even South Florida octogenarian standards of driving.
This video would be funny if it weren’t so SCARY.
That guy in the video was pretty calm sitting in a car positioned between two live lanes. Had an errant tractor trailer or inattentive pickup driver come along, the outcome could have been disastrous.
I think I would have gotten out of there as quickly as possible.
I’d rather have:
https://youtu.be/9ZtBmqxo3TA
Once the Singularity arrives, it will drive great. It may not agree with where you want to go, however.
“I’m afraid I can’t let you go there, Dave.”
Good, reliable cruise control is all I need for the trips I take. The rest is me driving the car. Safer that way.
By now everyone (except those in the deepest of thrall and the denial it brings) knows there’s an enormous amount of – aspirational, disruptive, paradigm-shifting big thinking – lying by wealthy sociopaths, egomaniacs, and techbros who really shouldn’t be anywhere near decisions affecting public safety (wealth tends to insulate one from the consequences of one’s actions, which is why I mention it).
The programmers-manually-add-exceptions-for-each-individual-construction-zone approach sounds clumsy; it is, and it’s a surefire recipe for exactly what happened with this Waymo taxi.
But “artificial intelligence” and “machine learning” are nowhere close to being the antidote or the better way; those terms belong with “Full Self Driving” and “Autopilot” on the list of misnomers. There isn’t much real difference between the Waymo approach and the Tesla approach; in effect, “artificial intelligence” and “machine learning” are little more than a glossier paint job on the same process of building and amending lookup tables and if-then conditional rules. Machines don’t actually learn; for the foreseeable future, “artificial intelligence” is absolutely nothing of the sort. It doesn’t even mimic intelligence, it just does things that look like some aspects of intelligence. It has no intuition, no capacity to reason or understand. It has no malice, no ill will, no short temper, true. And it likewise has no empathy, no compassion, no goodwill, no long temper, no social skills (built in the image of its makers?) and none of the other aspects of actual, real, human-type intelligence.
Autonomous cars will eventually solve or at least lessen some old problems, and they’ll also bring some new ones. In the long run they’ll mean fewer crashes, deaths, injuries, and less property damage; the human driver is by far the main cause of all those. But we’re in the very early stages of a long, steep climb, and we’re all comfortable with the (awful) status quo, and many of us are uncomfortable with relinquishing even the illusion of control, so incidents get misused as fodder for clamour to dismiss or ban the technology entirely—just as when the motorcar began to gain traction (“Get a horse!”).
And we can’t wave a magic wand and have all vehicles be highly capable autonomous ones tomorrow; there’ll long be a mix of human-driven cars in traffic with cars having a range of autonomous capabilities. There are thorny ethical questions (should the AV prioritise its owner’s safety or that of the greatest possible number of people?), serious new concerns (data privacy, hackability, who actually has authority to direct the car…), and complicated structural changes (the chain and hierarchy of liability). There will be disruption (auto insurance industry, parking and traffic ticket revenues). There will be unforeseen issues and unintended consequences.
In the long run the problems will probably be fewer and lesser than they are today. None of us wants to go back to typewriters or to rotary-dial wired phones, or to horses as primary transport, and eventually none will long for the good ol’ days of more traffic violence.
But eventually won’t be soon.
It has no intuition, no capacity to reason or understand. It has no malice, no ill will, no short temper, true. And it likewise has no empathy, no compassion, no goodwill, no long temper, no social skills (built in the image of its makers?) and none of the other aspects of actual, real, human-type intelligence.
Maybe that’s more of a net positive than negative, for driving anyway?
As to the term “Autopilot”, it was used by Chrysler for a number of years for their cruise control. And nobody thinks airplanes fly by themselves without pilots. Tesla may be guilty of overselling their various driver assist features, but the term Autopilot is not the real problem or issue.
I like how you distilled all of the business-speak down to the basics.
Lying.
“In a time of universal deceit, telling the truth is a revolutionary act”.
I read somewhere that this Orwell quote from “1984” didn’t actually appear in the book.
I have the book, but don’t feel like reading all of it again just to be sure.
But the book definitely says that sex can be a revolutionary act.
So there’s that then.
This autonomous driving fad is such a waste of time, technology and dollars.
Maybe someday the tech will get there. But for what purpose? Safety you say? How about better training for drivers? I drive to drive and if I don’t want to I can take the bus, train or fly.
Last week I volunteered at a COVID vaccination clinic. I was assigned to parking lot duty; it was a cramped middle school parking lot with nose-in spaces, two-way traffic, and some parallel spots for large vehicles. I probably observed and helped guide at least 100 vehicles. I was shocked at the general inability to fit a car within the lines, turn the wheel the right way to swing wide or cut in, etc., not to mention the willingness to leave cars idling for several minutes while drivers filled out the paperwork. I think even the current Waymo might do better than many of these folks. By the way, several contractors who arrived in long-bed double-cab duallies easily parked their rigs more neatly than some others’ efforts with a Yaris, Cavalier, Porsche Macan or BMW X4 in the same size spot. The two Teslas (one a Model X, quite large) parked quite neatly; either good drivers or assisted by their parking aids.
Well, maybe I’m one of the few who believes the vast majority of drivers will benefit greatly from autonomous/assist technology. Every year we still have thousands of accidents resulting in death and disability due to stupid human behavior. Perhaps the greatest safety invention after the seat belt will be intelligent cars. This scares many drivers due to its fast implementation, but commercial airline pilots have been subjected to automation, albeit gradually, over decades. Pilots are required to understand how an entire plane’s systems work. They manage them and have ultimate control and responsibility. However, some systems override human input to maintain control in emergency situations. Usually it works great. Other times, as with the Boeing 737 MAX, it ends up killing people. Many people like to think they are smarter than a computer, but when it comes to information retention and response times, computers kick our butts! However, when we look at creativity, invention and such, the brain still rules.
A great film about Go grandmaster Lee Sedol being beaten by a computer makes this pretty clear. Best to embrace change with caution, but declaring autonomous vehicles going nowhere is foolish.
https://en.m.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
Don’t fret; you’re certainly not alone, and the negative reactions in this thread are nothing new. We saw (and sometimes still see) similar rejection, scorn, sworn refusal, and mockery in re seatbelts, shoulder belts, and airbags; crush/crumple zones and bumpers that actually work; PCV valves, catalytic converters and no-lead gasoline, fuel injection, and electronic ignition; rectangular-rather-than-round headlamps, daytime running lights…antilock brakes…car radios…auto safety as a subject of regulation and engineering at all (see attached from a 1966 issue of Life magazine, as well as this)…the list goes on and on and on.
Most of the objections, then as now, fit into one of two categories. Some amount to [whatever] is poopy!, just reflexive kneejerks with no substantial thought behind them, or I don’t want/don’t like [whatever], maybe with some thought and experience baked in. Often these really mean Change upsets me.
Nothing wrong with these as far as they go; everyone’s entitled to their own opinions and preferences, and I certainly have my own lists of modern car features I would rather not have and old-car features I miss. These kinds of opinions are only problematic when overextended to [whatever] is poopy and it shouldn’t be available or I don’t want [whatever] and nobody else should be allowed to have it, either.
The other category is legitimate objections to real problems caused by [whatever]. Most of these, whether or not their exponents realise or admit it, are objections not to a concept but to an implementation. Bad seatbelts suck; good ones don’t. Bad daytime running lights suck; good ones don’t. Bad antilock brake systems suck; good ones don’t. Bad catalytic converters suck; good ones don’t. The same is true of ADAS and AV.
(I suppose a third category would be plain ol’ slippery-slope fallacy: allowing autonomous cars will lead to a killbot hellscape, etc.)
You’re almost certainly right that the vast majority of us will benefit greatly and in numerous ways from ADAS and AVs; see my comment in this same post earlier today.
I really hope AVs become reliable and commonplace by the time I’m old enough for the state to yank my driver’s license. Heck, I’d love to let my car drive the monotonous 1000-mile Midwest interstate trip I have to make six or more times a year now. Almost certainly it would make fewer errors than I make. And if it gets confused in a cone zone, I’d just grab the wheel myself.
Autonomous cars are the answer to a question most of us did not ask. If the goal is increased safety and fewer accidents, injuries and deaths, what makes the most sense: spending billions or trillions on technology that may never work and does nothing to improve the skill and ability of drivers, or investing significant but much less money in driver education and training that will improve skill and ability? Make getting a license a more rigorous process and require periodic retesting to keep drivers from developing poor driving habits.
It would be wonderful if someone could work up a flow chart of the comparative costs of accident/injury reduction of each approach. (I’ve never worked up a flow chart like this)
My guess is that education and training is going to cost a lot less. A whole lot less.
Technology is changing almost everything in our lives. If we want to revolutionize transportation we might be smarter to concentrate on one revolution at a time. We are working our way away from complete reliance on fossil fuels and internal combustion engines. Progress is slowly being made, but there is a long way to go, and there is going to be a long overlap of technologies in use. ICEs won’t go away completely, in my opinion.
I would like to see the changes in fuels technology better sorted and EV infrastructure up and running as far as range and reliability go and adjust to that new reality before I care to contemplate the implementation of another huge technological change.
It’s fine to discuss and research them concurrently but just dumping everything out there and telling people that this is the way it’s going to be won’t work. It’s overwhelming. The reaction is that everyone continues to buy a bigger SUV or pickup. That’s the pushback.
Unfortunately, though, your guess is wrong. Even very rigourous driver training doesn’t really help very much—certainly nowhere near enough to make a significant dent in traffic-related deaths, injuries, and property damages. That isn’t a guess; it’s been well and carefully studied. It would be lovely if people would just behave as they best should, but that’s not what people do, even highly trained ones. Even in places with extremely stringent driver training and licencing regimens, like Germany, human error is still overwhelmingly the most common cause of car crashes.
Maybe so, but I still think there is something to be gained from more and better driver education and training and for only a fraction of what is being invested in autonomous driving. We have pretty much given up on ANY real driver education and instruction. They will give a license to just about anyone with a pulse these days. American train and bus service is pitiful for a country with our wealth and resources. People are forced to drive just about everywhere whether they want to or not. If we had reliable transit solutions we might not even be having this discussion.
However, if they do get the bugs worked out there will be lots of people who will have no interest and never learn to drive and will never take any driving instruction.
Anyway I am straying from the main point which is going to send me down the rabbit hole again so I will stop here.
Part of the answer to my question is “Training and education is fine but there aren’t vast amounts of money to be made there as there is in pushing a technological tsunami that will mean the sale of a quintuple-shit-ton of expensive gear and the creation of more innovative investment vehicles, stock sales and the attendant manipulation that we are seeing now.”
Flying cars, jet packs, personal helicopters, automated houses, two-way wrist-radios, Magnetic Space Coupe, Magnetic Air Car, (https://www.dicktracymuseum.com/new-page)
There have been many technological promises made to us over the years. Some came true in unexpected ways, some partially came true and some failed to come true at all.
During the California gold rush of 1848 some folks made a fortune finding gold. Most did not. Many people made fortunes selling stuff to gold seekers. To the people that sell to the seekers of gold it makes sense to perpetuate the myth as long as possible that there is “Gold in them thar’ hills.”
Some interesting things will come out of this era but I don’t think it will be in a form we have been led to believe.
No thanks. This may be for some people, but not me. I thoroughly enjoy driving and always have. With over two million miles behind me, I am perfectly capable of handling virtually any situation I encounter, and have many times. My car drives great, but I don’t want it making my decisions for me. Just a few minutes ago I had to deal with my TV cable box going rogue; I don’t want that happening to a car I am letting drive itself.
Some day the technology may get there but I won’t be partaking of it. It may be just the ticket, though for people who hate to drive when that day comes.