Autonomous Cars

Foxbat

If you put aside the arguments/debates on how it will affect employment, I think autonomous cars could drastically cut down on traffic deaths once the software and hardware are sufficiently evolved.

However, I was watching a recent episode of BBC's Click about them and it left me a bit perturbed.
Here's why:
In the article, they covered the death of a pedestrian hit by an Uber car in Arizona. Much of the blame lies with the 'safety driver', who was busy watching a TV show on her phone and failed to spot the pedestrian in time. But there's more. Comparisons between the autonomous car's footage and normal dashcam footage taken on the same bit of road at the same time of night showed the Uber footage to be significantly below par: the viewing distance was much greater with the standard dashcam. Why? Are Uber using sub-par components, perhaps to cut costs?

And there's more. Uber used a Volvo that had its own anti-crash software/hardware, but it had been deliberately disabled by Uber because it conflicted with their own system, causing the car to jerk. Was this a case of the need for performance overriding safety?

Here's a question for the lawyers among us. If I buy a car and deliberately disable the anti-crash system, then go on to kill a pedestrian, how liable am I? My own conscience tells me that I would be 100% liable but maybe that's just me.

Astonishingly (to me), Uber have been found to be not criminally liable for the woman's death. I would expect the safety driver to suffer the consequences of her negligence but I would have thought Uber must also bear some responsibility for disengaging a car manufacturer's safety system.

I am left to wonder if there is something political going on here - perhaps an attempt to keep the autonomous car industry out of the courts and therefore in a good light with consumers.

Whatever happens, many people are unhappy with the notion of driverless cars, and this won't help win them over.
 
If you actively defeat the safety systems of anything, then I would think that you are responsible for any outcome of the use of that item.
Now, whether you can sue the manufacturer for not making their system “hack proof” is another question. :confused:

For me the Safety Driver is to blame.
They were employed for the exact purpose of being able to take control in case the car’s systems were not able to cope. They chose not to perform their duty.
People have tried and failed to sue Tesla for crashing their car while it was in autonomous mode. Tesla pointed out that the driver still has to be ready and able to take control if needed. So you can’t get into a Tesla drunk and get it to take you home.

Also, I doubt that any autonomous driving system is using just visible light: smoke and fog [and even rain and snow] could render it useless. We had an autonomous car demo’d at work and it was using all sorts of interesting wavelengths to see near and far [different frequencies for fidelity and bandwidth] and even around corners [but I didn’t really understand that bit; it had to do with Doppler shifts, echoes and AI].
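For the Doppler part, here's my own back-of-envelope sketch of the basic idea (the carrier frequency and shift below are assumed figures, nothing to do with that particular demo): the sensor can read a target's closing speed straight off the frequency shift of the echo.

```python
# Back-of-envelope sketch: recovering a target's closing speed from the
# Doppler shift of a radar echo. The 77 GHz carrier and the 5.1 kHz shift
# are assumed figures, not taken from any real system.

C = 3.0e8  # speed of light in m/s

def closing_speed(carrier_hz: float, doppler_shift_hz: float) -> float:
    """Radial speed of a reflector, in m/s.

    The wave makes a round trip (out and back), which doubles the shift,
    hence the factor of two: v = c * df / (2 * f0).
    """
    return C * doppler_shift_hz / (2.0 * carrier_hz)

print(closing_speed(77e9, 5.1e3))  # ~9.9 m/s (about 22 mph) towards the sensor
```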

As for full autonomous cars for the masses, I think they are decades off. The technological hardware might be there but the software isn’t. Someone will have to decide on the ethical values required to drive a car through crowded city streets. Given no other options, does it kill three elderly people? Two children? A pregnant woman? Or crash, killing its passenger? Humans, for good or bad, can make that judgement; a machine will have to be told what to do, or at least learn what is acceptable to its passengers and society.
They already do this for disasters, weighing up the “cost” of human life. Who is "worth" saving...

Car ownership as the norm for most people is probably coming to an end. I think universal ride sharing and limited-use autonomous cars [taxis around purpose-built environments] will come sooner, if they are not already here. Already I know of apartment blocks that have no dedicated parking but a few shared [electric] cars that can be booked ad hoc. Military use? I can see that in the next decade.
 
I have just ordered myself a new Seat Ateca on the Motability scheme (for those not in the UK, this is a scheme to provide transport for disabled people; it takes the form of a three-year lease with all costs covered except for fuel, in exchange for a disability money allowance).

The version I'm getting has traffic jam assist (which controls the steering, brakes and throttle at low speeds and brakes automatically if a pedestrian steps out), a lane-keeping assistant (which stops you drifting across lanes), rear cross-traffic alert, a blind-spot monitor, traffic sign recognition and adaptive cruise control. It is not a self-driving car, but it seems to me to have all the benefits with none of the drawbacks, assuming this stuff actually works.

I'm currently driving a 15-year-old rust-bucket, so it will seem like total luxury to me, whatever!
 
At this point in the development of driverless vehicles, a pedestrian death is seen as big news. What is not covered is how many times the system used by this particular car recognized pedestrians and reacted as designed. I also wonder how many pedestrians were injured or killed by vehicles being operated by humans during the same period.
Driverless cars are coming. My own car is just a Prius, but it detects vehicles, objects and pedestrians in all directions, maintains a standard vehicle following distance, warns and corrects for lane deviations, parks itself and brakes if an imminent collision is indicated by speed and distance.
Driverless accidents are big news now because the technology is still developing. In a decade or so, a driverless car accident will be even bigger news because it will be so rare.
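For anyone wondering what "maintains a standard vehicle following distance" actually involves, here's a minimal sketch of the common time-gap approach. The two-second gap and the gain are my own illustrative assumptions, not Toyota's real tuning.

```python
# Minimal sketch of the time-gap rule behind adaptive following distance.
# The two-second gap and the 0.25 gain are illustrative assumptions only.

def acc_target_speed(own_speed: float, gap_m: float, lead_speed: float,
                     set_speed: float, time_gap_s: float = 2.0) -> float:
    """Speed (m/s) the cruise controller should aim for."""
    desired_gap = own_speed * time_gap_s           # stay ~2 s behind the car ahead
    gap_error = gap_m - desired_gap                # positive = more room than needed
    follow_speed = lead_speed + 0.25 * gap_error   # gently close or open the gap
    return min(set_speed, max(0.0, follow_speed))  # never exceed the driver's setting

# Doing 30 m/s, car 50 m ahead doing 28 m/s, cruise set to 33 m/s:
print(acc_target_speed(30.0, 50.0, 28.0, 33.0))  # 25.5 -> ease off to open the gap
```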
 
Lots of good points.
I totally agree that the safety driver in this case is most responsible, but defeating an original safety system and replacing it with one that is essentially still unproven appears reckless to me. I would argue it contributed to the death: if the original safety system had not been defeated, it might well have kicked in and saved the pedestrian's life even if the safety driver had still not intervened by that point.

I also agree that any autonomous system must be using more than visible light - which makes it even more staggering that this pedestrian died in the dark. What happened to all those other detectors?

Finally, Uber has consistently failed to reach its target of 13 miles without a driver intervention. Two other companies were mentioned in the article: one had reached over 500 miles without the need for driver intervention and the other over 5,000. This speaks volumes about Uber's system and why I suspect they may be doing it 'on the cheap' or cutting corners.
 
I am left to wonder if there is something political going on here

I guess when we go ahead with something without knowing everything about it, that means compromise of some kind, which can be a good definition of political.

Unlike when people inflict injury on another person, cars have always attracted a different level of responsibility when they inflict injury. It seems to start from the position that either the car was at fault, so no person is at fault, as if it were nothing personal, or the car is responsible and the driver isn't so much responsible for what happens. Both of which leave the supposed fault in the pedestrian's court: wearing dark clothes, not watching what they were doing, walking in the road. It can also be more of a crime to leave the accident scene than to cause the accident itself.

Innocent until proven technologically guilty is AI's defense, and that is probably harder to prove. The program was trying the best it could, unlike people with no AI, who might not have been doing the best they could.

In the world of business it is usually harder to assign blame to a non human entity than to a human. When blame is assigned, it can be the result of a person's decision that made that happen, and not the product itself, no matter how irresponsible the product appears to be.

I read an article that said calling car navigation and control systems 'Autopilot' is perhaps a mistake, or an attempt to imply a car's self-driving systems are safer than they are. AI will improve road safety eventually; right now the bugs are being worked out. Autopilot works well with aircraft, but the great distances between vehicles mask problems that haven't been solved once the density of interacting vehicles is greatly increased, as with car traffic. An autopilot also doesn't do much when the aircraft is taking off or landing, unlike cars, where that procedure is a good part of the experience.

It might be better to have two programs monitoring the situation when auto driving. A consensus decision could be better thought out electronically with true parallel processing, and it would also give redundancy and the use of different sensors. The problem is that at this time we can't smoothly combine the efforts of two independent guidance systems in cars. It is assumed the external AI can handle the car better than the way the manufacturers can program their vehicles. How true is that? If the AI can't interact with the way the car is designed, then perhaps the AI is not ready to use.
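A toy sketch of what that two-monitor voting could look like (the channels, sensors and thresholds are all invented here, purely to illustrate the consensus idea): two independent channels each decide whether to brake, and any disagreement resolves toward the safer action.

```python
# Toy sketch of the two-monitor idea: two independent channels each vote on
# whether to brake, and a split vote resolves toward the safer action.
# The sensors, thresholds and names here are all invented for illustration.

def radar_channel_brake(distance_m: float, speed_ms: float) -> bool:
    # Brake if time-to-collision drops below two seconds.
    return distance_m / max(speed_ms, 0.1) < 2.0

def vision_channel_brake(obstacle_seen: bool, distance_m: float) -> bool:
    # Brake if the camera reports an obstacle within 30 metres.
    return obstacle_seen and distance_m < 30.0

def consensus_brake(radar_vote: bool, vision_vote: bool) -> bool:
    # Agreement is acted on directly; disagreement is treated as a fault
    # in one channel and resolved toward braking, the safer option.
    return radar_vote or vision_vote

speed, distance, seen = 20.0, 35.0, True
print(consensus_brake(radar_channel_brake(distance, speed),
                      vision_channel_brake(seen, distance)))  # True: TTC = 1.75 s
```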

When I see videos of robots walking and performing tasks, I sometimes wonder how much of the processing is done within the machine versus what it can send to a more powerful processor located outside the robot's body. I guess there are rules for that. It does bring up the idea that the car's AI is relying solely on real-time reaction and not on past experience of related events. Those results can be programmed in for automatic response, but I don't think the car's AI is mulling over past experiences and deciding what is best; instead it is just reacting. I think medical computer programs, by contrast, make heavy use of past experience when deciding what to do next and making recommendations.

When there is heavy traffic, high speed and bad weather, the outcome can rest solely on the assumption that nothing bad will happen; it then becomes just a matter of chance whether anything out of the ordinary occurs. In bad weather the car's AI has information about the location of the road from past navigation experience. If it doesn't have the prior layout of the road and the precipitation is too heavy, the AI car will pull over to the side of the road, or attempt to, until it can "see" again. Can it be fooled by the deep puddle that looks flat because the road drops off, or does it have some kind of real-time elevation detection running all the time?

As time goes on and cars start receiving information from the roadway, traffic signals and other external situation reports, voluntary overriding of auto driving could, in some areas, become a crime in itself. It would be amusing if, when a road sign has a speed set on it, no car could exceed that speed whether the AI was driving or not.
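Something like this, perhaps (the message format and names are invented for illustration): the sign broadcasts its limit and the car clamps whatever speed is requested, human or AI.

```python
# Hedged sketch of the broadcast speed cap: a roadside sign transmits its
# limit and the car clamps any requested speed to it, whoever is driving.
# The message format and field names are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SignBroadcast:
    road_segment_id: str
    limit_ms: float  # legal limit in m/s

def clamp_speed(requested_ms: float, sign: Optional[SignBroadcast]) -> float:
    """Never exceed a broadcast limit; with no signal, trust the request."""
    if sign is None:
        return requested_ms
    return min(requested_ms, sign.limit_ms)

# Driver (or AI) asks for 31 m/s on a stretch capped at 26.8 m/s (~60 mph):
print(clamp_speed(31.0, SignBroadcast("A90-NB-12", 26.8)))  # 26.8
```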
 
If I buy a car and deliberately disable the anti-crash system, then go on to kill a pedestrian, how liable am I?
My car has this, and various other, safety systems. The car's manual is liberally sprinkled with statements** such as the one that comes with Front Assist (my car's radar-controlled warning*** and emergency braking system):
■ The system only serves to support and does not relieve the driver of the responsibility for the vehicle operation.
■ The system has physical and system-related limitations. For this reason, the driver may experience some undesired or delayed system responses in certain situations. You should therefore always be alert and ready to intervene!
■ Always adapt your speed and safety proximity to the vehicle ahead to the current visibility, weather, road and traffic conditions.
■ The increased passenger protection afforded through the system must not tempt you to take greater risks than otherwise – risk of accident!
■ The system does not respond to crossing or oncoming objects.
Okay, this is the manufacturer trying to avoid any blame, which it's squarely placing (correctly IMHO) on the driver.

Note that I too have problems with Front Assist: on approaching a roundabout on a road with multiple lanes, the system will warn -- and even brake -- if I'm driving along a lane that is itself clear but, because the road curves to the left as it joins the roundabout, it assumes that I am driving towards one or more stationary or slow-moving cars that lie across my path. I can't steer to alter this: I would either hit the kerb or crash into cars to my left. Despite this, I have not felt the need to disable the system. After all, it helps to protect me from, for example, a car pulling out of a side road -- or, if it's coming the other way, turning right across my lane and into a side road -- when the driver has overestimated the time and space they have to make the manoeuvre.

That Uber's software cannot cope with systems that it knows are on, and should be functioning on, the car indicates to me that they should really not be using such obviously poorly specified software on cars that are driving on public roads.


** - That last bullet point is interesting: I don't know if this applies to the system that Volvo uses.

*** - It's also used for the adaptive cruise control, where, without there being an emergency, it adjusts the speed of the car based on the speed and proximity of the car in front.
 
There's a major flaw with self-driving cars.

They require someone to be behind the wheel. Not driving, but constantly alert. Ready to shift at any moment from fiddling with their telephone or playing I, Spy to immediately taking evasive action, or slamming on the brakes.

That's far harder than actually paying attention when driving oneself. It's a massive drawback, and one that makes me think the utility of self-driving cars has to be heavily compromised.
 
There's a major flaw with self-driving cars. They require someone to be behind the wheel.
The problem with them is that they won't require someone to be behind the wheel. At the moment, if I am crossing a road where there are cars apparently parked across** the road from me, I can tell whether they may be about to move because they have someone sitting in the driver's seat.

A future fully autonomous car may not have anyone in the car at all and yet, if it was, say, an Uber cab, it might decide to pull out at any moment. Unless it is required to signal the manoeuvre well in advance, I could find myself approaching a gap between that car and the car in front of (or behind) it, and crossing the road between me and that gap, both of which are about to cease to exist, with less than optimal consequences.


** - Let's pretend for a moment that no-one crosses the road from between a couple of parked cars.... :rolleyes:
 
In the not-so-distant future, autonomous vehicles will no longer need to anticipate the moves of vehicles being driven by humans.
All vehicles will be self-driving and networked. Each vehicle will "know" the exact positions and programmed moves of all the other vehicles on the road. Drivers will be restricted to racetracks and antique parades.
Traffic signals will be eliminated. Speeds will increase dramatically. Accidents will become statistically insignificant.
If I am so fortunate as to live long enough, I look forward to calling a car, telling it where I want to go, darkening the windows and taking a nice nap on the way to my destination.
 
In the not-so-distant future, autonomous vehicles will no longer need to anticipate the moves of vehicles being driven by humans.
Even here, in highly urbanised England, we have wild and untethered animals roaming about the place (e.g. deer and even horses).

And then there are the lesser spotted cyclists and pedestrians (who do not have jay-walking rules to obey), not to mention their dogs (and yes, I do mean cyclists, as well as pedestrians, roaming about the place accompanied by their pet pooches).
 
Even here, in highly urbanised England, we have wild and untethered animals roaming about the place (e.g. deer and even horses).

And then there are the lesser spotted cyclists and pedestrians (who do not have jay-walking rules to obey), not to mention their dogs (and yes, I do mean cyclists, as well as pedestrians, roaming about the place accompanied by their pet pooches).
OK, detection technology for pedestrians, pets, cyclists and wildlife will need to be perfected before autonomous vehicles can rule the streets and highways. Maybe all but the wildlife could be chipped to add them to the network and underwrite their safety.
 
It is one thing to have a power failure for objects that are stationary, like buildings, but what happens to large masses of moving traffic when the internet burps, suffers from latency (a term that makes self-driving car manufacturers shudder) or fails outright? What does that do to all those vehicles relying on electronic contact instead of good old-fashioned line of sight and a brain that can adapt to whatever it is seeing at a moment's notice? That doesn't even take hacking into account. How does such a system react? Does it slam on the brakes for everything moving at that moment? Slowly bring everything down to a stop as it shifts every moving vehicle over to the sensors riding on each car? Or do we let the manufacturers assume that is never going to happen?
 
OK, detection technology for pedestrians, pets, cyclists and wildlife will need to be perfected before autonomous vehicles can rule the streets and highways. Maybe all but the wildlife could be chipped to add them to the network and underwrite their safety.
I can see there is a bit of a problem in convincing Everyone that they need to be Chipped so "the machines always know where you are..."
If someone wants autonomous cars to take over, then it is up to them to make the cars safe from me and not make me safe for them.
Limited automation I can see coming soon. Didn't Mercedes trial drone trucks on an autobahn earlier this year? One truck with a driver leading six autonomous cargo trucks, if I remember...
 
And then there are the lesser spotted cyclists and pedestrians (who do not have jay-walking rules to obey), not to mention their dogs (and yes, I do mean cyclists, as well as pedestrians, roaming about the place accompanied by their pet pooches).

As a keen cyclist myself, I am often horrified to see other cyclists with their dogs tethered to their waists. It would only take the dog sniffing a deer and taking off for the cyclist to be pulled out of the saddle and probably suffer some injury. One thing I've learned over the years of cycling is that it is not a relaxing pursuit. You need to be vigilant and on your guard at all times. This probably applies to all modes of transport, including the autonomous kind. But some folk will never learn...

So you can’t get into a Tesla drunk and get it to take you home.

This made me wonder. Instead of a charge of Driving whilst Under The Influence, could we, in the future, see a charge of Not Driving whilst Under The Influence :D
 
In the not-so-distant future, autonomous vehicles will no longer need to anticipate the moves of vehicles being driven by humans.
All vehicles will be self-driving and networked. Each vehicle will "know" the exact positions and programmed moves of all the other vehicles on the road. Drivers will be restricted to racetracks and antique parades.
Traffic signals will be eliminated. Speeds will increase dramatically. Accidents will become statistically insignificant.
If I am so fortunate as to live long enough, I look forward to calling a car, telling it where I want to go, darkening the windows and taking a nice nap on the way to my destination.

My daughter works on a project at Coventry University developing such a system, where the cars 'talk to each other' continuously. The problem is that there is little communication between the manufacturers to come up with a standard protocol.
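To give a sense of what a 'standard protocol' would need to pin down, here is a hypothetical minimal message (field names and encoding are invented; real cooperative-awareness messages carry broadly similar position, speed and heading data):

```python
# Hypothetical sketch of the kind of message a standard protocol would have
# to define before cars from different makers could 'talk'. Field names and
# the JSON encoding are invented for illustration only.

import json
from dataclasses import dataclass, asdict

@dataclass
class V2VStatus:
    vehicle_id: str    # pseudonymous identifier
    timestamp_ms: int  # when the reading was taken
    lat: float
    lon: float
    speed_ms: float
    heading_deg: float
    braking: bool

def encode(msg: V2VStatus) -> bytes:
    """Serialise for broadcast; a real standard would fix the byte layout too."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(raw: bytes) -> V2VStatus:
    return V2VStatus(**json.loads(raw.decode("utf-8")))

m = V2VStatus("GB-0001", 1_700_000_000_000, 52.4068, -1.5197, 13.4, 90.0, False)
assert decode(encode(m)) == m  # any make of car could parse any other's message
```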

Considering the wealth of regulation that manufacturers have to pay regard to before a vehicle is allowed on the road (seat belts, crumple zones, emissions etc.) I'm surprised that there isn't a world-wide body controlling and testing driverless cars. Perhaps there is?
 
My daughter works on a project at Coventry University developing such a system, where the cars 'talk to each other' continuously. The problem is that there is little communication between the manufacturers to come up with a standard protocol.

In the Click article, this was touted as one of the big advantages - that autonomous cars all over the world could learn from each other. Almost like Skynet on wheels.
 
If manufacturers cannot even agree the basic interface by which their cars might communicate, how are they going to agree the interface for all the very complex data that would be required to allow such "learning"? And who/what exactly is determining what the lessons to be learnt are?

Even if both those issues could be solved (although I'm not sure how they, particularly the latter, could be), why would the authorities responsible for safety regulations allow such behaviour-altering data to be exchanged between all those individual cars? It isn't the cars on the roads that should be "learning", but those in the testing labs.

I'd rather updates to the cars' software and data were thoroughly tested before being uploaded (randomly?) into a car already travelling on a road, particularly a car that was, because of the supposed superiority of its operation over that of a human, driving much closer to other cars than a human driver ever should.

And what if different manufacturers' cars "learn" different lessons based on the new data they receive (assuming that they're ever in a position to receive it), or react differently even if their interpretations of that data are similar?


TL;DR: The road is not the place to be doing alpha or beta testing.
 
