Quote:
Originally Posted by 10023
That’s pretty easy though, isn’t it?
Part of the driving exam here (and maybe in the States, but it’s been a long time since I took that) is a computer “hazard perception” test. It shows various clips of what you’d see driving down a street, and you have to click the mouse when a hazard appears. One of those can be a ball bouncing into the street, which of course could be followed by a child, or a cyclist passing you on the curb side.
If that’s been pretty standard for probably 20 years, then I’m quite sure that Uber and Waymo have it programmed into their AI.
There are a near-infinite number of these situations, each with many alternative outcomes. No, it isn't simple for a machine, and it won't be until machines can really think like humans. It takes supercomputers to run all the possibilities in weather simulations. Driving is simpler, which is why single small chips can handle it, but it's not actually "easy", which is why the chips that do it are quite specialized and come from different sources than regular computer CPUs, for example. They seem to have more in common with graphics chips and specialized gaming chips.
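Just to put rough numbers on "near infinite", here is a toy back-of-the-envelope sketch in Python. Every category and count in it is my own invented assumption for illustration (no AV company publishes its scenario taxonomy), but even a handful of coarse dimensions multiplies out fast:

Code:
# Toy back-of-the-envelope sketch of how driving scenarios multiply.
# Every dimension and count below is an invented assumption for
# illustration, not a real taxonomy from any AV company.
from math import prod

scenario_dimensions = {
    "weather":        6,   # clear, rain, snow, fog, glare, ... (assumed)
    "lighting":       4,   # day, dusk, night, low sun (assumed)
    "road_layout":    8,   # intersections, curves, merges, ... (assumed)
    "actor_type":     10,  # pedestrian, cyclist, ball, dog, debris, ... (assumed)
    "actor_behavior": 12,  # crossing, darting out, stopped, erratic, ... (assumed)
    "occlusion":      5,   # none, parked car, truck, foliage, hill crest (assumed)
}

total = prod(scenario_dimensions.values())
print(f"Coarse scenarios from just 6 dimensions: {total:,}")  # 115,200

# Each added dimension multiplies the space, and the real variables
# (speed, distance, angle) are continuous, so hand-coding one rule per
# situation ("ball -> expect child") can never cover it all.

That is the gap between programming the ball-means-child case and actually covering the space of hazards.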
You are talking about the software, and I'm sure you're right that the companies doing this work have attempted to program all of this into their systems. I am actually talking as much about the hardware. But neither is easy to get right and flawless, and these systems need to be near flawless before they are deployed, because you can be sure the backlash will be tremendous if there is any significant number of accidents that are provably the fault of the autonomous vehicle.
So far, both hardware and software seem to be doing a pretty good job. But the jury on near flawlessness is still out. Consider this accident:
Quote:
TESLA AUTOPILOT LIMITATIONS PLAYED ROLE IN DEADLY CRASH, NTSB SAYS
SEPTEMBER 13, 2017
The National Transportation Safety Board has determined the probable cause of a May 2016 crash involving a semitruck and a Tesla Model S, in which the electric sedan drove under the truck's trailer, killing the driver. According to the agency, the "truck driver’s failure to yield the right of way and a car driver’s inattention due to overreliance on vehicle automation" were determined as the probable cause of the crash.
. . . the sedan . . . collided with a semitruck that was crossing a divided highway. The impact ripped the roof off the Tesla sedan, which continued to travel several hundred feet . . . . (having passed under the semi truck). Early on, Tesla's semi-autonomous Autopilot driver assist system was viewed as a possible contributing factor in the crash, prompting industry observers to speculate that the system may have misinterpreted the appearance of a trailer several hundred feet in front of it while engaged, mistaking it for a highway overhead sign. The first fatal accident involving the Autopilot system had cast suspicion upon the limitations and operation of the system, which uses radar and cameras to interpret the environment around it . . . .
Perhaps the most jarring finding by the NTSB, one suspected early on, was the fact that the Autopilot system could not identify the truck crossing the road directly in front of it.
Read more:
http://autoweek.com/article/autonomo...#ixzz598jn21uI
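For what it's worth, here is a toy Python sketch of the kind of filtering logic that speculation describes. To be clear: the names, thresholds, and structure are entirely my own invention; Tesla's actual Autopilot sensor fusion is proprietary and not public. But it shows how a rule meant to stop false braking under overhead signs can also throw away a real hazard:

Code:
# Toy sketch of the filtering problem the article's speculation describes.
# All names, thresholds, and logic here are invented for illustration;
# Tesla's actual Autopilot sensor fusion is proprietary and not public.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float              # distance to the reflector
    doppler_mps: float          # speed toward the car, as measured by radar
    camera_sees_obstacle: bool  # whether vision confirms something in the lane

def should_brake(ego_speed_mps: float, ret: RadarReturn) -> bool:
    """Naive braking decision from one radar return.

    Radar Doppler alone cannot distinguish a trailer stopped across the
    lane from an overhead sign or bridge: both are large reflectors whose
    closing speed equals the car's own speed (ground speed ~0). To avoid
    slamming the brakes under every sign, a naive rule drops stationary
    returns unless the camera independently confirms an obstacle -- and
    if the camera misreads, say, a white trailer against a bright sky,
    the hazard is ignored entirely.
    """
    ground_speed = ego_speed_mps - ret.doppler_mps
    if abs(ground_speed) < 1.0 and not ret.camera_sees_obstacle:
        return False  # classified as stationary overhead/roadside clutter
    return True

# A trailer broadside across the lane, which the camera failed to flag:
trailer = RadarReturn(range_m=120.0, doppler_mps=29.0, camera_sees_obstacle=False)
print(should_brake(ego_speed_mps=29.0, ret=trailer))  # False: no braking

The point isn't that Tesla's code looks anything like this. It's that any threshold tuned to suppress false alarms from stationary overhead clutter is, by construction, also tuned to ignore some stationary things you would very much like to brake for.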
Tesla, of course, keeps saying its system is not fully autonomous, and while that's true, this crash does suggest that there are potential bugs in both fully and partially autonomous systems, and that flawless full autonomy is not easy.