Psience Quest

Full Version: Even After $100 Billion, Self-Driving Cars Are Going Nowhere
(2023-04-01, 04:30 PM)David001 Wrote: [ -> ]So you want to claim that the 'in the flow' situation requires nothing paranormal? Another interpretation might be that these states are accessing paranormal information.

David

Not too sure what you mean here?
Feds close probe into TuSimple autonomous truck crash

Alan Adler

Quote:A driver-supervised autonomous truck took an unintended sharp left turn across a lane of westbound traffic on I-10 and struck a concrete barrier. The safety driver tried to countersteer the truck, which followed a computer-generated command that was several minutes old.

TuSimple initially said the incident was driver error. Later it acknowledged its compute system and the safety driver both bore responsibility. The company grounded its fleet of mostly Navistar International trucks and made software fixes to avoid a repeat occurrence.

So the humans in the cars will be the "fall guys" when something goes wrong, in order to keep confidence in the AI...
(2023-04-01, 06:40 PM)Sciborg_S_Patel Wrote: [ -> ]Feds close probe into TuSimple autonomous truck crash

Alan Adler


So the humans in the cars will be the "fall guys" when something goes wrong, in order to keep confidence in the AI...

When driverless vehicles proved very hard to create, a sneaky alternative was to suggest that there were various levels of automation, starting with automatic parking and the like; the penultimate step was a vehicle that would drive itself, but with a competent driver ready to take over if it screamed for help! This was a horrible idea: imagine that your truck drives itself for some time and then suddenly demands you take over.

Taking over in an emergency like that must be nearly impossible.

David
(2023-04-01, 06:09 PM)Max_B Wrote: [ -> ]Not too sure what you mean here?
Well I was suggesting that human drivers make some use of psi to avoid crashes. Then you introduced the idea of 'flow', and I am not sure if you see this as being a psychic process, or as just another way of using your brain.

David
(2023-04-01, 07:18 PM)David001 Wrote: [ -> ]When driverless vehicles proved very hard to create, a sneaky alternative was to suggest that there were various levels of automation, starting with automatic parking and the like; the penultimate step was a vehicle that would drive itself, but with a competent driver ready to take over if it screamed for help! This was a horrible idea: imagine that your truck drives itself for some time and then suddenly demands you take over.

Taking over in an emergency like that must be nearly impossible.

David

Yeah, the goalpost-moving would be funny if human lives weren't at stake.

I do believe that driverless cars may be possible in the future, but machine "learning" is not really AI; it's just hoping you gather enough data to cover liability.

The kind of research that IMO is needed - bare-metal, higher-level programming plus specially designed hardware - is very much in its infancy. Likely because, if one is honest about the domain space of driving, we would have to admit we probably don't have the programming language or the hardware to express it properly at this time.

The issue is hype-driven investment, a larger problem than just driverless cars, sadly.
(2023-04-01, 06:40 PM)Sciborg_S_Patel Wrote: [ -> ]Feds close probe into TuSimple autonomous truck crash

Alan Adler


So the humans in the cars will be the "fall guys" when something goes wrong, in order to keep confidence in the AI...

(2023-04-01, 07:20 PM)David001 Wrote: [ -> ]Well I was suggesting that human drivers make some use of psi to avoid crashes. Then you introduced the idea of 'flow', and I am not sure if you see this as being a psychic process, or as just another way of using your brain.

David

Ok, it depends on your perspective I suppose... but neither of the two options you gave me really fits with how I understand things.
(2023-04-01, 07:41 PM)Max_B Wrote: [ -> ]

That was a sobering video, and it contained a good description of how a neural net 'decides' what something is.

David
Maybe the real problem with the neural net approach, and probably many other AI strategies, is that they only converge on the truth in the limit where every possible input is adequately covered by the training set - including images that are partially covered up. And how, then, does the AI know which part of the image is covered by something else?

I wonder how they perform on the "prove you aren't a robot" images where you have to click all the squares that contain a bicycle. Some of the squares only show a bit of the object in question, and you are meant to recognise those situations.
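The coverage point can be sketched with a toy example. Everything below is invented for illustration - four-pixel "images", made-up labels, and a nearest-neighbour lookup standing in for a real network - but the failure mode is the same one described above: the model has only ever seen complete objects, so an occluded one falls closest to the wrong class.

```python
# Toy illustration of the training-coverage problem: a nearest-neighbour
# "classifier" trained only on complete images misreads an occluded one.
# The 4-pixel "images" and labels here are hypothetical.

def nearest_label(x, train):
    """Return the label of the training example closest to x (squared Euclidean)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda item: dist(item[0], x))[1]

# Training set: bright images are "bicycle", dark images are "road".
# No partially-occluded examples are included.
train = [
    ((0.9, 0.9, 0.9, 0.9), "bicycle"),
    ((0.8, 0.9, 0.8, 0.9), "bicycle"),
    ((0.1, 0.1, 0.2, 0.1), "road"),
    ((0.2, 0.1, 0.1, 0.2), "road"),
]

full = (0.9, 0.8, 0.9, 0.9)       # a complete bicycle image
occluded = (0.9, 0.8, 0.0, 0.0)   # the same bicycle, half hidden behind something

print(nearest_label(full, train))      # -> bicycle
print(nearest_label(occluded, train))  # -> road
```

Pixel-for-pixel, the occluded image is closer to the dark "road" examples than to any complete bicycle, so the classifier confidently gives the wrong answer - precisely because partially-hidden bicycles never appeared in its training set.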

David
(2023-04-01, 10:24 PM)David001 Wrote: [ -> ]Maybe the real problem with the neural net approach, and probably many other AI strategies, is that they only converge on the truth in the limit where every possible input is adequately covered by the training set - including images that are partially covered up. And how, then, does the AI know which part of the image is covered by something else?

I wonder how they perform on the "prove you aren't a robot" images where you have to click all the squares that contain a bicycle. Some of the squares only show a bit of the object in question, and you are meant to recognise those situations.

David

They beat it just fine, so Google constantly has to evolve it (it's a Google service you can add to your webpages). I predict that within a few years no image test will fool an AI.

https://www.inverse.com/science/captcha-tests-future-ai


Quote:Acartürk thinks that CAPTCHAs might become obsolete in the next couple of decades in favor of other authenticating technologies, such as retina scanning and fingerprint
