New test reveals AI still lacks common sense

Caitlin Dawson


Quote:"Humans acquire the ability to compose sentences by learning to understand and use common concepts that they recognize in their surrounding environment," said Lin.

"Acquiring this ability is regarded as a major milestone in human development. But we wanted to test if machines can really acquire such generative commonsense reasoning ability."

To evaluate different machine models, the pair developed a constrained text generation task called CommonGen, which can be used as a benchmark to test the generative common sense of machines. The researchers presented a dataset consisting of 35,141 concepts associated with 77,449 sentences. They found that even the best-performing model only achieved an accuracy rate of 31.6% versus 63.5% for humans.


"We were surprised that the models cannot recall the simple commonsense knowledge that 'a human throwing a frisbee' should be much more reasonable than a dog doing it," said Lin. "We find even the strongest model, called the T5, after training with a large dataset, can still make silly mistakes."

An old sticking point, but I'm not convinced machine learning really learns anything since it's all probability weighting.
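That "probability weighting" is quite literal: a language model's raw output is just a score per vocabulary token, and a softmax turns those scores into a probability distribution over possible next words. A toy illustration, assuming the Hugging Face transformers library and the small gpt2 checkpoint:

Code:
# Toy illustration of "probability weighting": the model emits one score
# (logit) per vocabulary token, and softmax turns those into probabilities
# for the next word; generation just picks or samples from that distribution.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("A dog catches a", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, sequence_length, vocab_size)
probs = torch.softmax(logits[0, -1], dim=-1)     # distribution over the next token

top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>12}  {p.item():.3f}")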
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 4 users Like Sciborg_S_Patel's post:
  • Ninshub, Typoz, OmniVersalNexus, Brian
The clue was in the word "concepts". How can a concept occur without awareness?
[-] The following 4 users Like Brian's post:
  • Ninshub, Typoz, nbtruthman, Sciborg_S_Patel
(2020-11-19, 02:33 PM)Sciborg_S_Patel Wrote: New test reveals AI still lacks common sense

Caitlin Dawson



An old sticking point, but I'm not convinced machine learning really learns anything since it's all probability weighting.

I've seen nothing to indicate there is any advancement on this point either, Sci.

Human ingenuity is an amazingly powerful thing so we'll see, but I remain doubtful.  The entire premise seems to be predicated on the notion that consciousness can be reduced to 0's and 1's.
[-] The following 3 users Like Silence's post:
  • Ninshub, Sciborg_S_Patel, Brian
(2020-11-19, 04:11 PM)Silence Wrote: I've seen nothing to indicate there is any advancement on this point either, Sci.

Human ingenuity is an amazingly powerful thing so we'll see, but I remain doubtful.  The entire premise seems to be predicated on the notion that consciousness can be reduced to 0's and 1's.

I think we will see advancement when we start replicating brain structures like microtubules, which isn't to say Orch OR is definitely correct.

Then again, androids may never be conscious, or their consciousness will be alien to ours. We, after all, have an internal ecosystem for our mental states, one that seems to connect even to our gut bacteria.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


Machine Learning Confronts the Elephant in the Room

Kevin Hartnett


Quote:A visual prank exposes an Achilles’ heel of computer vision systems: Unlike humans, they can’t do a double take.



Quote:Score one for the human brain. In a 2018 study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease.

“It’s a clever and important study that reminds us that ‘deep learning’ isn’t really that deep,” said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work.

The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we’ll want their visual processing to be at least as good as the human eyes they’re replacing.

It won’t be easy. The new work accentuates the sophistication of human vision — and the challenge of building systems that mimic it.
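The probe itself is easy to approximate. Here is a rough sketch, assuming PyTorch/torchvision and two placeholder image files (the paper used its own detectors and scenes): run a pretrained detector on a scene, then on the same scene with an out-of-context object pasted in, and compare what it reports.

Code:
# Rough sketch of the "elephant in the room" probe: detect objects in a scene,
# then in the same scene with an out-of-context object pasted in, and compare.
# The paper used its own models and images; the file names here are placeholders.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect(path, threshold=0.5):
    img = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]                    # dict with 'boxes', 'labels', 'scores'
    keep = out["scores"] > threshold
    return list(zip(out["labels"][keep].tolist(),
                    [round(s, 2) for s in out["scores"][keep].tolist()]))

print(detect("living_room.jpg"))                 # baseline scene (placeholder file)
print(detect("living_room_with_elephant.jpg"))   # same scene + pasted-in elephant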


This article seems a bit old, so not sure if this issue has been overcome.

Will look around.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2020-11-19, 03:38 PM)Brian Wrote: The clue was in the word "concepts". How can a concept occur without awareness?

Yes. Machine learning AI is as far from consciousness as ever - I don't think it can ever be conscious, but the tech writers and engineers of course insist on writing about it using terms that inherently refer to human consciousness. "Learning" something inherently means understanding it, which requires comprehension and abstract thought - all properties or capacities of consciousness, of subjectivity, of agency. Learning algorithm software and its operation aren't conscious and have no potential for consciousness - they are as different from consciousness as wingnuts are from the awareness of the color red.
(This post was last modified: 2020-11-20, 02:05 AM by nbtruthman.)
[-] The following 3 users Like nbtruthman's post:
  • Brian, Sciborg_S_Patel, Kamarling
Still sounds like Data Processing to me.

I should explain. When I started work as a junior programmer, our department was named DP. That is data processing. We had massive computers that churned through piles of data and poured out some results. Within a few weeks of my arrival (and I don't feel I should claim credit for this!), the department had been renamed IT - Information Technology. Same computers doing the same tasks, but the renaming was meant to signify that it was not just any old data, but meaningful information.

Now to a human, that can sound attractive; the idea of 'meaning' was certainly uppermost in the minds of the users of these systems, whether it was a senior manager making some plans based on the picture available, or some far-off customer looking at a bill or letter produced by these systems. Meaning is there for people. But the machines don't do meaning. They just process the data.
[-] The following 4 users Like Typoz's post:
  • Brian, Silence, nbtruthman, Sciborg_S_Patel
(2020-11-20, 12:45 AM)Sciborg_S_Patel Wrote: Machine Learning Confronts the Elephant in the Room

Kevin Hartnett

This article seems a bit old, so not sure if this issue has been overcome.

Will look around.

Seems like these weaknesses are still there, given the Nov 19, 2020 article below ->

Anti-adversarial machine learning defenses start to take root

James Kobielus

Quote:Much of the anti-adversarial research has been on the potential for minute, largely undetectable alterations to images (researchers generally refer to these as “noise perturbations”) that cause AI’s machine learning (ML) algorithms to misidentify or misclassify the images. Adversarial tampering can be extremely subtle and hard to detect, even all the way down to pixel-level subliminals. If an attacker can introduce nearly invisible alterations to image, video, speech, or other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.
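For a sense of how small these perturbations can be, here is a minimal sketch of the classic fast gradient sign method (FGSM) attack, assuming PyTorch/torchvision; the random tensor stands in for a real preprocessed image, whose loading is omitted here:

Code:
# Minimal sketch of the fast gradient sign method (FGSM): nudge every pixel a
# tiny amount in the direction that increases the classifier's loss, producing
# a near-invisible change that can flip the predicted label.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()

def fgsm(x, label, epsilon=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()   # one signed step per pixel

x = torch.rand(1, 3, 224, 224)       # stand-in for a real, normalized input image
label = torch.tensor([207])          # stand-in true-class index
x_adv = fgsm(x, label)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # label before vs. after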

IMO we won't get a good sense of how vulnerable these systems are until they are pervasive enough to make exploiting them profitable.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


