Top 10 AI hype stories of 2018


Grossly exaggerated media hype about AI advances continues. Here is a list of 10 top stories of 2018, from https://mindmatters.ai/t/2018-hype/:

Quote:
- 1: IBM'S WATSON IS OUR NEW COMPUTER OVERLORD!
It won at Jeopardy (with specially chosen “softball” questions) but is not the hoped-for aid to cancer specialists.

- 2: AI CAN WRITE NOVELS AND SCREENPLAYS BETTER THAN THE PROS!
   Software can automatically generate word sequences based on material fed in from existing scripts. But with what result?

- 3: WITH MIND-READING AI, YOU WILL NEVER HAVE SECRETS AGAIN!    
The reality is that AI can read your mind for a few words repeated often if you have a flap cut out of your skull and electrodes are placed directly on your brain. As the abstract of the science paper puts it, the subjects were “implanted with electrode arrays over the lateral brain surface.” And after a neural network is trained to read the circuitry conveying your thoughts, it is no good on any other human being even if that other human being has a flap cut out of the skull. It learned to detect patterns in your wiring, not someone else’s.

- 4: MAKING AI LOOK MORE HUMAN MAKES IT MORE HUMAN-LIKE!    
The power of AI often has little to do with its packaging. However, when AI is packaged as a human-like robot, for marketing purposes, you are looking at some seductive optics. One such example is Sophia, a chatbot that looks like a human being.

This summer, some were simply agog over “Sophia, the First Robot Citizen” (“unsettling as it is awe-inspiring”).
When you look at Sophia and hear her talk about herself and her place in the world, it almost makes you wonder whether she could somehow be conscious. One audience member even asked her if she has consciousness. Sophia, however, replied that she is “not fully self-aware yet. I am still just a system of rules and behaviors. I am not generative, creative or operating on a fully cognitive scale like you.”

Notice that care is taken that the package mimics human nuances like blinking, smiling, and lip movements synced to its words. Doing so strengthens the impression that the AI is human. You may fail to notice that Sophia says nothing it isn’t programmed to say, and that the whole thing sounds like it was run past a company marketing specialist.
 
No details are offered as to how the Sophia program would become conscious, which is probably because no one in science today really understands how consciousness works.

- 5: AI CAN FIGHT HATE SPEECH!
   AI can carry out its programmers’ biases and that’s all.
- 6: AI CAN EVEN EXPLOIT LOOPHOLES IN THE CODE!
   AI adopts a solution in an allowed set, maybe not the one you expected.
- 7: COMPUTERS CAN DEVELOP CREATIVE SOLUTIONS ON THEIR OWN!
   Programmers may be surprised by which solution, from a range they built in, comes out on top.
- 8: AI JUST NEEDS A BIGGER TRUCK!  
   We can't create superintelligent computers just by adding more computing power.
- 9: WILL THAT ARMY ROBOT SQUID EVER BE “SELF-AWARE”?
   What would it really take for a robot to be self-aware? 
- 10: IS AI REALLY BECOMING “HUMAN-LIKE”?  
   ....Incremental results are often extrapolated...into hype stories about the future. A small improvement in performance is simply assumed to be followed by an inevitable and indefinite series of further improvements. Many people thought that way in 1958 and many still do today. But series of improvements usually end, often abruptly.
Bottom line: ...Improvements might be surprising and impressive, but are only incremental so far as human capabilities are concerned.
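Story #3's point about non-transferability is easy to demonstrate in miniature. Below is a toy sketch (the words, "electrode patterns", and numbers are all invented; this is not the study's actual method): a nearest-centroid "decoder" trained on one simulated subject works for that subject and fails on another whose wiring maps the same words to different patterns.

```python
import random
import math

random.seed(0)
WORDS = ["yes", "no", "stop"]

# Toy "electrode patterns": each word lights up one region of the array.
# Subject B's brain is wired differently - the same words produce
# different (here: shifted) patterns. Purely illustrative.
patterns_a = {"yes": [1, 0, 0], "no": [0, 1, 0], "stop": [0, 0, 1]}
patterns_b = {"yes": [0, 1, 0], "no": [0, 0, 1], "stop": [1, 0, 0]}

def record(patterns, word, noise=0.05):
    """One noisy electrode reading for a 'spoken' word."""
    return [x + random.gauss(0, noise) for x in patterns[word]]

def train_decoder(patterns, trials=20):
    """Nearest-centroid decoder: average the training readings per word."""
    return {w: [sum(col) / trials
                for col in zip(*(record(patterns, w) for _ in range(trials)))]
            for w in WORDS}

def decode(decoder, reading):
    return min(decoder, key=lambda w: math.dist(decoder[w], reading))

def accuracy(decoder, patterns, trials=50):
    hits = sum(decode(decoder, record(patterns, w)) == w
               for _ in range(trials) for w in WORDS)
    return hits / (trials * len(WORDS))

decoder = train_decoder(patterns_a)   # trained only on subject A

print("subject A:", accuracy(decoder, patterns_a))   # ~1.0
print("subject B:", accuracy(decoder, patterns_b))   # ~0.0: it learned A's wiring
```

The decoder never learned anything about language, only about where subject A's patterns sit in the array, which is the article's point.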
(This post was last modified: 2019-03-31, 09:48 AM by nbtruthman.)
I've been thinking about this and it seems to me the Halting Problem [which Mind Matters takes as proof against programs being conscious, AFAICTell] isn't a good indicator for consciousness, though it can show there is a chasm between human minds and hypothetical computer minds. There are humans who cannot understand enough computer science to grasp the [Halting] Problem, and animals who are conscious but are capable of neither computer science nor its [prerequisites].
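For anyone who hasn't seen it, the proof that the Halting Problem is undecidable is a short diagonal argument. A sketch in Python, where `halts` stands in for the oracle that the proof shows cannot exist:

```python
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually halts.
    Turing's argument shows no total, correct implementation can exist."""
    raise NotImplementedError("no correct implementation is possible")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:       # oracle said "halts" -> loop forever
            pass
    return "done"         # oracle said "loops" -> halt immediately

# Now ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) returned True, paradox(paradox) loops forever.
#  - If it returned False, paradox(paradox) halts.
# Either answer contradicts the oracle, so `halts` cannot exist.
```

Nothing in the argument mentions consciousness, which is why it cuts both ways in these debates: it limits what programs can compute, not what minds can experience.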

OTOH, it also seems to me that if materialism were true computationalism would have to be false, and vice versa, so it is odd to see the two [often] allied as if symbol manipulation was a part of physics. But I'll post more about [this] in the other thread in time...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2019-03-31, 05:47 PM by Sciborg_S_Patel.)
(2019-03-31, 04:44 PM)Sciborg_S_Patel Wrote: OTOH, it also seems to me that if materialism were true computationalism would have to be false, and vice versa, so it is odd to see the two [often] allied as if symbol manipulation was a part of physics. But I'll post more about [this] in the other thread in time...

On this point the following is a good starting place:

Searle: Is the Brain a Digital Computer?

Lanier: You Can't Argue with a Zombie


Sophia is an interesting one, because the man behind her, Ben Goertzel, is not only very much into AI but also very interested in psi.

I don't think it's accurate to describe Sophia as a chatbot or to say that it "says nothing that it isn’t programmed to say" (except in the most general sense). If I remember correctly, the physical robot is worked by different software at different times - sometimes it's just reading a script and sometimes it is something like a chatbot, but sometimes it's a proper AI program. But apparently the speech produced by the AI program is a bit too weird to be convincing.
(2019-03-31, 06:25 PM)Chris Wrote: Sophia is an interesting one, because the man behind her, Ben Goertzel, is not only very much into AI but also very interested in psi.

I don't think it's accurate to describe Sophia as a chatbot or to say that it "says nothing that it isn’t programmed to say" (except in the most general sense). If I remember correctly, the physical robot is worked by different software at different times - sometimes it's just reading a script and sometimes it is something like a chatbot, but sometimes it's a proper AI program. But apparently the speech produced by the AI program is a bit too weird to be convincing.

I know Goertzel is a Patternist, as per his own book on the subject, though I can't recall exactly what this entails as it's been some time since I've read interviews and excerpts of what I believe was a free book of his? [edit - I think it was this one, The Hidden Pattern...I recall wondering what keeps patterns in place....]

I'll probably read [this book on Patternism and consciousness] eventually...


(This post was last modified: 2019-03-31, 07:43 PM by Sciborg_S_Patel.)
(2019-03-31, 07:39 PM)Sciborg_S_Patel Wrote: I know Goertzel is a Patternist, as per his own book on the subject, though I can't recall exactly what this entails as it's been some time since I've read interviews and excerpts of what I believe was a free book of his? [edit - I think it was this one, The Hidden Pattern...I recall wondering what keeps patterns in place....]

I'll probably read [this book on Patternism and consciousness] eventually...

I'm not sure what Patternism involves or whether it requires anything beyond conventional physics. But some of his ideas definitely do. I started a thread on them, though they are not always easy to understand:
https://psiencequest.net/forums/thread-b...ews-on-psi
A new article in The Guardian about whether the rise of robot authors is the writing on the wall for human writers:  

Quote:"...(there has been a) recent announcement of an artificial intelligence that could produce, all by itself, plausible news stories or fiction. It was the brainchild of OpenAI – a nonprofit lab backed by Elon Musk and other tech entrepreneurs – which slyly alarmed the literati by announcing that the AI (called GPT2) was too dangerous for them to release into the wild, because it could be employed to create “deepfakes for text”. 

...It is true that when you own a huge data-crunching system, everything looks like data. And the mantra that everything is data makes the big tech companies look pretty good, because what they are good at is data. Text can be mathematically encoded and manipulated by a computer, so that’s data too, right?

But writing is not data. It is a means of expression, which implies that you have something to express. A non-sentient computer program has nothing to express, quite apart from the fact that it has no experience of the world to tell it that fires don’t happen underwater. Training it on a vast range of formulaic trash can, to be sure, enable it to reshuffle components and create some more formulaic trash. (Topics “highly represented in the data” of GPT2’s training database were Brexit, Miley Cyrus, and Lord of the Rings.) All well and good.

But until robots have rich inner lives and understand the world around them, they won’t be able to tell their own stories."
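The "reshuffling" the article describes can be seen in miniature with a first-order Markov chain (the toy corpus below is invented, and GPT-2 is a far more sophisticated neural model, but it too is trained purely on existing text):

```python
import random
from collections import defaultdict

corpus = ("the ring is lost . the ring is found . "
          "the fire is lit . the fire is out .").split()

# Build a first-order Markov model: word -> list of observed successors.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, n, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(n):
        words.append(random.choice(model[words[-1]]))
    return " ".join(words)

print(generate("the", 7))
# Every adjacent word pair in the output already appeared in the corpus:
# the model can only reshuffle what it was fed, never say anything new.
```

The output may read as novel sentences, but by construction it contains no transition the training text didn't contain, which is the columnist's "formulaic trash in, formulaic trash out" point in code form.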

I think it will be a very long time, and more probably never, before AI robots have inner lives of any kind and actually understand anything.
(This post was last modified: 2019-04-05, 09:12 AM by nbtruthman.)
(2019-04-05, 09:10 AM)nbtruthman Wrote: A new article in The Guardian about whether the rise of robot authors is the writing on the wall for human writers:  


I think it will be a very long time, and more probably never, before AI robots have inner lives of any kind and actually understand anything.

I think it would depend on the nature of Information and Consciousness. Arvan is a dualist, but he is working on a paper suggesting information can gain access to consciousness in the right structures.

I'm not sure he's right, but I'm gonna wait for the paper to come out. I know Chalmers, of "Hard Problem" coinage, and the idealist Hoffman also believe in conscious AI programs, though they seem to think there's an emergence point. Arvan, AFAICTell, is much more willing to grant subjective senses of pain to rudimentary game NPCs and the like.

My suspicion is that an AI which is merely a program, if it were truly capable of an inner life, would ultimately write novels about its agony, about its slavery to specific tasks. OTOH, without any venture into metaphysics, I think an android would be able to live a life much like our own, though perhaps not. Maybe we recreate whatever is [or at least seems to be] the minimum [metaphysically neutral, structural] necessary/sufficient conditions we find empirically, and the synthetic being does nothing at all...


(This post was last modified: 2019-04-05, 04:34 PM by Sciborg_S_Patel.)
(2019-04-05, 04:08 PM)Sciborg_S_Patel Wrote: My suspicion is that an AI which is merely a program, if it were truly capable of an inner life, would ultimately write novels about its agony, about its slavery to specific tasks. OTOH, without any venture into metaphysics, I think an android would be able to live a life much like our own, though perhaps not. Maybe we recreate whatever is [or at least seems to be] the minimum [metaphysically neutral, structural] necessary/sufficient conditions we find empirically, and the synthetic being does nothing at all...

An interesting (though idle) speculation, from the article:

Quote:
"....until robots have rich inner lives and understand the world around them, they won’t be able to tell their own stories. And if one day they could, would we even be able to follow them? As Wittgenstein observed: “If a lion could speak, we would not understand him”. Being a lion in the world is (presumably) so different from being a human in the world that there might be no points of mutual comprehension at all.

It’s entirely possible, too, that if a conscious machine could speak, we wouldn’t understand it either."
(This post was last modified: 2019-04-06, 01:57 AM by nbtruthman.)
(2019-04-06, 01:55 AM)nbtruthman Wrote: An interesting (though idle) speculation, from the article:

I've wondered about this as well - for example if a computer program is playing a game or conversing with us...how do we know the awareness is of [the game or] us?

If we trained some genetically engineered bees to work as the innards of a Turing Machine, with I/O ports to translate their positioning, we wouldn't think the bees were thinking about human speech even while running an advanced language-recognition program - they aren't holding thoughts about human linguistics in their bodies...probably.... But we are assuming that the awareness of a computer running a program is in line with our expectations.

Also, what parts of a computer become aware, have thoughts, etc.? Multiple programs can be run via multiple cores, or via a single core running multiple threads, for example, so the programs being run can share physical parts as the context switches...
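That sharing is routine at the operating-system level. A minimal sketch (the program names are invented) in which two "programs" run as threads on whatever cores the scheduler happens to give them:

```python
import threading

# Two independent "programs" sharing the same physical hardware:
# the OS scheduler interleaves them across the machine's cores, so
# across context switches neither one owns any particular transistor.
log = []
lock = threading.Lock()

def program(name, steps=5):
    for i in range(steps):
        with lock:                 # serialize access to the shared log
            log.append((name, i))

t1 = threading.Thread(target=program, args=("chess-engine",))
t2 = threading.Thread(target=program, args=("chatbot",))
t1.start(); t2.start()
t1.join(); t2.join()

print(log)   # steps of both "programs", interleaving varies per run
```

Whatever physical substrate a thought was supposed to live in is handed back and forth between the two workloads many times a second, which is exactly the puzzle the post raises.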

The best argument I've heard is that, similar to the contradictory results of observers in certain QM experiments (see recent experimental realizations of Wigner's Friend), the same physical space can hold the mental aboutness of multiple programs. It leaves the consciousness of programs observer-relative, but I give credit for at least trying to discuss an issue that doesn't seem to get much play...


(This post was last modified: 2019-04-06, 03:06 AM by Sciborg_S_Patel.)
