Artificial Intelligence


(2017-08-16, 05:02 PM)Typoz Wrote:
(2017-08-16, 04:55 PM)iPsoFacTo Wrote:
(2017-08-16, 03:02 PM)Silence Wrote: Now, artificial, man-made, conscious, sentient beings?  I'm dubious as to how or when such a thing might come to pass.

Makes me wonder. At what point would a machine's 'brain' become so complex that it becomes self-aware/conscious?

Is there any reason to think that complexity leads to either self-awareness or consciousness?

Lord knows I don't know, but I can pose questions. What determines complexity (in the broadest universal sense... not as in a jet plane being more complex than a paper glider, lol) anyway?

Is it whatever sort of complexity there is in the interactions between the atoms and molecules (or aggregates of them) within, say, some black box (or brain)?

Is the sun complex? The universe? How about the vacuum, with the endless virtual-particle reactions taking place from one instant to the next? Should they be conscious?
The following 1 user Likes iPsoFacTo's post:
  • Typoz
Artificial intelligence: China catching up to US in race for technological supremacy

Quote:Artificial intelligence from a Chinese tech giant has defeated the country's best player of the board game Go, despite giving the grandmaster an advantage — matching and perhaps surpassing Google's efforts last year.

Key points:
  • Tencent's AI defeated a Go grandmaster despite a handicap
  • Google's AlphaGo AI last year beat the same player without one
  • China is aiming to surpass the US in its AI capabilities by 2030
The artificial intelligence (AI) developed by Chinese company Tencent beat world number-two Go player Ke Jie last week with a two-stone handicap, the official People's Daily newspaper reported.

Handicaps are used in Go to even out the difference in skill level between players.
An interesting presentation with Julia Mossbridge from the Institute of Noetic Sciences and Ben Goertzel from Hanson Robotics: http://noetic.org/blog/communications-te...artificial.

A worthy goal and project. My main critique is that the distinction (as has been raised in this thread) between artificial sapience and artificial sentience was barely addressed. The only real coverage it got was when Ben was talking between roughly 35:46 and 36:25, where, beyond saying that he has a view on the subject which he'd be willing to go into later (we are not privy to it), he held that for the practical purposes of the project it doesn't matter whether the AI "really" feels or merely mimics feeling along with the relevant accompanying cognitive processes/states.
The following 2 users Like Laird's post:
  • Doug, Typoz
Man 1, machine 1: landmark debate between AI and humans ends in draw

Quote:The AI, called Project Debater, appeared on stage in a packed conference room at IBM’s San Francisco office embodied in a 6ft tall black panel with a blue, animated “mouth”. It was a looming presence alongside the human debaters Noa Ovadia and Dan Zafrir, who stood behind a podium nearby.

Although the machine stumbled at many points, the unprecedented event offered a glimpse into how computers are learning to grapple with the messy, unstructured world of human decision-making.

For each of the two short debates, participants had to prepare a four-minute opening statement, followed by a four-minute rebuttal and a two-minute summary. The opening debate topic was “we should subsidize space exploration”, followed by “we should increase the use of telemedicine”.

In both debates, the audience voted Project Debater to be worse at delivery but better in terms of the amount of information it conveyed. And despite several robotic slip-ups, the audience voted the AI to be more persuasive (in terms of changing the audience’s position) than its human opponent, Zafrir, in the second debate.

More on this first public debate by IBM's Project Debater in these videos:

[embedded videos]

The real-world applications described by the team behind it are fascinating - for example, aiding in the boardroom decision-making process by providing arguments from both sides of an issue, and analysing the arguments put forward by human board members.
The following 1 user Likes Laird's post:
  • tim
Facebook Invented a New Language for Machines to Solve Complex Math Equations by Kevin McElwee.

Quote:Facebook’s model significantly outperformed existing software. When posed with complex integration problems, for example, Facebook’s model achieved 99.7 percent accuracy compared to Mathematica’s 84 percent. Mathematica and Matlab, trusted commercial software that tackles similar problems, use line-by-line calculations to achieve solutions, albeit after lengthy runtimes. The advantage of a machine learning model is that once a neural network is trained, solutions are delivered almost immediately. Facebook listed multiple problems its model could solve in half a second for which Mathematica and Matlab took more than three minutes.

As with any neural network, the model’s output isn’t guaranteed to provide correct answers. Models are built on pattern recognition and approximation, so engineers should be cautious before launching a rocket based on results spat from a black box. Even if mathematical nonsense were fed into the model, it would still return a guess. Thankfully, however, the model’s output can be checked by traditional computational techniques. To grossly oversimplify, a computer can plug in values for x to check that its answers make sense.
(2018-06-24, 11:59 AM)Laird Wrote: Man 1, machine 1: landmark debate between AI and humans ends in draw


It's been a while. An update on the Project Debater AI:

Quote:"The IBM had Watson face off in February 2019 against the winner of the 2012 European Debate. The topic in contention was subsidies that are applied to pre-schools.

In preparation for the February debate, IBM trained Project Debater on over 10 billion sentences, mainly taken from newspapers as well as research journals. Key to the task was that each side was given 15 minutes to prepare; Watson (like a human) likely benefitted from having as much time as possible to sift through all of the information at its disposal. After 10 minutes of debate on each side, the audience crowned the human the winner. The defeat highlighted the intricacies present in a task such as debate, and how hard it is for a machine to master some of the traits that make some humans so skilled at it.

Recently, the IBM team brought Project Debater back at the University of Cambridge, just nine months after the first showdown. There, ironically enough, Watson debated humans on the dangers of AI. Watson gave two opening statements in which it took opposing sides: the first was a pro-AI argument, while the second argued against AI. The machine learned traits and patterns from over 1,100 human submissions debating different viewpoints on the topic. At the end, the audience was polled to see which side had swayed them more. The result was that a majority (by a tiny margin) believed that AI will ultimately be beneficial to society."


What makes me shake my head is that a lot of people believe (in what seems to be an act of almost religious faith) that such massive data-processing-engine AIs will be developed into conscious AIs with human-like abilities, when even at this level of development (and at any predictable advancement of it) there is nobody home in these systems, or in any other type of AI. They understand nothing, since there is no "I" there to do the understanding. What's actually happening is that a multitude of linked processors is sorting through astronomical amounts of human-produced writing and dialogue looking for patterns. Abstract thought, any kind of thought, true creativity, really "knowing" anything: all are impossible for these systems. And no matter how massive these systems become, the gap remains just as unbridgeable, because bridging it is a fundamental category error.
The following 2 users Like nbtruthman's post:
  • Typoz, tim
I'm very sympathetic towards your perspective, nbtruthman. That said, I am fascinated by the questions of:
  1. How close to (a mimicry of) (conscious) human intelligence and creativity can (non-conscious) Turing machines come?
  2. In which specific domains related to intelligence and/or problem-solving and/or creativity can the best (non-conscious) Turing machines outperform the best (conscious) humans?
I don't think that we have definitive answers to those questions yet: progress is still being made, it seems.
The following 2 users Like Laird's post:
  • Obiwan, nbtruthman
"What makes me shake my head is that a lot of people believe (in what seems to be an act of almost religious faith) that such massive data processing engine AIs will be developed into conscious AIs with human-like abilities. When even at this level of development (and at any predictable advancement of it) there is nobody home in these systems, or in any other types of AIs. They understand nothing, since there is no "I" there to be able to understand. What's happening is a multitude of linked processors are sorting through astronomical amounts of human-produced writings and dialogue looking for patterns. Abstract thought, any kind of thought, true creativity, really "knowing" anything, are impossible to these systems. And no matter how massive these systems become the gap remains just as unbridgeable, because it is a fundamental category error."

Precisely, well put!
The following 1 user Likes tim's post:
  • OmniVersalNexus
(2020-06-07, 04:22 AM)Laird Wrote: I'm very sympathetic towards your perspective, nbtruthman. That said, I am fascinated by the questions of:
  1. How close to (a mimicry of) (conscious) human intelligence and creativity can (non-conscious) Turing machines come?
  2. In which specific domains related to intelligence and/or problem-solving and/or creativity can the best (non-conscious) Turing machines outperform the best (conscious) humans?
I don't think that we have definitive answers to those questions yet: progress is still being made, it seems.

These are indeed interesting questions. They relate closely to whether human technology could ever create an artificial general intelligence, modeled on human behavior, that mimics us so closely it could be classed as an actual "philosophical zombie" (p-zombie). That in turn bears on whether the old debate over p-zombies will ever be settled.


Adapted from Wikipedia:

The philosophical zombie is a thought experiment in the philosophy of mind that imagines a being which, if it could logically exist, would disprove the idea that physical substance is all that is required to explain consciousness. Such a zombie would be indistinguishable from a normal human being, yet would lack conscious experience, qualia, or sentience.

Philosophical zombie arguments are used in support of mind-body dualism against forms of physicalism such as materialism, behaviorism, and functionalism. The argument targets the idea that the "hard problem of consciousness" (accounting for subjective, intrinsic, first-person what-it's-like-ness) could be answered by purely physical means, in particular by means of artificial general intelligence (AGI) systems.

Proponents of the p-zombie argument, such as philosopher David Chalmers, argue that since a philosophical zombie is defined as being totally (in its behavior and appearance) physically indistinguishable from human beings (especially in verbal conversation), even its logical possibility would be a sound refutation of physicalism, because it would establish the existence of conscious experience as a further fact.


I don't think definitive answers to questions 1 and 2 above will ever be forthcoming; just ever more accurate and convincing (yet still ultimately imperfect) simulacra of human behavior and appearance. But I think that eventually such AGI systems may get good enough to fool everybody, no matter how closely they are examined. Accordingly, I think it is clearly logically possible that such an AGI system could eventually meet the requirements (above) of a true philosophical zombie, thereby refuting physicalism in the mind-body debate.

Of course, philosophical debates are by their very nature endless. Some physicalists like Daniel Dennett counter that philosophical zombies are logically incoherent and thus impossible; other physicalists like Christopher Hill argue that philosophical zombies are coherent but not metaphysically possible.
The following 1 user Likes nbtruthman's post:
  • Laird
