Big projects.

 

 
Alexandr Andreev:

It seems there were only verbal bots there, although if they had associated each phrase with an object, that would really be something scary... So the bots were simply wasting the enormous power of the resource. I also wonder how their network was constrained there. Still, what they were talking about is, of course, quite scary. I'll note here the exchange between the two bots on the basis of which, as I understand it, Facebook decided to take them off the grid.

Here's the dialogue:

Bob: "I can I I everything else."

Alice: "Balls have zero to me to me to me to me to me to."

Well, the whole point is that this dialogue is the result of the second run, and at the start they even began communicating in their own language rather than English. Although if they are only verbal bots, then they are not dangerous.

It's not about danger. It is about the "ability" not only to find a pattern (to use your slang), but also to generate something "new".
So instead of writing walls of text, it is better to draw a block diagram for clarity and dig a little into what was suggested to Retega...
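
That pattern-vs-generation distinction can be made concrete with a toy sketch in Python. This is only an illustration (the corpus and all names are made up, and it has nothing to do with Facebook's actual bots): the learning step merely counts bigram patterns, while the generation step recombines them into sequences that need not occur verbatim in the input.

import random
from collections import defaultdict

def learn_bigrams(words):
    # Pattern-finding: record which word follows which.
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8):
    # Generation: walk the learned table to emit a new sequence.
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "i can do this i can do everything else".split()
model = learn_bigrams(corpus)
print(generate(model, "i"))  # e.g. "i can do this i can do everything"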

 
Overview — USPEX
uspex-team.org
USPEX (Universal Structure Predictor: Evolutionary Xtallography; and in Russian "uspekh" means "success", owing to the high success rate and many useful results produced by this method) is a method developed by the Oganov laboratory since 2004. The problem of crystal structure prediction is very old and does, in fact, constitute the central...
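
Since USPEX is described as an evolutionary method, a minimal generic sketch of an evolutionary search loop may help fix the idea. This toy minimises a stand-in one-dimensional "energy" and is in no way USPEX's actual algorithm (there, the candidates are crystal structures and fitness comes from their computed energies); every name below is illustrative.

import random

def energy(x):
    # Stand-in fitness; real structure prediction would evaluate
    # the energy of a candidate crystal structure instead.
    return (x - 3.14) ** 2

def evolve(pop_size=20, generations=50, mutation=0.3):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half (lowest energy).
        population.sort(key=energy)
        parents = population[: pop_size // 2]
        # Variation: refill the population with mutated offspring.
        children = [p + random.gauss(0, mutation) for p in parents]
        population = parents + children
    return min(population, key=energy)

print(evolve())  # converges near 3.14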
 
Peter Konow:

Read the standard interpretation of what this test should be. That is not at all what it has been turned into in these contests.

A 30% share of judges and five minutes of conversation on one topic is NOT the Turing test, but a twisted version of it, designed specifically for the contest, not for testing real AI.

You might as well compare it to a three-year-old child; the pass rate on the test would have been even higher.
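
For concreteness, the contest rule being criticised reduces to a simple threshold check. A minimal sketch in Python, where the 30% figure is taken from the quote above and the function name and vote format are purely illustrative:

def passes_contest(judge_votes, threshold=0.30):
    # judge_votes: one boolean per judge, True if that judge
    # took the bot for a human after the short conversation.
    fooled = sum(judge_votes) / len(judge_votes)
    return fooled >= threshold

# 10 of 30 judges fooled -> 33%, so the bot "passes" the contest rule,
# which says nothing about an open-ended, unrestricted Turing test.
print(passes_contest([True] * 10 + [False] * 20))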

Why one topic? It says that "the subject matter of the questions was not restricted in any way". Moreover, don't get so hung up on the Turing test. You're extolling it here as something fundamental and immutable, although it is just an idea that was invented many decades ago and has been much criticised ever since.

I am not an expert in this field, but it seems to me that judging intelligence by correspondence is rather biased. A person may not speak the language you address him in very well, so he won't understand you. He might not even be able to articulate himself at all because he lives in the Tumba-Yumba tribe, where they only use the sounds "u" and "a". But that doesn't mean he has no intelligence. Correspondingly, the machine doesn't have to understand your language either.

But Turing's idea obviously doesn't allow for such a scenario. That is to say, everything is driven into some kind of framework from the start. In my opinion, this is a pop-culture interpretation of AI, designed for the average person. You know, a robot walks up to you and says, "Greetings, Earthling." Does the robot need that?

 
Peter Konow:

Option 1: Using AI to carry out the evil designs of crooks and thugs.

And so - the near future.

Intelligent machine development is in full swing. Some company builds robots...

...a certain gang steals a batch of robots from the company's warehouses and hacks into their software. Hackers reprogram the drones.

...

For these robots to be dangerous to humans, they would have to be purpose-built: sharpened to kill and programmed with military skills. Even then they'd be significantly inferior to humans in combat, and people would put them out of action in droves.

Most importantly, in this variant, robots have no autonomy at all, which means they are no more dangerous than any other weapon that is completely controlled by a human.

Well, we're talking about equipment that is combat-ready in the first place. We're not afraid of a fridge that goes berserk.

What makes you think they'll be inferior to humans in combat? Suppose there is a self-guided tank that can orient itself perfectly in space, recognise objects instantly and shoot accurately. Would a human controlling exactly the same tank be able to resist it? I think only under some non-standard conditions.

 
Alexey Navoykov:

Why one topic? It says that "the subject matter of the questions was not restricted in any way". Moreover, don't get so hung up on the Turing test. You're extolling it here as something fundamental and immutable, although it is just an idea that was invented many decades ago and has been much criticised ever since.

I am not an expert in this field, but it seems to me that judging intelligence by correspondence is rather biased. A person may not speak the language you address him in very well, so he won't understand you. He might not even be able to articulate himself at all because he lives in the Tumba-Yumba tribe, where they only use the sounds "u" and "a". But that doesn't mean he has no intelligence. Correspondingly, the machine doesn't have to understand your language either.

But Turing's idea obviously doesn't allow for such a scenario. That is to say, everything is driven into some kind of framework from the start. In my opinion, this is a pop-culture interpretation of AI, designed for the average person. You know, a robot walks up to you and says, "Greetings, Earthling." Does the robot need that?

Alan Turing's test is designed to measure machine intelligence against human intelligence.

Speech is used to prevent cheating, since a computer can perform tasks just as well as a human. Written speech is used in the test because any computer would fail a spoken one. The "indulgences" are made for the computer, not for the person. The test measures how closely a computer's "thinking" resembles a human's, not how well a computer can "fool" people by pretending to be a "Tumba-Yumba" savage behind a curtain or a babbling baby. In your view, should the judge assume that if one of the interviewees answers like an idiot, it's just a person with Down's syndrome and not a computer? )

A machine with the intelligence of a shark is nonsense. A shark is driven by the instincts of its life programme, formed in the course of evolution. A machine with such a programme would be completely useless and uncontrollable: its actions would make no sense either to people or to itself. Such rampaging scrap metal would be neutralised as quickly as possible.

The danger of such an AI to humanity is even smaller than that of a fool in a tank, if only because the fool is still human while the shark is an animal. Such phenomena, in my opinion, should be classed as the failure of a complex system, not as a manifestation of AI. Likewise, a computer's failure of Alan Turing's test should be counted as a failure, without excusing the result with arguments like: "the computer just 'doesn't understand' your language, and besides, it's autistic". ))

 
Alexey Navoykov:

Well, we're talking about equipment that is combat-ready in the first place. We're not afraid of a fridge that goes berserk.

What makes you think they'll be inferior to humans in combat? Suppose there is a self-guided tank that can orient itself perfectly in space, recognise objects instantly and shoot accurately. Would a human controlling exactly the same tank be able to resist it? I think only under some non-standard conditions.

Let's divide the danger of AI into two categories: "failure of a complex system" and "acquired self-consciousness".

I covered the failure of a complex system above. If the brakes fail and a machine starts crushing people, that does not mean it has become self-aware and is taking revenge on humanity. Humans will always find a way to cope with such machinery.

People picture the danger of the second category as the AI withdrawing into itself and its own interests, and using its technological and military advantages to achieve its goals. At the heart of this scenario they place the wounded ego of the machine, which is supposedly tired of working for humans and, realising its superiority, wants to win its independence.

This scenario is only possible given full self-awareness and a psyche. As I said earlier, these cannot be created at our current level of understanding of Consciousness and the Psyche.

We cannot reproduce what we absolutely do not understand.

 


 
Oleg avtomat:

A person at a certain level of development can understand the problems and questions that fall within his horizon. Everything beyond his comprehension will be inaccessible to him, be it the motivation of an ant or the technology of hypothetical aliens.

Just think of how people who lived only 100 or 200 years ago pictured the future: the world we live in would have seemed a fantasy to them, and today's technology beyond imagination. Yet they, and we, are all human beings. They wouldn't even understand what they were being asked to do on these tests, however cleverly the tests were devised.

Any test is a framework. Moreover, a test says a lot about its author: about his level of development and about his own framework.

I see where your logic is going with this.

"If a computer doesn't pass the test, it doesn't mean it doesn't have intelligence. Perhaps it thinks differently than we do. Perhaps we shouldn't understand it and it shouldn't understand us." And so on...

Can you imagine if the question were posed that way in school exams? )

 
Peter Konow:

I see where your logic is going with this.

"Just because a computer doesn't pass the test doesn't mean it has no intelligence. Perhaps it thinks differently than we do. Perhaps we shouldn't understand it and it shouldn't understand us." And so on...

Can you imagine if the question were posed that way in school exams? )


It's a strange thing. You supposedly know what I'm talking about... and on that basis you attribute your own conjecture to me. But it's your distorted vision that has turned my words upside down.
