Big projects. - page 7

 
Vizard_:

It's about time we had something from "AI" to code)))

Yes, let's give it a try.))

 
Peter Konow:

2. Read about Alan Turing's test. Modern AI is not even ready to fail it decently.


Yes? Not ready at all.

The Turing test has been passed (at a child's level of difficulty)
  • 2008.06.14
  • habrahabr.ru
It was done by a program that convinced people it was a 13-year-old boy from Odessa, Ukraine. Under the conditions of the Turing test, it is considered passed if the program manages to convince at least 30% of the judges of its humanity over a 5-minute text conversation. The program, developed by Vladimir Veselov from Russia and a Ukrainian...
 
Galina Bobro:

Yes? Not ready at all.

https://ru.wikipedia.org/wiki/%D0%A2%D0%B5%D1%81%D1%82_%D0%A2%D1%8C%D1%8E%D1%80%D0%B8%D0%BD%D0%B3%D0%B0

Read the standard interpretation of what this test should be. That's not what these contests have turned it into at all.

Thirty percent of the judges and 5 minutes of communication on one topic is NOT the Alan Turing test, but a twisted version of it, designed specifically for the contest, not for testing real AI.

They might as well have compared it to a 3-year-old child; the pass rate would have been even higher.



So "seemingly" not ready at all.))


And why exactly 30 and not 70 percent of the judges? What kind of nonsense is that?))

Turing test — Wikipedia
  • ru.wikipedia.org
The standard interpretation of this test reads as follows: "A human interacts with one computer and one human. Based on the answers to his questions, he must determine whether he is talking to a human or a computer program. The task of the computer program is to mislead the human into making the wrong choice"...
 
Peter Konow:

Yes, let's give it a try.)

I don't need to; I already have almost all the latest "calculators with memory".
I was just hinting that you can write a lot and for a long time, but it still comes to the same thing)))

 
Vizard_:

I don't need to; I already have almost all the latest "calculators with memory".
I was just hinting that you can write a lot and for a long time, but it still comes to the same thing)))

I don't know what you mean.
 

Hmmm... the other thing that's scary is that all these self-organizing Kohonen networks and the other varieties, all these ideas, are 40 years old. And this AI that sort of passed the test (and it surely would have passed if the patterns had been trained on big phrases from some social network) shouldn't be complicated either; there just wasn't the computing horsepower, or access to that much correspondence, before... What is frightening is that in 40 years there have been no new ideas besides simple templates that overlap and create new ones based on them.
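For reference, the self-organizing (Kohonen) map idea mentioned above fits in a few lines. This is a generic textbook sketch, not anything from this thread; all names and parameters are illustrative:

```python
import random
import math

# Minimal Kohonen self-organizing map: a 1-D chain of units competes
# for 2-D inputs; the winner and its grid neighbors move toward each sample.
def train_som(samples, n_units=10, epochs=50, lr=0.5, radius=2.0):
    units = [[random.random(), random.random()] for _ in range(n_units)]
    for _ in range(epochs):
        for x in samples:
            # best matching unit = closest weight vector
            bmu = min(range(n_units),
                      key=lambda i: (units[i][0] - x[0]) ** 2
                                  + (units[i][1] - x[1]) ** 2)
            for i in range(n_units):
                # neighborhood strength falls off with grid distance from the winner
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                units[i][0] += lr * h * (x[0] - units[i][0])
                units[i][1] += lr * h * (x[1] - units[i][1])
        lr *= 0.95       # decay learning rate
        radius *= 0.95   # shrink neighborhood
    return units
```

After training, the units arrange themselves along the structure of the data, which is the "simple templates" point: the whole mechanism is competitive weight updates, nothing more.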

Maybe we just don't know what is really being studied and used there now.


My guess:

An intelligent AI should "think in images", i.e. everything should be represented as sub-objects of some kind... And it should be programmed so that a set of built-in "morals" is used as the reward: for example, a morality of the desire to communicate (social), a morality of familiarity (because everything new is strange to a human), and so on and so forth... I.e. it must be a pure self-organizing network which only receives guidance from these pre-installed morals. (The boredom subdivision of the social morality: if there are few new variables, there is a penalty; this will make it move, and move so that there is a lot of new stuff to learn.) True, nobody needs to build a physical robot for this, as everything can be simulated. And the morality penalties should be very strict, up to a wipe of all templates, or a spontaneous wipe. In fact, you should get a very human-like bot (like in the movie Aliens 3: you could immediately guess that she was not a person but a robot, because she was too human for a human).

In fact, if such a robot is locked up in a very confined space, then over time the society/boredom morality, nullifying all its memory... is tantamount to death; of course, before it dies, our AI will try to move...
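The "boredom penalty" mechanism described above can be sketched as a toy novelty-based reward. Everything here (the class name, the parameters) is hypothetical, just to make the mechanism concrete:

```python
# Hypothetical sketch of the "boredom morality": the agent's reward is
# driven by how many previously unseen observations it collects, and
# stagnation yields a penalty that pushes it to explore.
class BoredomMorality:
    def __init__(self, penalty=-1.0, reward_per_novelty=1.0):
        self.seen = set()
        self.penalty = penalty
        self.reward_per_novelty = reward_per_novelty

    def score(self, observations):
        novel = [o for o in observations if o not in self.seen]
        self.seen.update(novel)
        if not novel:
            # nothing new in sight: bored, so penalize
            return self.penalty
        return self.reward_per_novelty * len(novel)
```

In a confined space every observation is eventually "seen", so the score goes permanently negative, which is exactly the locked-room scenario above.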

That said, if you put such a bot among humans, then maybe it will learn a language; and if among other such bots, then there's a chance they will come up with some kind of language of their own. Of course, for that you need a kind of random imitation (imagining yourself in the place of the other object), which can also be tied to the social morality.

Also a morality of affection, with priorities set in advance: maximum affection for the "human", then for the object it communicates with the most, and so on and so forth.

In the Social subsection you could set something like priority communication with the "human" object; in general, a lot of human values can be put in in advance.

But not rules like "do not kill", "obey people" and "self-preservation": that really is nonsense. Let us imagine for a moment a scene in which a terrorist breaks in and guns down everyone and everything, and then asks such a bot to reload his gun. By the logic of the three rules, it must reload the gun. But by the morality principles it should take this terrorist down, driven by its morality of affection for people, lest he kill more, and then die from a heavy penalty from that same morality, up to a complete demolition of the system, even though in fact the robot is only reacting to the sight of the dead bodies (from the outside it will seem that it acted on emotion; in short, you get a human).

And the familiarity morality may prohibit development above a certain level, while the social morality, given explicit super-priority, will make it chatter almost 24 hours a day. They could even chat with each other, parodying the moments when humans "fool around".

In general, it will not be a robot but walking morality, perhaps with a childish character (it will love to fool around and make jokes).

And with time we will understand the right chain for proper learning, and this chain will be something like a genome.

And it seems to me that the future belongs to such a gizmo....

And this thing may not understand its own self, but it will be indistinguishable from a human. However, this thing may decide to write its own AI, and nobody knows what it will write there.

The funny thing is that such a bot will most likely learn very slowly; and, most surprisingly, logically it will also have to be taught how to speak, then how to read, and even how to program.

Perhaps one day at some robotics exhibition one of the visitors will be such an android, which enjoys (receives a reward under its NN system) society (communicating with people), and will look with bewilderment at all sorts of other "primitive" robots exhibited as the latest developments. And we will not even notice that one of us is no longer human at all.

It's only a matter of time before we simulate a virtual playground and put these things there. Ha-ha-ha (Ominously)

 
Alexandr Andreev:

Everything written is implemented with the help of the GA.

 
Vizard_:

Everything written is implemented with the help of the GA.


The GA is the pattern search; the NN is the patterns themselves.
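A toy illustration of that division of labor, under the usual neuroevolution reading: the genetic algorithm searches the weight space, and the network is what stores the found pattern. The network size, the selection scheme and the XOR target are my assumptions, not anything from this thread:

```python
import random
import math

# Tiny 2-2-1 tanh network: the "pattern holder".
def forward(w, x1, x2):
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # XOR truth table

def fitness(w):
    # negative squared error over the XOR table (higher is better)
    return -sum((forward(w, a, b) - y) ** 2 for a, b, y in CASES)

# The GA: the "pattern search" over the 9 weights.
def evolve(pop_size=60, gens=200, sigma=0.5):
    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]  # truncation selection, elitist
        pop = parents + [
            [g + random.gauss(0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)
```

No gradients anywhere: the GA only ever sees the fitness score, which is the sense in which it "searches" while the network merely "holds" the result.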

 

Don't you think that all life on Earth is already an AI created by someone? A product clad in a variety of shells, with a set of software that can adapt to certain conditions and can reproduce.

We admire a beautiful butterfly and before that it existed as a chrysalis and before that as a caterpillar and ...

Without a certain program this cannot be achieved. However long a pebble lies by the roadside, it will not become a butterfly.

There are many things people do not know. And in today's world, the most interesting things are already being concealed.

 
Facebook AI Invents Language That Humans Can't Understand: System Shut Down Before It Evolves Into Skynet
  • Tech Times
  • www.techtimes.com
Facebook shut down one of its artificial intelligence systems after AI agents started communicating with one another in a language that they invented. The incident evokes images of the rise of Skynet in the iconic "Terminator" series.  ( ) Facebook was forced to shut down one of its artificial intelligence systems after researchers discovered...