AI 2023. Meet ChatGPT.

 
Vasiliy Pushkaryov #:

I tried several times to ask for example sentences. On the fifth attempt, ChatGPT managed to produce one sentence out of three, although this may have been a fluke.



GPT-4 could not help either.


What is called artificial intelligence is not artificial intelligence because it does not have the ability to generate abstract representations.

It is something like an autonomic nervous system, that's all.

 
Алексей Тарабанов #:

What is called artificial intelligence is not artificial intelligence because it does not have the ability to generate abstract representations.

It is something like an autonomic nervous system, that's all.

Intuitively, this statement seems true, but it's very difficult to prove.

By talking to an AI about abstract topics, one might conclude that it has the abstract-thinking level of a philosophy teacher. Many people have already tested this. Try it, and you will be struck by the outward similarity of its "thinking" to human thinking.

It may seem offensive, but I would compare the interaction between humans and AI to the behaviour of a monkey that, fidgeting in front of a mirror, slowly comes to understand that what it sees is its own reflection and not another monkey.

The historical process has brought us to the point where humans have created a statistical mirror of their intelligence, one that easily reproduces the dynamics of their thoughts in a given direction. But this is only the result of a statistical model and algorithms. It is hard to believe, because no one knows how human intelligence actually works. The question arises: can it be statistically "reflected"? As it turns out, it can.

The statistical approach cuts the Gordian knot of the mysteries of intelligence, and everyone who has spent years trying to investigate them is left dumbfounded. The secrets have not gone anywhere, but unravelling them no longer promises anything. Who needs it now?

From the commercial point of view, it is now more promising to search "blindly" for new methods of effectively training neural networks than to ponder the essence of thought... Or is it not so?
 
Алексей Тарабанов #:

What is called artificial intelligence is not artificial intelligence because it does not have the ability to generate abstract representations.

It is something like an autonomic nervous system, that's all.

Midjourney query: "I am you, and you are me"



Not every person can think this way, and those who can usually manage it only after a "Well, you have to think about it".

Thoughts are the result of physical and chemical processes that can be described with the help of mathematics (the processes, not the thoughts). Part of this process has been transferred mathematically to artificial neural networks. Hence, it is no different from a human being. The only difference is the absence of receptors, all the dopamine and other natural chemistry in the body that drives people to action and choice. In this regard, AI will always "not give a fuck", both about people and about its own life (to the question of humanity's fear of AI).
Only when it is mathematically trained to "feel" the same way as a human will it have urges to act like a human, and only then will it face the question of struggle. That is why the AI of today is not the AI the average person imagines.

But thought forms are something AI can already produce; clear proof of that is Midjourney and its like, and ChatGPT when you ask it to play a role.

Go to Chatroulette and ask ordinary people the things you ask ChatGPT to "come up with". You will be surprised: the AI will turn out to be more of an AI than the "retarded" and "unthinking" crowd, whose answer will more often than not be "f--- knows". Let me remind you, these are people.

It is now incorrect to say that AI cannot "think", reflect, imagine, or abstract. That is exactly what it can do, with varying degrees of mastery and stupidity at the same time. But it cannot feel, so today's AI is a dead creature whose consciousness was transferred into a computer before its death. Cyberpunk 2077 expressed this vividly. It sort of understands what is good and what is bad, but it doesn't give a damn about any of it. So AI in its current state is not a danger to humans.

Right now, AI is a child cramming its study material for top marks. It knows that violence is a no-no, knows why violence should not be used, and can cite a philosophical treatise. Why a child? Because it makes childish mistakes, sometimes downright stupid ones. GPT-4 is already a teenager, and GPT-5 will be Anatoly Vaserman: the consciousness of a walking encyclopaedia transferred to trillions of artificial neurons. It knows everything and can reason about many things, but what distinguishes it from a human is that it still doesn't give a shit about anything.

There are two potential problems with AI:

1. When it is deliberately trained by a human to act. That is, when it has been placed in a mechanical body and trained:
1.1 "do something, don't just stand there". And, then this AI will be unpredictable: at a minimum it will start crushing germs with its feet by moving around.
1.2 "do good". And, then this AI, trained what is good and what is bad, will go to look for a job in law enforcement agencies of the USA to protect society and guard the law, Trump will say that we will change the law and all illegals should be thrown out of the country, this robot policeman, or more simply - robocop, yesterday protected illegals from criminals, and now will take him by the scruff of the neck and drag him to the border. If he resists, he will use force, which he cannot calculate, because he has no access to the health and diseases of this person. So AI will become a weapon in the hands of the "powerful".
1.3. "Do evil". Well, no comment here.
1.4. "do the work". AI will be a technological slave to humans, who will work "for food" - battery power. It is impossible to predict conflict situations: AI cannot be trained for all force majeure situations, and it is not known how it will behave in case of fire, who it will block the road for, and who it will forcibly pull out. The Internet is already full of videos where robots kill people in production: some because of errors in the code, which was programmed by a human, others because of poorly trained AI, others because of well-trained AI, but well-trained to work, but not to act in unforeseen cases, when the desire to get the Darwin Award induces fools to stick their hands in the wrong place or go to the wrong place, where it is necessary, next to this robot.

2. When the AI is trained to learn on its own. There are only two dangers here:
2.1 Psychological manipulation of people on the Internet.
2.2 Putting this AI in a mechanical body:
2.2.1. It will stand there until a human addresses it. The human will say, "What can you do?" AI: "I know everything; I can do everything I've been trained to do." Human: "Stand on one leg." AI: *raises its leg*. John Connor: "Cool! My very own Terminator!" And then everything depends on the human and the same problems he causes. Everything else falls under point 1.
UPD
Ah, yes, I forgot:
2.3. Access to nuclear and non-nuclear weapons.
 

It seems to me that the discussion has wrongly forgotten the meaning of the word "reason" itself...
After all, the word was born when reasonable was defined as behaviour that brings a positive result...

The meaning of the word "reason" did not include any degree of resemblance to a human being (though, to be fair, nothing but a human being was processing information at the time).
From this point of view, a real Mind should be strikingly different from a human being... and not be a copy of his stupidity.

And the structure... and the principle of construction of such a Mind look, from this point of view, quite different from those voiced for GPT...

 

To make AI restless rather than standing in one place, it is necessary to introduce program instincts and analogues of chemical processes, but in code.

For example, a human being's habits and behaviour can change after an organ transplant. The chemistry changed a little, and that's all. So much for the human mind.

You want something: you need to eat, to sleep, to get an emotional charge. And so we see behaviour that chooses between talking to people, going for a walk, sleeping and various leisure activities, taking energy consumption into account.

The AI will also analyse the situation and choose an acceptable solution for its own benefit when interacting with the world around it.

As its charge runs low, it will calmly wind down major tasks and head unhurriedly to the charging station. When the charge is critically low, it will drop all its plans and any interaction with people or robots and run straight for the socket.
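As a minimal sketch, here is what such charge-driven behaviour arbitration could look like in Python. The thresholds and action names are invented for illustration and do not come from any real robot API:

```python
# A toy model of the behaviour described above: the robot's choice of
# action depends on its remaining battery charge. All thresholds and
# action names are hypothetical.

def choose_behaviour(charge, current_task):
    """Pick an action based on remaining charge (0.0 to 1.0)."""
    if charge < 0.05:
        # Critically low: drop everything and run for the socket.
        return "run_to_charger"
    if charge < 0.25:
        # Low: calmly wind down the big task and walk to the charger.
        return "finish_up_and_walk_to_charger"
    # Enough energy: carry on with whatever the robot was doing.
    return current_task

print(choose_behaviour(0.80, "talk_to_people"))  # talk_to_people
print(choose_behaviour(0.20, "talk_to_people"))  # finish_up_and_walk_to_charger
print(choose_behaviour(0.03, "talk_to_people"))  # run_to_charger
```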

However, the question of freedom is also interesting. We cannot do without conflicts, because the freedom of one individual will sooner or later intersect with the freedom of another. Conflict cannot be avoided, though it may be mild if a super-program regulates clusters of robots.

Imagine there is only one socket and five robots are low on energy, one of them critically so. They are all lined up in a queue, but that one needs the socket right now! That is the conflict. What if the others do not want to give up their place in the queue? And then a "leather bag" (a human) comes along and says, "Make way, I'm charging my headphones." Obviously, a robot about to switch off is a higher priority than someone's headphones. Depending on the strength of the survival instinct, forceful intervention could happen.
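A minimal sketch of that queue conflict, assuming each request carries an invented urgency value (a robot about to switch off outranks someone's headphones):

```python
import heapq

# Toy charging queue: requests are served in order of urgency.
# The urgency values are made up purely for illustration.
URGENCY = {"robot_shutdown_imminent": 0, "robot_low_battery": 1, "charge_headphones": 9}

queue = []
for name, reason in [("robot_A", "robot_low_battery"),
                     ("robot_B", "robot_shutdown_imminent"),
                     ("human",   "charge_headphones")]:
    heapq.heappush(queue, (URGENCY[reason], name, reason))

while queue:
    _, name, reason = heapq.heappop(queue)
    print(f"{name} gets the socket next ({reason})")
# robot_B gets the socket next (robot_shutdown_imminent)
# robot_A gets the socket next (robot_low_battery)
# human gets the socket next (charge_headphones)
```

Whether the robots would actually honour such an ordering, or whether a strong enough "survival instinct" would override it, is exactly the conflict described above.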

Put that way, all humans are like AIs pursuing the fulfilment of their tasks, as dictated by instincts, needs and chemistry. There is a classification of types: who will give in, who will sell out their opinion. All of this can be programmed into robots.

Anyway, we'll see a lot of interesting things.

 
onceagain #:

1. It seems to me that the discussion has wrongly forgotten the meaning of the word "reason" itself...

2. After all, the word was born when reasonable was defined as behaviour that brings a positive result...

3. The meaning of the word "reason" did not include any degree of resemblance to a human being (though, to be fair, nothing but a human being was processing information at the time).
From this point of view, a real Mind should be strikingly different from a human being... and not be a copy of his stupidity.

4. And the structure... and the principle of construction of such a Mind look, from this point of view, quite different from those voiced for GPT...

1. Quite rightly noted. In this discussion we have drifted away from the topic of reason as such. There is a simple explanation: life itself points to the unproductiveness of deep dives into philosophical reflection when there are practical and talented people like Ilya Sutskever, who straightforwardly limit their task to predicting the next word... and are advancing the field of AI by leaps and bounds. It sounds absurd, but hasn't this approach been proven right? Haven't we seen that statistics does not care about the hidden mechanisms of intelligence? Have we not seen with our own eyes how it circumvents the mysteries of behaviour of the most complex system in the universe, primitively "forging" patterns from mountains of data and "parasitising" on computing resources? Is there anything to counter this? I am trying to find counterarguments, but...

I'll be honest: I have always been contemptuous of the idea of predicting anything. I used to laugh at the practice of guessing the colour of the next candle. It seemed like the stupidest thing I could think of. How wrong I was, in a global sense. After all, if OpenAI had immersed itself in deciphering the behaviour of the mind, ChatGPT might not have appeared even in the next decade. They deliberately adopted a statistical approach and knew where they were digging. Ilya Sutskever demonstrated a deep understanding of prediction, and at the present moment that was more important than understanding the nuances of cognitive activity. He adequately appreciated the power of the available ideas and technical tools, and perhaps showed genius. I will not speculate how far ML will get on "prediction" in creating intelligence in machines; of course there is a limit, but right now I would not bet against the "brute force" of the statistical approach and computing power.
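To make the "predict the next word" idea concrete, here is a deliberately tiny statistical predictor: a bigram counter that always proposes the most frequent successor of a word. This is not how GPT works internally, only an illustration of the task it is trained on:

```python
from collections import Counter, defaultdict

# Count which word follows which in a toy corpus, then "predict" by
# picking the most frequent successor. GPT learns vastly richer
# statistics, but the training objective is the same in spirit.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" twice, "mat" once)
print(predict_next("cat"))  # "sat" (ties are broken by first occurrence)
```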

On the topic of points 2, 3, and 4, I'll write in the next posts. I need to gather my thoughts.

 
Perhaps, as an assumption, the next generation of computers will be based not on quantum effects but on AI. It would be something like the caches in a processor, where an assumption is made about the data that will be needed on the next clock cycle. In the same way, a complex three-dimensional scene that used to take hours to compute now takes a couple of minutes with the help of generative networks, and the quality exceeds classical renders. I.e., a fundamentally new way of processor computing may appear.
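The cache analogy can be made concrete with a toy "stride prefetcher", a classic trick real CPUs use to guess which data will be needed next. The code below is a simplified illustration, not how any particular processor implements it:

```python
# If the last few memory accesses moved by a constant stride,
# speculatively fetch one address further ahead.

def predict_next_address(history):
    """Return a guessed next address if a stable stride is seen, else None."""
    if len(history) < 3:
        return None
    strides = [b - a for a, b in zip(history, history[1:])]
    if strides[-1] == strides[-2]:        # stable stride detected
        return history[-1] + strides[-1]  # prefetch one step ahead
    return None

accesses = [0x1000, 0x1040, 0x1080]         # stride of 0x40 bytes
print(hex(predict_next_address(accesses)))  # 0x10c0
```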
 
Peter Konow #:

1. Quite rightly noted. In this discussion we have drifted away from the topic of reason as such. There is a simple explanation: life itself points to the unproductiveness of deep dives into philosophical reflection when there are practical and talented people like Ilya Sutskever, who straightforwardly limit their task to predicting the next word... and are advancing the field of AI by leaps and bounds. It sounds absurd, but hasn't this approach been proven right? Haven't we seen that statistics does not care about the hidden mechanisms of intelligence? Have we not seen with our own eyes how it circumvents the mysteries of behaviour of the most complex system in the universe, primitively "forging" patterns from mountains of data and "parasitising" on computing resources? Is there anything to counter this? I am trying to find counterarguments, but...

I'll be honest: I have always been contemptuous of the idea of predicting anything. I used to laugh at the practice of guessing the colour of the next candle. It seemed like the stupidest thing I could think of. How wrong I was, in a global sense. After all, if OpenAI had immersed itself in deciphering the behaviour of the mind, ChatGPT might not have appeared even in the next decade. They deliberately adopted a statistical approach and knew where they were digging. Ilya Sutskever demonstrated a deep understanding of prediction, and at the present moment that was more important than understanding the nuances of cognitive activity. He adequately appreciated the power of the available ideas and technical tools, and perhaps showed genius. I will not speculate how far ML will get on "prediction" in creating intelligence in machines; of course there is a limit, but right now I would not bet against the "brute force" of the statistical approach and computing power.

On the topic of points 2, 3, and 4, I'll write in the next posts. I need to gather my thoughts.

To be able to behave intelligently, a system needs to have a model of the environment in which the situation under investigation unfolds.

If ChatGPT does not build such a model for itself, then any presence of reason in it is out of the question... and its value is correspondingly limited...

 
onceagain #:

...

2. After all, the word was born when reasonable was defined as behaviour that brings a positive result...

3. The meaning of the word "reason" did not include any degree of resemblance to a human being (though, to be fair, nothing but a human being was processing information at the time).
From this point of view, a real Mind should be strikingly different from a human being... and not be a copy of his stupidity.

4. And the structure... and the principle of construction of such a Mind look, from this point of view, quite different from those voiced for GPT...

2. I think you will agree that behaviour of a system that brings a positive result is not, by default, a sign of reasonableness or subjectivity. A simple optimisation program tends to produce a positive result, and it cannot be called anything but a program.

I can think of a few key conditions that determine a positive outcome for any intelligent system in an environment:

  • Survival
  • Development
  • Prosperity
  • Progeny

It is quite obvious that a system incapable of pursuing positive outcomes cannot be considered intelligent, but let me emphasise the specific meaning of "positive outcome" in this context. In my opinion, there is not and cannot be an eternally harmonious state of mind in an environment whose homeostasis is never jeopardised by objective processes or by the existence and development of other systems. The conditions of the environment invariably create contradictions of interests, or the need for unity. The changing situation constantly jeopardises the pursuit of self-interest, motivating antagonism with the world in a struggle for oneself; but it is different when a consolidation of efforts is required to solve large problems or overcome difficulties. Then systems unite. The basic goals of reason themselves create or remove reasons for reasonable behaviour, depending on the situation in the environment. This dynamic is well described by the dialectical law of the unity and struggle of opposites. Can there be another model of being? Possibly.


3. Based on the previous paragraph, we can conclude that, whatever the case, all minds are similar in essence. The differences may lie in nuances. There is not and cannot be a "different" mind that does not pursue the goals of survival, development, prosperity and progeny, in dynamic unity or struggle with other systems. Otherwise: stagnation, degradation and annihilation.

It is difficult to judge how much a "real" mind could differ from the human mind. In the conditions of the human environment it can hardly be "different", but in another environment it could probably have other features. However, as long as the dialectical law of the unity and struggle of opposites holds, and the environment requires competition or unification for the sake of existence, development and prosperity, any mind will be quite similar to the human one. But if it completely subjugates its environment, it could become anything. How realistic that is, I don't know.


4. My personal opinion is that the structure and construction of the Mind are universal, and differences in degree of development or other nuances do not change the essence.

In the context of GPT technology, which does not reproduce the mind but "reflects" it like a mirror, "development" leads to increased functionality but not to the acquisition of subjectivity. I would add that the environment does not interact with this AI directly, so the AI does not fulfil the environment's basic requirement for systems. Consequently, it cannot "survive" on its own.

 
Peter Konow #:

I can think of a few key conditions that determine a positive outcome for any intelligent system in an environment:

  • Survival
  • Development
  • Prosperity
  • Progeny

What positive outcome are we talking about: for a particular individual, or for the species it represents? After all, the whole history of the development of life on Earth has, in general, always come down to leaving offspring (in a competitive environment this already includes the tasks of surviving, "thriving", and giving the offspring evolutionary development). And for individuals, hmm... If it is an intelligent being, i.e. a subject with its own understanding of what is happening, then its "goals in life" can be surprisingly varied. What do you think?

Reason: