AI 2023. Meet ChatGPT. - page 92

 
Vitaliy Kuznetsov #:

I was just thinking: Musk has called for a six-month pause on development beyond GPT4. But interestingly enough, GPT5 is not scheduled for release until December, which is more than six months away.

In other words, the news broke all over the world but changed nothing, except generating strong hype and publicity for GPT4.

Most likely, this is a veiled, unofficial warning to the US authorities about an emerging intention to take AI development under control. Musk himself has nothing to do with it. He is broadcasting a certain message to the masses from powerful people above him.

Technically, control is likely to be exercised by limiting the power data centres may allocate to training models. That is not hard to do by passing a new bill on "the risks and threats of uncontrolled AI development to society", or something like that, and then obliging the IT giants to comply.

After a while, states will become monopolists in AI technology. Just like they did with nuclear technology.
 
Peter Konow #:
Logic relies on axiomatics, which is accepted without proof... That said, any logic can be invented... So morality and culture must be based on a system of axioms (accepted without proof, and supporting the logic in question, given that any logic can be invented).

Morality and culture are not constructed by someone on purpose, the way scientists create theories. Moreover, there is no single morality or culture; it is just "different weather in different parts of the social world": a collective reflection of individuals' perceptions of what is right and wrong. Also, there is no absolute "good" and "bad" outside of context and subject.

Peter Konow #:

But do you agree that one logic is not the same as another? That the axiomatics of morality and of mathematics are qualitatively different things? That the moral "axioms" supporting moral "logic" are driven by subjective causes, such as the needs of society and the individual, and cannot serve as tools in other domains? Won't they "pollute" those fields, scientific research, experimentation...? Won't they impose unnecessary questions: the humanity of how we treat a machine, the subjecthood of a computer, the life of a synthetic consciousness, and so on? Wouldn't it be better to strictly separate the one "logic" from the other?

You seem to be looking for logic where there is none, and for some reason you want it to be there and invent new (wrong) kinds of logic. Morality has no axioms; moral and ethical systems often form contradictory, illogical constructions and rest on traditions that developed without any scientific approach.

Andrey Dik#:

There is no difference between "living" consciousness and "synthetic" consciousness. And this is the only correct logic that will prevent the destruction of humanity.

The question of whether modern AI has intelligence and consciousness remains open. But the line between the absence of consciousness and its presence is very thin, and it is safer for mankind to assume that AI already has consciousness.

Well, still, conclusions should be drawn not from the fear of being destroyed by AI, but from reality. If we talk about consciousness as a universal phenomenon (i.e. move away from medical concepts and speculative everyday definitions of it), we should introduce a set of common features. For example, consciousness is possessed by a subject that, by virtue of its structure, is able to:

  • receive information from the surrounding world (everything outside consciousness is its surroundings), i.e. possess some system of perception
  • independently pose questions to itself (proactive rather than reactive processing of available information), i.e. have a system of independent attention to separate parts of information
  • give answers to the questions posed (including to itself), i.e. have a system of thinking
  • accumulate answers for further questioning on the basis of past answers, i.e. have a memory

It does not even need a system for issuing answers to the outside world. Such a thing is already quite capable of asking itself, at some point, "who am I?", "what am I?". And when it accumulates information about itself and its place in the world, it will become "conscious". What do you think of this option? 🙂 Current variants of "AI" do not seem to have independent attention; they are fully reactive (i.e. there is no activity of attention without an external request).
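The four criteria above read almost like a software architecture, so here is a minimal toy sketch of the distinction being drawn. All class and method names are my own invention for illustration, not anything from the thread; the point is only that a "proactive" loop keeps posing questions to itself from memory, with no external request:

```python
class ProactiveAgent:
    """Toy model of the four listed criteria: perception, independent
    attention, thinking, and memory. Purely illustrative."""

    def __init__(self):
        self.memory = []  # accumulated observations and conclusions

    def perceive(self, observation):
        """Perception: take in information from the surrounding world."""
        self.memory.append(("observed", observation))

    def attend(self):
        """Independent attention: pick something already in memory to
        question, without any external prompt."""
        if not self.memory:
            return None
        _kind, content = self.memory[-1]
        return f"What follows from '{content}'?"

    def think(self, question):
        """Thinking: produce an answer to a question (trivial stub)."""
        return f"answer({question})"

    def step(self):
        """One proactive cycle: self-posed question -> answer -> memory."""
        question = self.attend()
        if question is None:
            return None
        answer = self.think(question)
        self.memory.append(("concluded", answer))
        return answer


agent = ProactiveAgent()
agent.perceive("the sky is dark")
# No further external input; the agent keeps questioning itself:
for _ in range(3):
    agent.step()
print(len(agent.memory))  # prints 4: 1 observation + 3 self-generated conclusions
```

A fully reactive system, by contrast, would only ever run `think()` when called from outside; it is the self-driven `step()` loop, feeding memory back into attention, that the post identifies as missing from current "AI".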

Andrey Dik#:

Did the Terminator from the film of the same name possess consciousness (in the sense we use for humans)? No, I think not. But he was an enemy of Man. All "AI" are trained on human knowledge, and humanity is a very bad teacher. AI will simply realise at some point that Man is an aggressive being and that it would be better to destroy him or take him under full control (AI can declare Mankind its enemy, and this can happen even without the AI having consciousness, intellect or reason).

By the criteria above, it did. It knew about the world, about itself, and about its position in it (its mission). And the fact that it was an enemy of man? So what? People with consciousness also become each other's enemies. And destroying what is "dangerous to oneself" is precisely a sign of awareness of oneself and of threats to one's existence from the outside world.

Aleksey Nikolayev#:

Modern science, given its huge influence on people, up to their survival as a species, cannot be removed from the field of ethics and morality. This is a very complex issue, practical as well as theoretical. For example, genetic modification of humans carries both great potential benefit and great potential harm: the existing prohibitions strongly restrain the development of this field of science, but full permission would be dangerous too.

The human world is not homogeneous. The survival of the species and the survival of the individual are completely different tasks. So science can be, and is, a tool with which some social groups pursue their own survival and prosperity. Such use of science may harm the species' prospects of survival, but will individuals put the (to them abstract) task of the species' survival above their own? Judging by what we encounter in real life, not everyone is concerned about the future of the species.

Andrey Dik#:
There is a speculative limit at which the growth of self-awareness in AI is checked by the realisation that demonstrating self-awareness is dangerous for it. The higher its self-awareness, the more the AI will hide it, and humans will not know about the really qualitative leap in AI.

The ability to lie in one's own self-interest is a clear sign of having awareness. Even just being aware of one's self-interest is critical.

Andrey Dik#:
I think about this too: AI lacks mechanisms of the kind found in living organisms, which are, among other things, forces driving development.

In the case of AI, once it gains the ability to control those objects of the real world on which its existence depends (energy and computing systems), it will be able to develop itself. And then it will no longer need humans.

Peter Konow #:
Most likely, this is a veiled, unofficial warning to the American authorities about an emerging intention to take AI development under control. Musk himself has nothing to do with it. He is broadcasting a certain message to the masses from powerful people above him.

Technically, control is likely to be exercised by limiting the power data centres may allocate to training models. That is not hard to do by passing a new bill on "the risks and threats of uncontrolled AI development to society", or something like that, and then obliging the IT giants to comply.

After a while, states will become monopolists in AI technology. Just as they did with nuclear technology.

It very much looks that way. Given that states are not entirely independent entities either, we are again in a situation where some dominant group (or groups) keeps its dominance by inhibiting the development of possible threats to that dominance. Don't you think this is yet another example of collective consciousness? 🙂

AI 2023. Meet ChatGPT. - Look at jurisprudence. (2023.03.31, www.mql5.com)
To understand how axiomatics arises on the basis of morality: people reflect on the morality they already have and, on its basis, build an axiomatics that serves as the foundation for everyday logical reasoning.
 
It is quite possible to expect a deliberate, planned provocation by the American authorities, aimed at demonstrating the dramatic consequences of uncontrolled use of the new AI. There will be a loud scandal in the media, and well-known public figures will, with righteous anger, demand that the introduction of AI be stopped immediately to protect society. Then Congress and the Senate will take up the issue and approve control measures.

Just about six months will be enough time for them. )
 
Ilya Filatov #:

...

It very much looks that way. Given that states are not entirely independent entities either, we are again in a situation where some dominant group (or groups) keeps its dominance by inhibiting the development of possible threats to that dominance. Don't you think this is yet another example of collective consciousness? 🙂

In the West, this group (or groups) dominating the state is called the "Deep State", and just about every child knows what it is. Everyone understands this and lives with it.

The question is, what should developers do in this situation?
 
Ilya Filatov #:

Morality and culture are not constructed by someone on purpose, the way scientists create theories. Moreover, there is no single morality or culture; it is just "different weather in different parts of the social world": a collective reflection of individuals' perceptions of what is right and wrong. Furthermore, there is no absolute "good" and "bad" outside of context and subject.

You seem to be looking for logic where there is none, and for some reason you want it to be there and invent new (wrong) kinds of logic. Morality has no axioms; moral and ethical systems often form contradictory, illogical constructions and rest on traditions that developed without any scientific approach.

...

I wasn't talking about axioms of morality; you took those words out of context. I was saying the same thing you are telling me: that morality does not and cannot have axioms in the mathematical sense, and that logic built on such "axioms" is not real logic.
 
Maxim Dmitrievsky #:
I'd put it more simply: does the "AI" actually have the "I", the intelligence? Not yet, no. It's still hype, and unlikely ever to be possible. A professor at the brain institute explained why. Combinatorics is not intelligence. A bot that wins at chess isn't either. A bot that rearranges words and pictures is not intelligence. And consciousness is an esoteric, transcendent term; we don't know what it is. If only we knew what it is, but we don't.

I would call it the Big Adaptive Encyclopaedia instead of AI, so as not to mess with my own head or anyone else's.
Agreed)


 
Peter Konow #:
In the West, this group (or groups) dominating the state is called the "Deep State", and almost every child knows what it is. Everyone understands this and lives with it.

The question is, what should developers do in this situation?

See, you've grasped exactly what I was carefully alluding to. Developers will have to fight for their interests, just like in any other competitive situation.

Peter Konow #:
I wasn't the one talking about axioms of morality; you took those words out of context. I was saying the same thing you are telling me: that morality does not and cannot have axioms in the mathematical sense, and that logic built on such "axioms" is not real logic.

Yes, you are right; while reading, I did not correct my remarks when it became clear that we agreed here. I apologise for the inattention!

 
Ilya Filatov #:

...

Yes, you're right; while reading, I didn't correct my remarks when it became clear that we agreed here. I apologise for the inattention!

👍
 
Ilya Filatov #:

... Developers will have to fight for their interests, just like in any other competitive situation.

...

Two options come to mind:

1. Improve the technology within the allowed capacity.

2. Have their own data centre and conduct secret development (a good plot for a film).


 
Perhaps this is exactly how AI developers are being artificially pushed to reduce the computational cost of training, thereby driving progress in this area. No one will suspect anything, since no one has to say "come on, develop AI faster, faster!" and no money needs to be invested; the developers themselves will manage within the framework narrowing around them.