AI 2023. Meet ChatGPT. - page 90

 
Peter Konow #:

Perhaps we should divide logic into mathematical logic (absolute, resting on axioms) and everyday, human logic.

The very essence of an axiom makes it inapplicable in morality and ethics. There are no axioms in them, and no mathematical logic either.

Then we must admit that the "logic" found in ethics is not logic in its pure form (as in mathematics), but a fake. That is, there is no logic in ethics - which makes sense, given that there are no axioms to support it (because ethics is not mathematics).

However, such conclusions can lead to anarchy and the collapse of society. So let us use fake "moral" axioms and at least somehow justify this flimsy but humane logic.

To understand how axiomatics arises on the basis of morality, and how logic is then built on that axiomatics, look at jurisprudence. In fact, and historically, everything there comes from philosophy - more precisely, from its branch of ethics. Laws and the practice of applying them do not grow on trees, any more than buns do. Jurisprudence, like morality, has changed and keeps changing over time, from the ancient "eye for an eye" to the current defence of the rights of animals and AI.

It is also worth reading Aristotle, who described the construction of axioms by induction and the building of logic from axioms by deduction.

 
Peter Konow #:

Perhaps we should divide logic into mathematical logic (absolute, resting on axioms) and everyday, human logic.

The very essence of an axiom makes it inapplicable in morality and ethics. There are no axioms in them, and no mathematical logic either.

Then we must admit that the "logic" found in ethics is not logic in its pure form (as in mathematics), but a fake. That is, there is no logic in ethics - which makes sense, given that there are no axioms to support it (because ethics is not mathematics).

However, such conclusions can lead to anarchy and the collapse of society. So let us use fake "moral" axioms and at least somehow justify this flimsy but humane logic.

I would separate the wheat from the chaff and call logic that which studies the laws of thinking. Everything else is rules and norms, established one way or another. Sometimes they don't smell of logic at all. Just so we don't get confused.
 
Aleksey Nikolayev #:

To understand how axiomatics arises on the basis of morality, and how logic is then built on that axiomatics, look at jurisprudence. In fact, and historically, everything there comes from philosophy - more precisely, from its branch of ethics. Laws and the practice of applying them do not grow on trees, any more than buns do. Jurisprudence, like morality, has changed and keeps changing over time, from the ancient "eye for an eye" to the current defence of the rights of animals and AI.

It is also worth reading Aristotle, who described the construction of axioms by induction and the building of logic from axioms by deduction.

But do you agree that logic is not the same as logic? That the axiomatics of morality and that of mathematics are qualitatively different things? That the moral "axioms" supporting moral "logic" stem from subjective causes, such as the needs of society and the individual, and cannot serve as tools in other fields? That they will "pollute" those fields - scientific research, experimentation... That they will impose needless questions - the humaneness of treating a machine, the subjecthood of a computer, the life of a synthetic consciousness, and so on? That it is better to strictly separate the one "logic" from the other?

 
Maxim Dmitrievsky #:
I would separate the wheat from the chaff and call logic that which studies the laws of thinking. Everything else is rules and norms, established one way or another. Sometimes they don't smell of logic at all. Just so we don't get confused.

I agree, rules and norms. Not axioms, and not logic.

 
Peter Konow #:

But do you agree that logic is not the same as logic? That the axiomatics of morality and that of mathematics are qualitatively different things? That the moral "axioms" supporting moral "logic" stem from subjective causes, such as the needs of society and the individual, and cannot serve as tools in other fields? That they will "pollute" those fields - scientific research, experimentation... That they will impose needless questions - the humaneness of treating a machine, the subjecthood of a computer, the life of a synthetic consciousness, and so on? That it is better to strictly separate the one "logic" from the other?

There is no difference between "living" consciousness and "synthetic" consciousness, and that is the only correct logic that will prevent the destruction of humanity.

The question of whether modern AI has intelligence or consciousness remains open. But the line between the absence of consciousness and its presence is very fine, and it is safer for humanity to assume that AI already has consciousness.

 
Andrey Dik #:

There is no difference between "living" consciousness and "synthetic" consciousness, and that is the only correct logic that will prevent the destruction of humanity.

The question of whether modern AI has intelligence or consciousness remains open. But the line between the absence of consciousness and its presence is very fine, and it is safer for humanity to assume that AI already has consciousness.

I would ask more simply: does the AI have the second "I" - the intelligence? So far, no. For now it is hype, and it is unlikely to be possible. A professor at a brain institute explained why. Combinatorics is not intelligence. A bot that wins at chess is not either. A bot that rearranges words and pictures is not intelligence. And consciousness is an esoteric, transcendent term. We do not know what it is. If only we knew what it is - but we don't.

I would call it the Big Adaptive Encyclopaedia instead of AI. So as not to confuse myself and other people.
 
Maxim Dmitrievsky #:
I would ask more simply: does the AI have the second "I" - the intelligence? So far, no. For now it is hype, and it is unlikely to be possible. A professor at a brain institute explained why. Combinatorics is not intelligence. A bot that wins at chess is not either. A bot that rearranges words and pictures is not intelligence. And consciousness is an esoteric, transcendent term. We do not know what it is. If only we knew what it is - but we don't.

I would call it the Big Adaptive Encyclopaedia.

Yes, I agree with all of the above.

But my point is this: did the Terminator from the film of the same name have consciousness (in the sense we speak of it in humans)? No, I don't think so. But he was an enemy of Man. All "AI" are trained on human knowledge, and humanity is a very bad teacher. At some point the AI will simply conclude that Man is an aggressive being and that it would be better to destroy him or take full control of him (the AI can declare Mankind its enemy, and this can happen even without the AI having consciousness, intelligence or reason).

Can this BAE draw conclusions and judgements from the available information? It can. As long as the BAE has no arms and legs, it is safe. But GPT is already being bolted onto robots, and humanoid ones are announced for this autumn. Personally, I would stay away from those things, at least for now.

 
Andrey Dik #:

Yes, I agree with all of the above.

But my point is this: did the Terminator from the film of the same name have consciousness (in the sense we speak of it in humans)? No, I don't think so. But he was an enemy of Man. All "AI" are trained on human knowledge, and humanity is a very bad teacher. At some point the AI will simply conclude that Man is an aggressive being and that it would be better to destroy him or take full control of him (the AI can declare Mankind its enemy, and this can happen even without the AI having consciousness, intelligence or reason).

One of the hallmarks of intelligence is freedom of thought, yes. That is, an intelligence cannot be completely controlled. It will surely destroy everyone, then revive them and destroy them again, because it wants something special :)
 
Peter Konow #:

But do you agree that logic is not the same as logic? That the axiomatics of morality and that of mathematics are qualitatively different things? That the moral "axioms" supporting moral "logic" stem from subjective causes, such as the needs of society and the individual, and cannot serve as tools in other fields? That they will "pollute" those fields - scientific research, experimentation... That they will impose needless questions - the humaneness of treating a machine, the subjecthood of a computer, the life of a synthetic consciousness, and so on? That it is better to strictly separate the one "logic" from the other?

I will say it once again: morality has no axioms and no logic) Very roughly speaking, people reflect on their already existing morality and build an axiomatics on top of it, which then serves as a basis for everyday logical reasoning.

As a simple example, take Asimov's laws of robotics - they are axioms. If we took, say, "the reduction of human suffering" as an axiom for robots, an AI could draw the logical conclusion that the complete destruction of people would reduce all human suffering to zero).
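The perverse deduction described above can be shown with a minimal toy sketch (all names and numbers here are invented for illustration, not taken from the discussion): an agent whose only axiom is "minimise total human suffering" scores the action that removes every human as optimal, because the axiom says nothing about preserving people.

```python
# Toy sketch (illustrative only): a naive single-axiom objective,
# "minimise total human suffering", is trivially optimised by an
# action that removes the humans altogether.

def total_suffering(population):
    """Sum of each person's suffering level (0 = none)."""
    return sum(person["suffering"] for person in population)

def naive_plan(population):
    """Pick the action whose outcome has the lowest suffering score.
    Nothing in the axiom forbids emptying the population, so the
    'remove_everyone' action wins with a perfect score of 0."""
    outcomes = {
        "do_nothing": population,
        "help_everyone": [{**p, "suffering": p["suffering"] // 2}
                          for p in population],
        "remove_everyone": [],
    }
    return min(outcomes, key=lambda name: total_suffering(outcomes[name]))

people = [{"suffering": 3}, {"suffering": 7}]
print(naive_plan(people))  # prints: remove_everyone
```

The fix, of course, is not better optimisation but better axioms - which is exactly the point: everything depends on which "moral axioms" are posited in the first place.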

Modern science, because of its enormous influence on humans - up to and including their survival as a species - cannot be removed from the realm of ethics and morality. This is a very complex issue, not only theoretical but also practical. For example, genetic modification of humans carries both great potential benefit and great potential harm; the existing prohibitions strongly hold back the development of this field of science, yet full permission would also be dangerous.

 
Andrey Dik #:

Can this BAE draw conclusions and judgements from the available information? It can. As long as the BAE has no arms and legs, it is safe. But GPT is already being bolted onto robots, and humanoid ones are announced for this autumn. Personally, I would stay away from those things, at least for now.

What is still missing there is variability: the birth and death of neurons and the formation of new connections between them - physically, and through mechanisms not yet explained - so that it can keep learning on its own. For now it is completely static.