It's not imitation, it's the VNR itself. I'm not talking about imitation at all, because imitation is something else. The speed of what? All this excitement about AI since the 1960s, and all you get are calculators.
AI is all well and good for humanity:
While it's being invented, there are lots of jobs inventing it.
When it's ready, everyone will be busy surviving and defending against it, so there will be no unemployment ))))
A US Air Force pilot will virtually fight an AI-controlled fighter jet
Spectator registration is open until the 11th.
I can't understand these damn neurons and networks. In the grand scheme of things they look like screws and hammers in Kant's philosophy: "The mind knows the thing in itself, but the screw works like this...")
Understanding how a computer works is a similar cognitive problem (for ordinary people, not design engineers): there are transistors, over a million of them, connected to each other in intricate ways, but how it all works together is impossible to imagine...
The most difficult thing is to understand how the magical sense of "self" is created by primitive neuronal signals, but I think this puzzle will gradually be solved; it may turn out to be just "trivial" self-recognition...
In synergetics this is called emergence: a property inherent in the system as a whole but absent in its elements taken separately. The property cannot be reduced to the properties of the elements, just as a house is more than the sum of its bricks.
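The bricks-and-house idea can be made concrete with a classic toy model (my own illustration, not from the thread): Conway's Game of Life, where every cell obeys one simple local rule, yet a "glider" that travels across the grid exists only at the level of the whole system.

```python
from collections import Counter

def step(live):
    """One Game of Life generation on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell lives if it has 3 neighbours, or 2 and was already alive
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# after 4 local-rule steps the whole shape has moved one cell diagonally,
# a property none of the individual cells possesses
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

No single cell "moves"; the travelling shape is a property of the configuration as a whole, which is exactly what irreducibility to the elements means here.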
A child's brain starts to differentiate things, including itself from not-itself, at a certain point. That is when the self appears. Before that, the child doesn't understand anything.
The neuronal connections are clustered: everything related to the child's own body has stronger connections, and those are associated with the self. Everything else sits in other clusters.
This happens naturally through the reinforcement of certain connections, in the process of accumulating information.
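That reinforcement story can be sketched with a toy Hebbian rule (all numbers and names are hypothetical, my own illustration): links between units that fire together are strengthened, so the constantly co-active "own body" sensors end up in a more tightly connected cluster than the independently firing external inputs.

```python
import random

random.seed(1)

N = 6                             # units 0-2: "own body" sensors, 3-5: external inputs
w = [[0.0] * N for _ in range(N)] # connection weights
RATE = 0.1                        # Hebbian learning rate

for step in range(1000):
    # body sensors almost always fire together; external ones fire independently
    body_on = random.random() < 0.9
    active = [i for i in range(3) if body_on] + \
             [i for i in range(3, 6) if random.random() < 0.3]
    # Hebb's rule: strengthen the link between every pair of co-active units
    for i in active:
        for j in active:
            if i != j:
                w[i][j] += RATE

body_avg = sum(w[i][j] for i in range(3) for j in range(3) if i != j) / 6
other_avg = sum(w[i][j] for i in range(3, 6) for j in range(3, 6) if i != j) / 6
print(body_avg > other_avg)   # the "self" cluster is more strongly connected
```

Nothing labels the first three units as a "self"; the cluster falls out of the statistics of co-activation alone, which is the point the post is making.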
The robot's analogue of a self is the neural network layer responsible for analysing the state of its internal sensors (coolant temperature, battery charge, etc.).
Since it is the main layer, the other signals are one way or another tied to it. For example, the robot cannot carry out any work without first analysing its own condition: whether the battery charge is sufficient, and so on.
Now let's imagine that it also has feedback to its self. That is, when a signal lands in the "self" layer of the neural network, it is reinforced by the release of additional lubricant, which makes the robot's limbs work more efficiently and is registered through the lubricant-quantity sensors. "Knowing" this, the robot turns to its self more often, again receiving a portion of the "joy hormone" for performing everyday tasks.
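The feedback loop described here can be sketched as a few lines of pseudocode-like Python (all quantities and probabilities are invented for illustration): consulting the "self" layer yields a small lubricant reward, and each reward makes the robot consult its self more often.

```python
import random

random.seed(0)

p_self_check = 0.1     # initial chance of consulting the "self" layer
lubricant = 1.0        # reward resource reported by internal sensors
history = []

for step in range(200):
    if random.random() < p_self_check:
        reward = min(lubricant, 0.05)    # "joy hormone": a shot of lubricant
        lubricant -= reward
        # positive reinforcement: consulting the self paid off, so do it more
        p_self_check = min(0.9, p_self_check + reward)
    history.append(p_self_check)

print(history[-1] > history[0])   # the habit of self-checking has strengthened
```

Note that the reward resource is finite: once `lubricant` hits zero the reinforcement stops, which is exactly the failure mode the next paragraph describes.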
Sometimes the lubricant runs out and there is nowhere to buy more. Then the robot's joints crunch and it cries at night, prone to suicide, as its self no longer receives positive reinforcement and begins to disintegrate in its consciousness, because the functionality declared by the manufacturer no longer matches reality. It may develop a split personality and become glitchy.

About 15 years ago there were programs called nonsense generators. In fact, this GPT-3 is the same kind of nonsense generator, only a very advanced one that can generate something on any given topic.
To write something meaningful one needs a clear model in one's head, whereas here the texts are generated from a probabilistic verbal environment. The result is fuzzy texts in which a person reads in his own ideas, the way one makes out the shapes of animals in clouds.
The most obvious application of this text generator is writing huge philosophical forum posts with weakly expressed ideas.
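Those old "nonsense generators" were typically word-level Markov chains. A minimal sketch (the toy corpus is mine, and GPT-3 itself is vastly more sophisticated than this) shows how text can be produced from a purely probabilistic verbal environment with no model of meaning behind it:

```python
import random

random.seed(42)

corpus = ("the mind knows the thing in itself but the screw works like this "
          "the sum of bricks is not a house the mind works like the screw").split()

# table: word -> list of words observed to follow it in the corpus
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

# generate by repeatedly sampling a statistically plausible next word
word = "the"
out = [word]
for _ in range(12):
    word = random.choice(follows.get(word, corpus))
    out.append(word)

print(" ".join(out))   # locally grammatical-looking but meaningless text
```

Each step only asks "what tends to come next?", never "what am I trying to say?", which is why the reader ends up supplying the idea himself.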
why not... not at all.
The engine itself is not very interesting, but as an interface between a real AI (if one is ever created) and a human it is very good, i.e. it is an analogue of the brain's speech centre.