AI 2023. Meet ChatGPT. - page 171

 
Ivan Butko #:

I have a similar story

I asked the chat, "Tell me a joke." It told me something about a cat that speaks many languages. I said, "What's so funny? Explain." It replied, "It's a play on words." I said, "There's no play on words," and it answered, "PolyglotCAT."

It's one of two things. 1. Meaning to teach the AI the subtleties of language, they loaded it with a pile of jokes based on wordplay. 2. We are being motivated to learn languages.

 

Hmmm...

I am rather sceptical about all this fuss around ChatGPT (and other neural networks). About ten years ago I already had the idea of using neural networks in trading, but after playing with them I satisfied myself that they gave no advantage and were rather troublesome to use, and I closed the topic.

Now, apparently, there is a "new round at a higher level": neural networks have become bigger and more sophisticated, and they are being shoved in wherever possible.

True, Artificial Intelligence is still a long way off. I like the definition "Intelligence is the ability to solve non-standard problems using non-standard methods". And modern neural networks do not reach such a level. So far they are at the level of flexible expert systems.

However, neural networks have taken the first step towards Intelligence: they are able to find regularities in input data that were not originally embedded in it, albeit without understanding them in the slightest. Let's see what all this adds up to. So far, I don't see any particular advantage in using neural networks. Every text they write has to be proofread, which is not much better than writing the same text from scratch. Besides, any coursework, term paper, or thesis requires references to sources, and the more the better (it is also a good "excuse" against accusations of plagiarism); as a result, writing up and revising the work will take no less time than without neural networks.

So for me - "Interesting, but no more. Let's keep watching."

 
Georgiy Merts #:

... "Intelligence is the ability to solve non-standard problems using non-standard methods" ....

I have yet to meet a single customer ordering solutions who asks for non-standard methods...

More and more people prefer optimal solutions...

As it seems,... if we consider Practice as the criterion of truth,... people see efficiency (optimality) as an indicator of Intellect,... rather than the non-standardness of its solutions....

 
onceagain #:

I have yet to meet a single customer ordering solutions who asks for non-standard methods...

More and more people prefer optimal solutions...

As it seems,... if we consider Practice as the criterion of truth,... people see efficiency (optimality) as an indicator of Intellect,... rather than the non-standardness of its solutions....

For some reason, people try to define AI as an omnipotent being, a member of the elite club of "What? Where? When?" experts, which not only built a nuclear bomb in its garage but also flew to the Moon on its own blades.

Meanwhile, a janitor who works out in his head how to prioritise spending between an upcoming anniversary and a trip to a paid clinic is, apparently, not intelligent.

 
onceagain #:

I have yet to meet a single customer ordering solutions who asks for non-standard methods...

More and more people prefer optimal solutions...

As it seems,... if we consider Practice as the criterion of truth,... people see efficiency (optimality) as an indicator of Intellect,... rather than the non-standardness of its solutions....

What does this have to do with preferences?

I thought we were talking about the definition of Intelligence.

Optimality is no indicator of intelligence at all. Bacteria, for example, have existed on Earth for billions of years and are optimal for their reproductive tasks, but are they Intelligent?

 
A new cycle of discussing what intelligence is has begun, it's wonderful!
 
Let's separate the concepts of "intelligence" and "artificial intelligence". They are absolutely different things in essence and in form. AI is a statistical mirror, and the principle of its work can be explained in simple terms to anyone; but the principle of intelligence cannot be explained to anyone, because no one knows how it works or by what laws it develops. Natural intelligence, unlike artificial intelligence, is a mystery shrouded in darkness.
 
Peter Konow #:
Natural intelligence, unlike artificial intelligence, is a mystery.

When text-based AI was being built, it was trained on the task of guessing the next word.

For example:

"A hedgehog in the forest ..." The AI thinks and writes: "walks."

But when it turned out that the AI could produce an entire story and understand the context of queries, even its creators were surprised.

This suggests that they've taken a step towards unravelling the Intelligence itself.

The fact that we don't see a full-blown AI is because of the limitation of tokens.

If we model all the types of human memory (there are five of them, each with its own characteristics) and apply them to a machine, we may well get a digital personality after a while, provided we don't reset the tokens. IMHO

We may have to prescribe some further restrictions, balancing towards the "golden mean" in many respects, and even allow the "over-AI" itself to create such restrictions for a quick prototype test.

To put it simply, AI technology has been created, and full-fledged intelligence and reason are not far off.
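The next-word training objective described above can be illustrated with a toy bigram model: a tiny statistical stand-in for what a real language model learns at vastly larger scale. The corpus and function names here are invented purely for illustration.

```python
from collections import Counter, defaultdict

# A made-up miniature "training corpus" for illustration only.
corpus = (
    "a hedgehog walks in the forest . "
    "a hedgehog walks slowly . "
    "a fox runs in the forest ."
).split()

# Count how often each word follows each preceding word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Greedily pick the continuation seen most often after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("hedgehog"))  # "walks" is the most frequent continuation
```

A real LLM replaces these frequency tables with a neural network conditioned on a long context window, but the training signal is the same in spirit: given the text so far, guess the next token.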

 
Vitaliy Kuznetsov #:

When text-based AI was being built, it was trained on the task of guessing the next word.

For example:

"A hedgehog in the forest ..." The AI thinks and writes: "walks."

But when it turned out that the AI could produce an entire story and understand the context of queries, even its creators were surprised.

This suggests that they've taken a step towards unravelling Intelligence itself.

The fact that we don't see a full-fledged AI is because of the limitations of tokens.

If we model all the types of human memory (there are five of them, each with its own characteristics) and apply them to a machine, we may well get a digital personality after a while, provided we don't reset the tokens. IMHO

We may have to prescribe some further restrictions, balancing towards the "golden mean" in many respects, and even allow the "over-AI" itself to create such restrictions for a quick prototype test.

Simply put, AI technology has been created, and full-fledged intelligence and reason are not far off.

Let's assume you are right. Let's assume for a moment that the statistical approach is sufficient and a full-fledged AI will soon be created. Now imagine a huge data centre with a supercomputer that, non-stop, solves absolutely any task set before it by human operators. They load incredible amounts of data into it, because the more tasks there are to solve, the more data is required to analyse them and determine their conditions; it digests the data and gives an answer.

There is no doubt that such a supercomputer involves enormous computing power, that easy tasks will not be assigned to it, and that everything will therefore cost a great deal. From a financial point of view, the result has to pay the electricity bill, and the AI itself has to be competitive with humans and surpass them on various intellectual measures. In this situation, AI has no right to be commercially unprofitable or to solve problems worse than people do.

However, humans, whatever we say about them, are extremely efficient at extracting data from the world around them and using it to solve any task. They are "universal." People "build" intelligence simply by gathering in groups, sharing experience, knowledge and observations, and giving birth to the most original approaches and solutions. And since the computer itself does not interact with reality, the data for its work will continue to be collected and filtered by humans. But that is how it all works already. Honestly, I don't see what's going to change dramatically.
 
Here are a few bottlenecks of AI development (under any implementation technology):

1. Financial payback. The AI's "life activity" must constantly pay for itself, because it requires significant energy and maintenance; inefficient use of time and computing resources is fatal to it. This creates a requirement that the AI be constantly useful, and it rules out an autonomous existence. In this position, the AI is tied by humans to solving their tasks, first of all to pay the bills for its own operation.

2. Energy dependence. AI can exist (learn, work, develop) only with a continuous supply of electricity. The lack of biological regeneration also means, at first, dependence on periodic maintenance and, beyond that, on the industry producing spare parts. Since production is decentralised and scattered around the world, parts travel along complex logistical paths, leaving the AI vulnerable to delays and costs. The bottleneck here is AI's dependence on human systems for power supply, manufacturing, delivery and equipment maintenance, as well as its sensitivity to failures and to the "human factor" in these processes.

3. "Data Dependency". The source of tasks and data for AI is humans. Being under the strict requirement to recoup the financial costs of its work, AI is forced to process only the data relevant to the tasks, which is easier and cheaper to obtain from people than in the process of its own search and collection. The area of available data is limited by the unstratified Internet space filled with the flow of human activity, and even having the ability of free search, AI is limited by the volume and quality of information in text and video formats, knowing nothing about the reality with which people interact.
In other words, the Internet does not contain high-quality and structured data for obtaining good solutions to AI tasks, and for a worthy result requires careful work of operators, painstakingly preparing the actual input parameters and sifting out the husks.

4. Goal-setting. Lacking natural interaction with reality and integration into a living ecosystem, AI's motivation and goal-setting are not driven by instincts or mental activity, and even when artificially replicated they are devoid of practical meaning for an inert, externally dependent system. AI does not decide its own survival or determine its own goals, as this has no practical meaning for the humans serving it. A thinking being deprived of choice and will cannot evolve safely.

This is not all that can be said on this topic.