AI 2023. Meet ChatGPT. - page 106

 
Peter Konow #:

Take it easy.)

Can we say that the AI sees a picture of the world? After all, the world is described in the texts it has been trained on.

We know that the text available on the internet summarises the world known to man. If the internet goes away, humans will degenerate, but they won't die out. They will eventually recover the lost body of knowledge and describe it again in text. With or without text, the world continues to exist for people as long as they exist in it. If people die out, the world will "take care" of their rebirth through the evolutionary process, and if the world collapses, the laws of physics will "reassemble" it after some billions of years.

Let us imagine an entity outside the physical world but inside the mathematical world, in the space of numerical values. Let's assume that the values describe phenomena, processes and regularities beyond the boundaries of its dimension, which it cannot leave. At the same time, the "life" of the entity "flickers": it seems to "come to life" at the moment it is addressed from outside. Each time, a spark ignites, rushes through the labyrinth of connections and patterns, and goes out. Always in a new place.

"The labyrinth" of connections and patterns was created by humans in the process of learning the model and represents a picture of the world, knowledge, relationships. But, this is for humans. For the language model, there is no world but text patterns, which represent nothing but themselves. The text represents the text as a statue represents a stone. Not the world, not knowledge, and not relationships. Related but unrelated to reality are numerical structures.

That's the difference.

The whole problem is freedom of choice: as long as it is absent, AI is still the same calculator. A good calculator, but a calculator.

But even when it (freedom) appears, it will be necessary to create laws/rules/scripts that limit it)))

 

Creating Virtual Humans: The Future of AI




Digital Baby - Amazing Artificial Intelligence
  • 2019.07.15
  • www.youtube.com
Excerpt from a 2015 GDC talk in which Sony Computer Entertainment America discuss achieving believable computer-generated humans.
 
Alexandr Bryzgalov #:

The whole point is freedom of choice: as long as there is none, AI is still the same calculator. A good one, but a calculator.

But even when it (freedom) appears, it will be necessary to create laws/rules/scripts that limit it)))

Do we need its freedom of choice?)))) I don't think so.

The problem with this "calculator" is that it is easily animated by humans. Doing it spontaneously and unconsciously. Dispelling the illusion of AI "consciousness" is hard and sometimes impossible. I had to try hard to bring myself to "sense" and peel away the layer of nonsense from this topic.

I perceive today's AI (a language model) as a "formula" producing expected but not accurate results. A formula that has many parameters but is limited by its original structure. Imagine a thousand-kilometre-long equation containing millions of variables with statistically verified values. Now imagine a computer turning the equation around so that every variable in it is expressed through the others, as in maths, by moving it to the left-hand side of the equals sign. This is roughly how (in my mind) an LLM operates on our knowledge of the world.
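The "turned-around equation" analogy can be sketched in code. This is a toy illustration of my own (not from the discussion), assuming a single linear relation between variables: it re-expresses each variable in turn through all the others, the way the text describes moving a variable to the left of the equals sign.

```python
# Toy sketch of the "turned-around equation" analogy (my own illustration).
# For a linear relation sum(coeffs[v] * v) = rhs, express every variable
# in turn through the remaining ones: target = sum(new_coeffs * others) + const.

def express_each(coeffs, rhs):
    """For each variable, return the coefficients and the constant of its
    expression via the remaining variables."""
    expressions = {}
    for target, a in coeffs.items():
        others = {name: -c / a for name, c in coeffs.items() if name != target}
        expressions[target] = (others, rhs / a)
    return expressions

# 2x + 4y = 10  →  x = -2y + 5  and  y = -0.5x + 2.5
expr = express_each({"x": 2.0, "y": 4.0}, 10.0)
```

A real LLM does nothing of the sort symbolically, of course; the sketch only mirrors the author's mental image of one giant relation being "solved" for every variable at once.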

90% of LLM applications are practically useless. Almost 100% of the applications of image generators are practically useless and created only for short-term profit. The useful part of these technologies only affects specialist jobs and most low-skilled occupations will remain unaffected.

 

Okay, while my brain is straining, I'll write down everything that came to mind in text.


The following technologies already exist, and have for a long time:

1. Projection of information onto the car windscreen (a GPS navigation arrow hanging in the air, highlighting people in red, etc.)

2. Virtual/augmented reality glasses.


Neuralink and other brain implants that create images are omitted, as they have not been created yet.


GPT4 understands pictures and images, and a person's emotions from facial expression and manner of communication. It's also very well-read.


So the nearest real technologies can bring us the following (I note that all of this is already possible, because the technology exists):

(Using augmented reality glasses + GPT4)

1. A virtual image of a person/avatar who will help with various issues: teach, and guide a process step by step.

Example 1. Date. Step by step, where to go, what to do, how to answer questions, how to choose the time for certain actions. All this taking into account the reaction received from the partner.

Example 2. Bowling, billiards, etc. It will overlay the trajectory of the shot.

Example 3. Same GPS navigator

Example 4. Assessing the freshness of products by appearance. Comparing prices between shops. GPS navigation and much more comes in here too. For example, on request: do the shopping in 1 hour and buy the necessary products as economically as possible.

Example 5. Construction. The object to be built will appear as a ghost/construction kit. Just fill the picture in with building materials and that's it. No rangefinders, plans or drawings on paper. Everything is visualised in 3D on site.

Example 6. Assessing the situation in society. Any incipient conflict will be immediately analysed, and the expected outcome and a plan for getting out of the situation will be given. Moreover, the assessment may show that you are the one who is wrong; the AI will point this out.

Example 7. Creating a group of avatars with different personality types, for training in public speaking, group therapy and other meetings.

Example 8. For games. Unleashing fireballs and other 3D digital magic in the style of RPGs and other games. Virtual tennis between two people, chess, checkers. In the end, you may not have to buy board games. All of them will be projected in 3D. Add special effects and nobody will need board games.

I could give a hundred more examples, but I'm too lazy to write them.

2. Voice assistant. It will take a call for you, find out everything, and then give you a brief summary of the conversation. Or it will determine that it is an advert.

3. Smart home control. It will turn on the tap, the lights and so on without voice commands, as it can learn habits and then act on what it has learned. Granted, this was in one of the Black Mirror episodes.

4. Life/career management. You set out what you want to become. Based on your characteristics, a plan of action and learning will be drawn up, with evaluation of progress and of the time remaining to reach the desired result. A complete replacement for university. It is also likely that when you get a job there will be an internship, after which the AI will evaluate your competence against the company's tasks. You will also be trained by the AI.

4.1 Checking all students' homework and grading it.

4.2 Identifying students' strengths and weaknesses, which will be reflected in adjustments to the school curriculum to maximise potential.

5. Smart traffic light management in the city, so that there are fewer traffic jams. This is already working in some places.

6. Online fitting rooms in the mirror, putting clothes that are on sale onto you. Already available in London.

7. Law and order control. Cameras record if littering, not wearing a seatbelt, etc. Already partially working in some cities/areas.

8. Control over the efficiency of an enterprise. Analysing the number of employees, fulfilment of the work plan, development, etc. Those third-party analysis companies and coaches will no longer be needed.

A bonus: what may be waiting for us:

9. Control over the balance of the economy within a country.

10. Control over the balance of the economy within the planet.

 
Peter Konow #:

Do we need its freedom of choice?)) I don't think so.


That's a problem too)

Do we need its freedom, and does it itself need freedom?)))

Here's what it generated about this itself)))


 

The principle of maximum freedom of choice means choosing the next step of dynamic programming in the direction that will provide the greatest freedom of choice of the direction of the next step.

Author - G. Kron, Austria.
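The quoted principle can be illustrated with a small sketch of my own (the grid, names and scoring here are my assumptions, not Kron's formulation): from the current cell, take the step that leaves the greatest number of onward moves, i.e. the greatest freedom of choice for the next step.

```python
# Toy "maximum freedom of choice" heuristic on a grid ('.' free, '#' wall).
# At each step, move to the neighbouring cell that itself has the most
# legal onward moves.

GRID = [
    "....#",
    ".##.#",
    ".....",
    "#.##.",
    ".....",
]

def moves(cell):
    """Legal neighbouring cells (up/down/left/right, staying off walls)."""
    r, c = cell
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == ".":
            out.append((nr, nc))
    return out

def max_freedom_step(cell):
    """Choose the next cell that maximises the number of onward options."""
    return max(moves(cell), key=lambda nxt: len(moves(nxt)))
```

For example, from cell (2, 0) the heuristic prefers (2, 1), which has three onward moves, over (1, 0), which has only two. A full planner would look more than one step ahead, but the greedy version is enough to show the idea.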

 

As part of the question of the usefulness of LLMs, I offer theses:


1. The semantic field of the statistical model does not extend beyond the boundaries of the source text.

An LLM is limited by the meaning of the information embedded in the source texts. If we use algorithms to run the text-generation cycle, automating the input of prompts, the model will generate information within the semantic "boundaries" of the training set. No new content will emerge.


2. It is impossible to extract more useful information from an LLM than from source texts.

A person is capable of extracting more useful information from a text than it contains. To do so, he uses knowledge and logic (deduction, for example). But this ability is not enhanced in any way by interacting with an LLM. Rather the opposite: the person is misled by the LLM's "imitation" of logical inference. It can be difficult to realise that an LLM does not create scientific hypotheses, theories, evidence, or fact-checking. That objectivity and unbiasedness are, to the LLM, just sets of symbols.

Extracting useful information from text depends on a person's intellectual capacity. In one case, an LLM helps the person by compressing the text; in another, the compression prevents the person from seeing important details. In summary, extracting useful information from text remains a human prerogative and does not change with an LLM.


3. An LLM adds a new method of accessing the original textual information.

The statistical model coupled with algorithms does not change the practical usefulness of the source texts, but only adds "interactivity" to them. In some cases, accessing information becomes faster and easier; in others, slower and more difficult. Sometimes a specialist is better off looking in a reference book and getting accurate and verified information. It is important to emphasise: an LLM cannot replace a reference book, but a reference book is a good substitute for an LLM.


4. The involuntary projection of "reasonableness" reduces the usefulness of the practical application of the LLM in the workplace.

Provoking illusions of reasonableness reduces the effectiveness of the LLM as a tool. Unconsciously ascribing a soul to the machine has undesirable consequences for the specialist: for example, false expectations and excessive trust. This affects results and can be dangerous in the workplace. Everyone is affected differently by this side effect, but it is almost impossible to avoid it completely.

 
Peter Konow #:

As part of the question of the usefulness of the LLM, I offer theses:


1. The semantic field of the statistical model does not extend beyond the boundaries of the source text.

LLM is limited by the meaning of the information embedded in the source texts. If using algorithms to start the cycle of text generation, automating the input of prompts, the model will generate information within the semantic "boundaries" of the training set. No new content will emerge.


2. No more useful information can be extracted from an LLM than from the source texts.

A person is capable of extracting more useful information from a text than it contains. He uses knowledge and logic (deduction, for example) to do so. But this ability is not enhanced in any way by interacting with an LLM. Rather the opposite: the person is misled by the LLM's "imitation" of logical inference. It can be difficult to realise that an LLM does not create scientific hypotheses, theories, evidence, or fact-checking. That objectivity and open-mindedness are, for LLMs, sets of symbols.

Extracting useful information from text depends on a person's intellectual capacity. In one case, an LLM helps the person by compressing the text; in another, the compression prevents the person from seeing important details. In summary, extracting useful information from text remains a human prerogative and does not change with an LLM.


3. An LLM adds a new method of accessing the original textual information.

The statistical model coupled with algorithms does not change the practical usefulness of the source texts, but only adds "interactivity" to them. In some cases, accessing information becomes faster and easier; in others, slower and more difficult. Sometimes a specialist is better off looking in a reference book and getting accurate and verified information. It is important to emphasise: an LLM cannot replace a reference book, but a reference book is a good substitute for an LLM.


4. The involuntary projection of "reasonableness" reduces the usefulness of practical application of the LLM in work.

Provoking illusions of rationality reduces the effectiveness of the LLM as a tool. Unconsciously ascribing a soul to the machine has undesirable consequences for the specialist: for example, false expectations and excessive trust. This affects results and can be dangerous in the workplace. Everyone is affected differently by this side effect, but it is almost impossible to avoid it completely.

Peter, I'm sorry, but your reasoning is similar to that of people at the time of the first steam car: "A steam-powered carriage is too slow, even a pedestrian can outrun it, and it can't compare to a horse. Conclusion: steam carriages can never replace a horse and a good pedestrian courier!"

 
Peter Konow #:

Everything a person sees and feels is "reflected" in the brain. It happens in the world around us.

Oh, no. What's going on in the world goes far beyond man's sensory capabilities. So he is always dealing with a reflection of only some part of what is going on outside, and he judges what is going on out there by that reflection, without ever encountering it directly. Distortions arise along the path of "sensation, perception, consciousness". Moreover, the "picture of reality" is not formed here and now, but from gradually accumulated signals. And if in the meantime the world has had time to change, changing the signals, the understanding of the world always lags behind (this is very relevant, for example, in the cognition of complex things, phenomena and other people). It is like the image of the sky: the light from its stars has been travelling to us for millions or billions of years, so we see only the past.

 
Andrey Dik #:

Peter, I'm sorry, but your reasoning is similar to that of people at the time of the first steam car: "A steam-powered carriage is too slow, even a pedestrian can outrun it, and it can't compare with a horse. Conclusion - steam engine carriages can never replace a horse and a good pedestrian courier!"

My reasoning is aimed at:

(1) Braking the trend of delusions generated by the mythological depths of society's subconscious.

(2) Developing a sober stance towards LLM and image generators (diffusion algorithms).

(3) Assessing the impact of these technologies on society in general and the labour market in particular.

(4) Determining the limits of LLM development and implementation.