AI 2023. Meet ChatGPT. - page 41

 
Georgiy Merts #:

This is not a "thinking inertia problem" but a "wrong condition problem". Why is it surprising that the answer to such a problem can be anything at all?

In particular, the given answer is categorically wrong, because both people were on ONE side of the river (since the opposite side is not specified), and person B could not have got into the boat after person A crossed to the other side.

The point of the problem is to check whether the test subject will ask about the position of the people relative to the river.
It is clear that if both people are on the same bank, the problem has no solution; if they are on opposite banks, the solution is elementary.
And I did not bother to specify that data in the problem.

This problem and the one about the bottle are problems of inertia of thinking; people suffer from it very often, as was noticed today))) It's neither bad nor good - it's just the way it is.
I haven't noticed any inertia in the AI's thinking yet, but its answer to the boat problem makes no sense; the bottle problem it solved perfectly.
As we can see, people have difficulties with both problems, and the AI with only one of them.
We need to ask it more non-trivial problems like this.

And, by the way, the answer cannot be just anything - there are only two possible answers: 1) no solution; 2) regeneration.
 
Andrey Dik #:

The point of the problem is to check whether the test subject will ask about the position of the people relative to the river.
It is clear that if both people are on the same bank, the problem has no solution; if they are on opposite banks, the solution is elementary.
And I did not bother to specify that data in the problem.

Why should "the point of the problem be to test whether the subject will ask the question"?

The point of the problem is the answer to the question in the condition. And it is unambiguous: "in the general case there is no solution".

Otherwise, we should also ask whether the river is shallow - perhaps it can be crossed without swimming, by wading. And whether there is a bridge nearby, because then we could cross it (we wouldn't even need a boat). And whether there is any water in the river at all (in which case even person A would not be able to cross to the other side by boat).

And other "implicit" conditions. Do you think the test subject should wonder about all of them?

Why must we accept the implicit condition that "the people were on different sides of the river", but cannot accept the implicit condition that "there was another boat nearby"?

The problem deliberately asks a question with a flawed premise, which means that any answer can be considered correct (or incorrect, as we wish).

 
The most meaningful thing you can do with this AI is study the model it is based on and think about the possibility of using it (the model, not the AI itself) for trading. It is based on some kind of probabilistic language model, which makes it doubtful that the discussion can be developed in this direction)
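To make the phrase "probabilistic language model" concrete, here is a toy bigram sketch: the model counts which word follows which and turns the counts into next-word probabilities. This is only an illustration of the general principle - ChatGPT itself is a neural network, not a count table, and the corpus below is made up for the example.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word-pair frequencies and normalize them into probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    model = {}
    for prev, counter in counts.items():
        total = sum(counter.values())
        model[prev] = {word: c / total for word, c in counter.items()}
    return model

def most_likely_next(model, word):
    """Return the highest-probability continuation of `word`, or None."""
    if word not in model:
        return None
    return max(model[word], key=model[word].get)

# Hypothetical mini-corpus, just to exercise the counting logic.
corpus = [
    "the boat crosses the river",
    "the boat carries one person",
    "the boat needs two oars",
]
model = train_bigram(corpus)
print(most_likely_next(model, "the"))   # "boat" follows "the" 3 times out of 4
```

Such a model only predicts plausible continuations of text; whether that predictive machinery is transferable to trading signals is exactly the open question raised above.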
 
Georgiy Merts #:

Why should "the point of the problem be to test whether the subject will ask the question"?

The point of the problem is the answer to the question in the condition. And it is unambiguous: "in the general case there is no solution".

Otherwise, we should also ask whether the river is shallow - perhaps it can be crossed without swimming, by wading. And whether there is a bridge nearby, because then we could cross it (we wouldn't even need a boat). And whether there is any water in the river at all (in which case even person A would not be able to cross to the other side by boat).

And other "implicit" conditions. Do you think the test subject should wonder about all of them?

Why must we accept the implicit condition that "the people were on different sides of the river", but cannot accept the implicit condition that "there was another boat nearby"?

The problem deliberately asks a question with a flawed premise, which means that any answer can be considered correct (or incorrect, as we wish).

You are wrong. There are a lot of problems with incomplete conditions all around us. This is the difference between a human and a machine: we literally live with incomplete input data, and where machines give up under such conditions, a human has to make a decision.

I didn't invent the boat problem, it's a very old problem.

 
Andrey Dik #:

...

I didn't invent the boat problem, it's a very old problem.

That's the point. To really test the AI, you need new problems not described in books (which have all got into the model). Something non-standard.
 
Peter Konow #:
That's the point. To really test the AI, you need new problems not described in books (which have all got into the model). Something non-standard.

The AI has not solved this old problem either)))) The conditions are not specified, and its answer is nonsense.

 
Andrey Dik #:

The AI has not solved this old problem either)))) The conditions are not specified, and its answer is nonsense.

And how does it happen that it solves one problem brilliantly and another horribly?)))
 

It is very difficult to understand what logic the AI's answers are based on, but in the answer to my question it tried to list the maximum number of probable events, among which it was very difficult to identify the true cause of the problem I described.

Regards, Vladimir.

 
Andrey Dik #:

You are wrong. There are a lot of problems with incomplete conditions all around us. This is the difference between a human and a machine: we literally live with incomplete input data, and where machines give up under such conditions, a human has to make a decision.

I didn't invent the boat problem, it's a very old problem.

I don't mind that there are a lot of "incomplete problems"!

The point is that this problem is presented as one with COMPLETE conditions, and only then does it turn out that there are implicit conditions and the problem is actually one with incomplete conditions. If the statement had included a clause saying "the conditions are not complete" from the start, the solution would look quite different. And the fact that the problem is old doesn't make it any less stupid.

For me it is much better to exercise ingenuity on other problems, such as, say, another famous one: "You are on the seashore. In front of you is a ten-ton granite block, a parallelepiped 30 metres long and three metres high. Around you there is only the sandy shore and the sea. You need to turn the block onto any other face. How can you do it using improvised means?"

 
MrBrooklin #:

It is very difficult to understand what logic the AI's answers are based on, but in the answer to my question it tried to list the maximum number of probable events, among which it was very difficult to identify the true cause of the problem I described.

With respect, Vladimir.

It seems to me that in answering questions like yours, it is easier for the AI to show its best side. The model was trained primarily on scientific articles, books and textbooks for engineers.

It's hard to separate how much of that answer is the AI's "intelligence" and how much is human, but you can reasonably expect the AI to be at its best on such questions.

Try asking it a problem on the topic of your question. If it "understands" the topic, it will be able to solve it. Here's a test:)