Artificial Intelligence 2020 - is there progress? - page 42

 
Aleksei Stepanenko:
I get it. It's complicated. It's often hard for a person to make sense of themselves. How can one create a mind?
You can't replicate it artificially. Only within certain, rigid boundaries.
 
Peter Konow:
So, in order to get a correct logical conclusion, you have to add ALL the details to every fact?

1. Some dead people have no heads.
2. Dead people, biologically, are mammals.

3. Therefore, some dead mammals have no heads.

And what is the value of such a conclusion? The reasoning about dead mammals seems absurd. This logic will become even more absurd as the particular properties of the specimen keep being accumulated and transferred, with all their details, into its category niche, which is not adapted to accept all this informational "junk".

Not all, but all the necessary ones, and those are enough.
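
To make the "informational junk" problem above concrete, here is a minimal Python sketch (purely illustrative; every name in it is invented for this example, not taken from any real library). It models a naive reasoner that lifts every particular property observed on individual specimens into the category niche and then emits formally valid "some X are Y" statements.

from collections import defaultdict

# Facts about individual specimens: (name, category, set of observed properties).
specimens = [
    ("specimen_1", "mammal", {"dead", "headless"}),
    ("specimen_2", "mammal", {"alive", "tailed"}),
]

def lift_to_category(facts):
    # Naively transfer every particular property into the category "niche",
    # with no filtering of significant properties from incidental ones.
    niche = defaultdict(set)
    for _, category, properties in facts:
        niche[category] |= properties
    return niche

def some_conclusions(niche):
    # Emit logically valid but often useless "some X are Y" statements.
    return [f"some {category}s are {prop}"
            for category, properties in niche.items()
            for prop in sorted(properties)]

for line in some_conclusions(lift_to_category(specimens)):
    print(line)   # includes "some mammals are headless" - valid, but informational junk

Nothing in these mechanics distinguishes "headless" (an accident of one damaged specimen) from properties that actually characterise the category; that is exactly the missing filter discussed above.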

 
Dmitry Fedoseev:

Not all, but all the necessary ones, and those are enough.

The mathematical engine is unable to distinguish the important/significant properties of a category of objects from the unimportant particularities without human experience, and hence cannot properly generalise the data by filling the category with abstractions rather than details (tails in the mammalian niche).

If you synthesise an abstraction from conclusions based on particular phenomena/facts from many objects, there is no guarantee that this abstraction is not absurd, and if new objects are generated from an absurd abstraction, they will be even more absurd.

Human experience saves us from the progressive schizophrenia of mathematical conclusions about the world).
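
As a purely illustrative aside (the attribute names and helpers below are invented, this is nobody's working system), the point can be sketched in a few lines of Python: an "abstraction" built mechanically from particular specimens, with no notion of which properties are significant, will admit and can generate absurd new objects.

import itertools

# Particular specimens, described by significant and incidental properties alike.
specimens = [
    {"state": "dead",  "headless": True,  "tail": True},   # a damaged dead specimen
    {"state": "alive", "headless": False, "tail": True},
]

def synthesise_abstraction(objects):
    # Collect every value ever observed for every attribute - details and junk included.
    abstraction = {}
    for obj in objects:
        for key, value in obj.items():
            abstraction.setdefault(key, set()).add(value)
    return abstraction

def generate_objects(abstraction):
    # Enumerate every object the abstraction admits (the Cartesian product of observed values).
    keys = sorted(abstraction)
    value_lists = [sorted(abstraction[key], key=repr) for key in keys]
    for combination in itertools.product(*value_lists):
        yield dict(zip(keys, combination))

mammal_abstraction = synthesise_abstraction(specimens)
for obj in generate_objects(mammal_abstraction):
    print(obj)   # one generated object is a living headless mammal - absurd, yet admitted

Each step is formally sound, yet the generated set contains objects no experienced observer would accept; that is the filter human experience supplies.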

 
Dmitry Fedoseev:

Not all, but all the necessary ones, and those are enough.

"Necessary and sufficient" and "necessary but not sufficient" are notions from logic))) Beyond that, it's sophistry)

 
Peter Konow:
The mathematical engine is unable to distinguish the important/significant properties of a category of objects from the unimportant particularities without human experience, and hence cannot properly generalise the data by filling the category with abstractions rather than details (tails in the mammalian niche).

If you synthesise an abstraction from conclusions based on particular phenomena/facts from many objects, there is no guarantee that this abstraction is not absurd, and if new objects are generated from an absurd abstraction, they will be even more absurd.

Human experience saves us from the progressive schizophrenia of mathematical conclusions about the world).

The development of neural networks and ML proceeds by imitating rules, algorithms and whatever else has been found in living organisms. This has always amazed me: making a likeness instead of inventing one. But for some reason it works in certain tasks. A transition of quantity into quality. But it's not about the brain and intelligence))) For now it's at the cellular level.

 
Valeriy Yastremskiy:

The development of neural networks and ML proceeds by imitating rules, algorithms and whatever else has been found in living organisms. This has always amazed me: making a likeness instead of inventing one. But for some reason it works in certain tasks. A transition of quantity into quality. But it's not about the brain and intelligence))) For now it's at the cellular level.

Earlier in the thread I said that neural networks are not enough to create an adequately thinking AI (they are enough for primitive recognition, primitive prediction and primitive classification).

Thinking is a process of another mechanism that we know little about. Logic is invariably present in thinking, but works in multiple ways - empirical experience is often at odds with logic and they "fight". Experience filters out the unsound logical conclusions, selecting the right ones from a stream of straightforward, logically based nonsense.

All in all - the field for research is vast.
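
One crude way to picture that filtering (a sketch under my own assumptions, not a description of any existing system): let a formal engine churn out candidate conclusions by blind combination, then let accumulated observation veto the ones experience marks as junk. In Python:

# Candidate conclusions, as a purely formal engine might produce them.
candidates = [
    "some mammals have no heads",
    "some mammals have tails",
    "dead mammals can reason",
    "mammals are warm-blooded",
]

# "Experience": crude empirical vetoes accumulated from observing the world.
empirical_vetoes = {
    "some mammals have no heads",   # true only of damaged specimens; useless as a generalisation
    "dead mammals can reason",      # contradicts all observation
}

def filter_by_experience(conclusions, vetoes):
    # Keep only the formally derived statements that experience does not reject.
    return [conclusion for conclusion in conclusions if conclusion not in vetoes]

print(filter_by_experience(candidates, empirical_vetoes))
# prints: ['some mammals have tails', 'mammals are warm-blooded']

The sketch says nothing about where the vetoes come from - which is precisely the part we know little about.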

 
Реter Konow:

Earlier in the thread I said that neural networks are not enough to create an adequately thinking AI (they are enough for primitive recognition, primitive prediction and primitive classification).

Thinking is a process of another mechanism that we know little about. Logic is invariably present in thinking, but works in multiple ways - empirical experience is often at odds with logic and they "fight". Experience filters out the unsound logical conclusions, selecting the right ones from a stream of straightforward, logically based nonsense.

All in all - the field for research is vast.

That is why talk and laws about AI ethics are premature and are nothing but a means of increasing demand and making money)

 
Valeriy Yastremskiy:

That is why talk and laws about AI ethics are premature and are nothing but a means of increasing demand and making money).

Absolutely. 100%.

That said, I'm still confident that a leap in AI technology is near. After all, no matter how complex the thinking mechanism is, it can still be researched, understood, systematised and reproduced.
 
Geist:
The lab is MetaTrader5 with the MQL5 language. All the tools you need are there.
You will not find experienced programmers and investors willing to spend their time and money on your father's research.
So your father (or you) will have to master MQL5 to prove something to himself and/or to the world. Or at least to take a real, practical first step towards attracting those very investors and experienced programmers.
Your words suggest that your father has a very superficial notion of AI. I would really like to be wrong.
You have a beautiful, legalistic style of speech. )))
And also, amazing powers of foresight and long-range planning.
 
Peter Konow:
You have a beautiful, legalistic style of speech. )))
Also, amazing powers of foresight and long-range planning.
I couldn't agree more, although Petya hasn't created anything apart from Rainbow. But at least that's something...