Artificial Intelligence 2020 - is there progress? - page 36

 
Mihail Marchukajtes:

You are absolutely right. Machines beat us hands down in accuracy, calculation speed, storage capacity and much more, and in narrow tasks they are not torn by choice (a side effect of intelligence), they are straightforward, and their use in narrow tasks only benefits man. But here we were talking about the intellectualization of machines, which is achievable, but not with existing technology. Only when the material created by the Chinese becomes widespread, along with methods of training similar physical structures, will we arrive at the series "Westworld" - not before...

The show is top-notch, by the way. I RECOMMEND!!!!!!

 
Peter Konow:
With all due respect to the achievements and discoveries of the Chinese, understanding the essence of intelligence is a by-product of self-awareness, which has little to do with technology. Modelling intelligence is more than training any NS, even the most sophisticated one. We only create separate functions of intelligence, but there is no integrity, because there is no concept of it. There are some general definitions and hypotheses of its workings, but no actual "blueprints"; hence we build nothing but fragmentary models of its discrete phenomena, from which we try to assemble something. I think the approach is wrong. We need to start with a blueprint of the WHOLE intelligence as a whole.
For me, the important thing would be 100% modelling of the processes going on in the biological cloud of neurons. Well, our generation is unlucky. We don't live in this era. But we will be pioneers and founders of it. :-)
 
Mihail Marchukajtes:

Here we go again, for those who missed it. It's already 29 hits and I'm on top :-)

What is this place?

 
Evgeniy Zhdan:
Does Yandex's Alice count?

No - skills for Alice can be written by anyone; no programming knowledge is necessary. There was (is there still?) even a competition for the best skill.

How to create a skill for Alice from scratch — Yandex Academy
  • 2019.07.05
  • academy.yandex.ru
With Yandex's voice assistant you can already check the weather, build routes and control a smart home. But Alice's capabilities can be extended further: for example, by creating a skill for ordering food, or a quest game about running a state. Any user can create and publish a skill using the Yandex.Dialogs platform. To do this, you need to...
 

Well, that's... 53 hits already. You spoil me, colleagues.

I think it's an indicator of those who are really interested in the topic. Not that many in the general population to be honest :-)

 
On the subject of "Intelligence":

Intelligence is a system of multidimensional processing of information gathered into objects. (imho).

Consciousness reflects reality in two basic models - objective and subjective. The reflection is "pseudo-objective": in the objective model, consciousness tends to maintain clear boundaries of meaning and precise systems of measurement, while in the subjective model of the world it generalises and evaluates everything approximately, mixing in emotions and attitudes. Models of objective world systems can be said to be "encased" in a subjective shell, and in some cases there is nothing but the shell.

Modern programming offers no methods for quickly describing a set of objects with their connections and regularities, and modern AI does not "know" anything and cannot judge the world. Its lot is to pick out an invariant in a query - a person's face, a road sign or a phrase - and give a predetermined response to it.

Dmitry Muromtsev (head of ITMO's International Laboratory for Intelligent Information Processing and Semantic Technologies and head of the IPM department) takes the right approach to the question of creating conversational AI. The "ontological modelling" the article talks about (link on the front page) is indeed (imho) the key to the solution - but what does it mean? It is a modelling of being.

What does that mean? How does Being (the infinite World around us, which we have been learning since childhood by all available means and in all its available variety) relate to artificial intelligence? Is it technically possible to describe a human (or superhuman) volume of knowledge and experience and put it into a database? And most importantly, why?

I will say that I agree with this route of thought: after a stage of modelling the system of human knowledge, we will move on to implementing the program of multidimensional processing (which is the AI). But the information should be gathered into objects that differ strikingly in format from OOP objects - far more complex and richer - and are assembled internally from "proto-blocks": parameters, states, forms, events, processes and so on. The proto-blocks need to be generalised and sorted so that they can be quickly assembled into templates and instances, combined along the way into complex systems and, further, into multiple classified hierarchies.
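The proto-block idea above can be sketched in code. This is a toy illustration only - the names (`ProtoObject`, `Parameter`, `State`) are my assumptions, not an existing design - showing an object assembled from typed components and nested into larger systems:

```python
from dataclasses import dataclass, field

@dataclass
class Parameter:
    name: str
    value: float
    unit: str

@dataclass
class State:
    name: str
    active: bool = False

@dataclass
class ProtoObject:
    """An object built from proto-blocks, nestable into object-systems."""
    name: str
    parameters: dict = field(default_factory=dict)
    states: dict = field(default_factory=dict)
    parts: list = field(default_factory=list)  # nested ProtoObjects

    def add_parameter(self, p: Parameter):
        self.parameters[p.name] = p

    def add_state(self, s: State):
        self.states[s.name] = s

    def find(self, name):
        """Search this object and its parts for a named sub-object."""
        if self.name == name:
            return self
        for part in self.parts:
            found = part.find(name)
            if found:
                return found
        return None

# Assemble an instance and include it in a larger system.
kettle = ProtoObject("kettle")
kettle.add_parameter(Parameter("water_temperature", 100.0, "C"))
kettle.add_state(State("boiling", active=True))
kitchen = ProtoObject("kitchen", parts=[kettle])
print(kitchen.find("kettle").states["boiling"].active)  # True
```

The point of the sketch is only that such an object carries parameters, states and parts as first-class data, so a processing engine could query and recombine them, unlike a plain OOP class whose structure is fixed at compile time.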

Dmitry Muromtsev makes the classic mistake of a technical expert: he chooses predetermined standards and methods of solution and does not reconcile his views with the philosophical concept (which begs to be considered). He talks about the "ontology languages" applied in the industry, but does not ask the question: "Can the true nature of objects be described by them?" To what extent are "knowledge graphs" convenient and sufficient for "cognitive" AI processing? Most likely they are insufficient and unsuitable altogether - new tools need to be created.
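For readers unfamiliar with the term: a knowledge graph stores facts as (subject, predicate, object) triples. The minimal sketch below uses an invented vocabulary ("is_a", "has", "enable") rather than a standard ontology language such as OWL, just to show what pattern-matching over such a graph looks like - and why matching alone is not yet judgment:

```python
# A toy triple store: each fact is one edge of a knowledge graph.
triples = {
    ("cow", "is_a", "mammal"),
    ("sparrow", "is_a", "bird"),
    ("bird", "has", "wings"),
    ("wings", "enable", "flight"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in sorted(triples)
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# The graph can answer "what enables flight?", but it cannot by itself
# reason about WHY cows don't fly - that requires inference on top.
print(query(obj="flight"))
```

The design limitation is visible immediately: the store retrieves stated facts but holds no mechanism for deriving unstated ones, which is exactly the sufficiency question raised above.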

I am convinced that the bridge to AI is a new model of the Object, but this is only the beginning...
 
Modern AI works with knowledge in a book format - by tying together scraps of text - while future AI will have to work with parametric systems: actual models of real-world objects and their derivatives that interact with and mutually include each other. The "bookish" knowledge format is fundamentally different from the systemic, mathematical one. The knowledge machine will acquire an engine that will (1) "push" its mechanism from within, (2) retrieve, process and add new data in real time, and (3) construct new object-systems.
 

The problem is primarily fundamental - the lack of a definition of AI. Digging deeper, such problems usually arise when one is not logically disciplined: one relies on current knowledge, does not question it, and appeals to the authority of scientists. More often than not, scientists and specialists are polymaths with a weak analytical apparatus. They can talk a lot, write a lot, derive formulas of infinite length, but they are unable to see fundamental errors. For example, once the Big Bang was confirmed in a basic way, all scientific brains now draw formulas under it. They allow distortions of space, put an equals sign between matter and space, between a curve and a straight line, allow wormholes and the rest. Much can be said and written, even logically. But if there is a basic error, and there are no logicians among the scientists, the problem drags on. That is, if AI thinks like a human, there will be no technological revolution to speak of - we will simply clone the average mind, which will consider Einstein a genius and produce endless useless theories and hypotheses. The next revolutionary step would be to create an AI that thinks logically and has the infinite power of modern computers. Then there will be something that will not only talk, but will explain to us what philosophical directions to expect after transhumanism.

There are three types of mind: the erudite, the calculator and the logician. The erudite is Wasserman, the calculator is Perelman. The first cannot calculate; the second does not know what the dots on the flag of Brazil mean. The first says that the topology of the universe is a dodecahedron or a flat torus; the second runs off to derive formulas. Only the logician separates space from matter, defines the properties of both, throws all the dodecahedrons out of his thinking about physics as unnecessary, and goes on working. And this is no mere analogy of "thinkers": real RAS physicists really do not see the difference between space and matter in space, and hence allow space warping, wormholes, the finiteness or closedness of the universe and so on. And the more serious or enthusiastic the scientist's face, the less logically disciplined he or she is, and the more he or she allows himself to "allow".

Analytical efficiency comes from combining erudition, calculation and logic. First of all, one must define the concepts.

As far as I remember, on the Internet intelligence is defined as the ability to think, a feature of the psyche, the processing of various kinds of information, and so on.

First of all, it is necessary to identify the main feature of intelligence: the ability to work without all the necessary sensors and measuring instruments. For example, to determine the temperature range of water from a photo (a photo of a kettle with boiling water). To have a sensor and measure the temperature is simply to obtain data (knowledge). To determine the temperature without having a sensor is to use intelligence.

Thus intelligence is the ability to process information without the use of special knowledge and measuring instruments.
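The kettle example can be made concrete. The sketch below is a deliberately crude illustration of "measuring without an instrument": it maps indirect visual cues (which, in a real system, a vision model would have to extract from the photo) to a temperature range. The cue names and ranges are my assumptions for the example:

```python
def estimate_temperature_range(cues):
    """Map observed cues (e.g. from a photo of a kettle) to a rough °C range."""
    if "ice" in cues:
        return (-10, 0)
    if "rolling_boil" in cues or "whistling" in cues:
        return (95, 100)
    if "steam" in cues:
        return (70, 100)   # steaming, but not necessarily at a full boil
    if "condensation_on_glass" in cues:
        return (40, 70)
    return (0, 40)         # no thermal cues: assume roughly ambient

print(estimate_temperature_range({"steam", "whistling"}))  # (95, 100)
```

A sensor would return one exact number; inference from cues returns a range with uncertainty - which is precisely the trade-off the definition above describes.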

The second feature of intelligence is finding the shortest path to a goal. That is, if you have a sensor, why waste computing power on an analytical apparatus - just plug it in and measure. So the second feature of intelligence is using "other people's" labour to solve a problem. Vasya has been studying all year, absorbing knowledge; he sits in the exam, staring at the ceiling as he recalls the answer to the last question. Petya has been fooling around all year, and copied from Vasya while he was remembering. Both solved the problem almost perfectly. The knowledge itself was of no use to them in life, but Petya saved a lot of time for pursuing his own goals.

The third feature of intelligence is independence from the goal. Unlike a human, who is subject to basic instincts and needs, the intellect is only a tool, not an autonomous unit. It can be made autonomous by adding the goal of being. Then the entire work of the intellect becomes self-directed, because it constantly pursues the goal of staying "switched on" - in simple terms, "alive". Hence the AI danger problem: if someone creates an analytic-logical apparatus and gives it a basic security module whose goal is to exist at all times, such an AI will look for ways to achieve that goal and will classify humans - the primary controlling link in the process - as a danger.
But this is not mandatory for the intelligence function. Thus voice recognition is also AI - a small part of it.

Hence, for an AI to answer the question of why cows don't fly, it must at least distinguish between kinds of answers: complete - "because cows physiologically lack the organs for flight"; incomplete - "because a cow is not a bird"; evasive - "because it doesn't need to"; humorous - "Darwin forbade it"; and so on. And choosing among the kinds of answer inevitably means classifying the character of the answer - and that is already a sign of personality.

Fundamentally there are two ways to create AI:

1) Continuous learning - building up the knowledge base with subsequent correction of the information in memory.
2) Logical delta: a proto-quantum sweep of the universe, from field matter and particles up to the complex structures of molecules, matter, biology and sociology - breaking it all down into one big table (I saw an article about this somewhere, but can't remember where) - and feeding that whole table into a neural network. The more processing power there is, the faster the network will independently learn the world and everything humanity has not yet reached, predicting models and technologies to solve any problem - from a formula for a coronavirus vaccine, to weather forecasting, to the development of gravitational propulsion. In other words, there will be nothing left to teach it; the AI will solve any problem within the limits of physical laws - the main thing is to formulate the problem for it correctly.

Development is currently proceeding along the first, sluggish path; as for the second, if it exists anywhere, it is obviously not being advertised.

 
Ivan Butko:

...

First of all - thank you very much for your extended and considered opinion - it contains a lot of interesting and original views and is one of the best posts in the thread.

Secondly: you are clearly a humanities person and try to look at the AI problem from all possible angles; the moral and existential ones are very nice, but the technical ones are inaccurate.

And so:

1. The definition of AI exists: it is a system of multidimensional processing of information gathered into Objects. I stress - exactly Objects. Why is this important? Because everything the human mind deals with has one format - the Object. Exactly how a human processes these objects is a second question, and many tools are available: classification, computation, modelling, processing of values, parameters and properties, forecasting, generalisation, extrapolation and interpolation, construction of hierarchical structures and logical connections, and many others. In other words, it turns out that from the technical point of view there is no "magic" in the activity of the intellect - it is just the work of a complex functional with environmental objects, which it "reflects" in itself.

2. You say that if we copy the average mind we won't get a technological revolution - that is not so. The technological revolution consists in the complete prosthetics of human labour, both physical and mental; what follows from it - and the consequences for the whole world will be dramatic - is a question from another area. What matters is that AI brings about a world revolution in any case.

3. From a technical point of view, replicating an ordinary, average intelligence is far more difficult than creating an incredibly powerful computational intelligent machine devoid of experience and feeling. In addition to intellect, the average person has a complex psyche whose world is incomprehensible to us and hence cannot be reproduced. It is impossible to accidentally add to an AI something we cannot understand: functionality can be written, but a spiritual world cannot. Besides, it would interfere with the machine's ability to function effectively and build a material paradise for ordinary people). A psyche would reduce AI performance and efficiency, increase problem-solving time and error in the results, and most importantly, it would not pay off commercially - so there is no need to recreate it.)

4. There is no practical sense in creating an AI that is "independent" of the goal (it probably could not function anyway). What is needed is a machine intended to solve a wide range of problems - not the personality of an unemployed individual in a mid-life crisis, after a divorce, seeking solace in Buddhism, onto whom we then pile the solution of the world's problems. The aim of creating AI is to automate the solution of all possible problems within the rational circle: industrial, domestic, scientific and perhaps even political. Such AI will undoubtedly lead to an industrial revolution. I stress: AI will forever (unconditionally) remain dependent on human goals and will exist only in the role of a mega-powerful "calculator". Its own goal-setting, self-awareness and spiritual quests will NEVER be reproduced in a machine, as humans are incapable of understanding and algorithmising them. The rest of the opinions on this subject are mere philistine fantasies.

5. The question "why don't cows fly" is a test for modern AI. From the next generation on, it must "know" the objects, phenomena and laws of the physical world and "know how" to navigate them. Being able to joke and speculate about them will, unfortunately, come even later. Moreover, AI's humour and "demagoguery" about the world will have to rest on calculations and computations rather than prepared texts. That is, AI should not be "taught" from books and articles; its work must be algorithmic at the level of parametric systematisation and the formulation of calculations, while the tone of the answer (humorous, philistine or scientific) must be obtained by processing the meaning in the context of the situation or dialogue.
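As a toy example of "an answer from calculation rather than a prepared text", the cow question can be reduced to parameters: compare the lift an animal's wing area could generate against its weight. The simplified lift formula is standard aerodynamics, but the speed, density and mass figures are illustrative assumptions:

```python
# Simplified aerodynamic lift: L = 0.5 * rho * v^2 * A * Cl
def max_lift_newtons(wing_area_m2, speed_ms=15.0, air_density=1.2, cl=1.0):
    return 0.5 * air_density * speed_ms**2 * wing_area_m2 * cl

def can_fly(mass_kg, wing_area_m2):
    """Can the available lift at a plausible speed support the body weight?"""
    weight = mass_kg * 9.81
    return max_lift_newtons(wing_area_m2) >= weight

print(can_fly(0.03, 0.01))   # sparrow-like parameters: True
print(can_fly(600.0, 0.0))   # a cow has no wing area at all: False
```

A parametric answer of this kind ("the lift deficit is N newtons") can then be wrapped in a humorous or scientific tone at the phrasing stage, which is the separation of computation from presentation argued for above.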


The conclusion from all of the above: AI is developable and buildable, given the right approach and the limited objectives that go with it. It is possible to create a conversational AI with the functionality of meaning analysis and calculation of results based on processing objects as parametric systems, but that takes long to explain.)))

 
Maxim Fedorov, Vice President for Artificial Intelligence and Mathematical Modelling at Skoltech, on the development of AI from ethical, legal and technological perspectives.

"At a high level, various committees are discussing the problems of strong AI - and there is no strong AI, nor will there be for the next 50-100 years (and maybe ever). The problem is that by discussing dangers that do not exist and will not exist in the near future, we are missing the real threats. It is important to understand what AI is and to develop a clear set of ethics and rules. If you follow them, you get benefit; if you don't, you get harm.
Reason: