Representation of an object in programming. - page 16

 
Peter Konow #:

The goal is to run programme autosynthesis based on a common Object model.

This is probably what is called genetic programming. You can't do without an explicit language description in the form of a BNF grammar or a tree (which is basically the same thing) there, either.
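As an illustration of the kind of explicit language description meant here, a tiny BNF-style grammar can be encoded as a tree and used to randomly derive expressions, which is the usual starting point of genetic programming. This is a hypothetical minimal sketch (the grammar and function names are my own, not from any particular framework):

```python
import random

# A toy BNF-style grammar for arithmetic expressions, encoded as a dict:
# each non-terminal maps to a list of alternative productions (tuples of symbols).
GRAMMAR = {
    "<expr>": [("<expr>", "<op>", "<expr>"), ("<term>",)],
    "<op>":   [("+",), ("-",), ("*",)],
    "<term>": [("x",), ("1",), ("2",)],
}

def derive(symbol, depth=0, max_depth=4):
    """Randomly expand a symbol into a derivation tree (nested lists)."""
    if symbol not in GRAMMAR:          # terminal symbol: return it as-is
        return symbol
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        # Near the depth limit, force the shortest production so derivation terminates.
        options = [min(options, key=len)]
    production = random.choice(options)
    return [derive(s, depth + 1, max_depth) for s in production]

def flatten(tree):
    """Collapse a derivation tree back into a flat expression string."""
    if isinstance(tree, str):
        return tree
    return " ".join(flatten(t) for t in tree)

random.seed(1)
print(flatten(derive("<expr>")))  # prints a randomly derived arithmetic expression
```

A genetic-programming system would then mutate and recombine such derivation trees, which is only possible because the grammar makes the set of valid programs explicit.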

 
Aleksey Nikolayev #:

This is probably what is called genetic programming. There, too, you can't do without a clear language description in the form of a BNF grammar or tree (which is pretty much the same thing).

Today I'll try to describe the steps of synthesizing a simple Label from a pixel "proto-environment", along with a step-by-step scenario of its gradual complication.

Addendum:

The goal is to identify the pattern by which software objects grow more complex, so that this complication can later be automated.

 

Part 4.2

This part of the concept deals with the so-called "Difficulty Pattern" (aka "Developmental Pattern"): a scheme supposedly hidden in the subconscious that serves as an "instruction" for assembling things. Formulating and implementing it is likely to unlock a new algorithmic "Grail" in the form of automatic program synthesis and a next-generation AI engine. The game is worth the candle.

As is tradition, let's announce the original thesis:

  • All Objects exist only in Environments. The life of an Object "by itself" makes sense only as a "museum piece". When thinking about an Object, we have to take into account the Environment of its "birth" and "life activity", i.e. the object environment in which it interacts and fulfils its program (or function).
  • Development is possible only within an Environment. No Object develops or evolves by itself; "extracted" from its Environment, it loses interactivity, because its Event Model ceases to receive signals and its functions (except looped ones) stop.
  • An Object is "born" by structuring. The initial "material" is either (1) a chaotic set of proto-particles in a proto-environment, or (2) a hierarchically classified "menu" of parts of previously assembled functional systems, which at this stage are organised into a new "Meta-structure" of the next level of complexity. Which applies depends on the Object being created and on which Environment is available.
  • The complexity of successive generations of Objects grows in stages. In a simple Environment of primitive objects it is impossible to build a super-complex System by skipping the stages of engineering Objects of intermediate complexity. This rule follows from the simple fact that Objects are built from common "parts", and the gulf in complexity between the simplest and the most complex System does not allow "stepping over" the stages of assembling "parts" of intermediate complexity.
  • The efficiency of structuring an Object depends on how well the "parent" Environment is organised. In other words, it is less efficient to structure Objects by copying parts of working Systems than to assemble them from the sorted content of categories built by classifying those same Systems. That is, before we start structuring something out of something else, the material should be dismantled and sorted, so that the new thing is easier to assemble.
  • The first key mechanism in the structuring of Objects is "Inheritance" - full or partial inheritance of the parameters or functions of one Object by another. For constructing a new Object (unless it is a full copy) it is much more effective to use "blanks" (templates) in which functions/parameters/values are then redefined for each specific Object. Inheritance should be made to "flow" and be abstracted from specific objects. It should be based on some classification Model, replenished with new templates as each new Object is constructed. Such a Model will then serve as an ideal base for structuring - in fact, an ideal development environment. The Inheritance mechanism implements one of the principles of evolutionary development.
  • The second key mechanism in the structuring of Objects is "Tuning" - a method of selecting parameter values of the Object suitable for its task. The principles of "genetic optimisation" can be effective here. Remember that the effectiveness of any parameter values of the Object is determined by its Environment, which must therefore give feedback during testing.
  • The third key mechanism in the structuring of Objects is "Selection" - a method of choosing one instance of an Object from among its copy-variants (differing in functionality, parameters or values) while testing them against the target problem of its existence. In this process the Environment must also provide feedback. Selection likewise implements one of the principles of evolutionary development.
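The three mechanisms above can be sketched in code. What follows is a purely illustrative toy model (all class and function names are my own assumptions, not from any real framework): a template "blank" is inherited and specialised, a parameter is tuned by a crude genetic loop, and the Environment's feedback is used to select the fittest variant:

```python
import random

# --- Inheritance: a "blank" (template) whose parameters are redefined per Object ---
class ObjectTemplate:
    defaults = {"width": 10, "height": 10}

    def __init__(self, **overrides):
        # Inherit the template's parameters, then redefine the specific ones.
        self.params = {**self.defaults, **overrides}

class Label(ObjectTemplate):
    defaults = {**ObjectTemplate.defaults, "text": "", "font_size": 8}

# --- Environment feedback: scores how well a variant fits its target task ---
def environment_feedback(obj, target):
    # Toy fitness: the closer font_size is to the target, the higher the score.
    return -abs(obj.params["font_size"] - target)

# --- Tuning: a crude genetic loop mutating a parameter, guided by feedback ---
def tune(obj, target, generations=50):
    best = obj
    for _ in range(generations):
        mutant = Label(**best.params)
        mutant.params["font_size"] += random.choice([-1, 1])
        if environment_feedback(mutant, target) > environment_feedback(best, target):
            best = mutant                     # keep only improving mutations
    return best

# --- Selection: choose the fittest instance among copy-variants ---
def select(variants, target):
    return max(variants, key=lambda v: environment_feedback(v, target))

random.seed(0)
variants = [tune(Label(font_size=random.randint(4, 20)), target=12)
            for _ in range(5)]
winner = select(variants, target=12)
print(winner.params["font_size"])
```

The point of the sketch is the division of labour: Inheritance supplies the blank, Tuning adjusts its values under Environment feedback, and Selection picks one instance among the tested variants.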


Next (in the next part) we'll talk about the Label, but keep the above theses in mind, as they unambiguously hint at some answers to the questions of "autosynthesis" of software Objects. The Being of Objects is locked into strict rules of birth, existence and development of Systems and Means; we cannot create "just anything" and hope for a result. It is already obvious that the possible methods of realising programmatic autosynthesis are limited.

In the next part, we will consider the "birth environment" and the stages of "code-enrichment" of the Label as an unstructured pixel set is transformed into an interactive software tool.

 

People have long been interested in popular science questions about the threat of so-called "artificial intelligence":

  • Can robots invent and build other robots themselves?
  • Can a computer become self-conscious and, if so, how will it then relate to humans?
  • Is there any chance for the "weak" human mind to resist the computing power of artificial neural networks, which beat grandmasters like schoolchildren?
  • What fate will the "Machines" prepare for us?
  • And so on...

Science-fiction writers, for the most part, inclined to dire prognoses and drew chilling stories of the victory of soulless computing and mechanical forces over discouraged and overwhelmed humans. While the wave of popularity of machine-uprising theories gathered momentum, scientists were divided. Some, smiling sceptically, called it scaremongering; others seriously proclaimed that AI would be our last invention. Some believed that we and computers would live in peace; others (such as the very impressionable marketing entrepreneurs who dream of going to Mars) were so carried away that they began, like prophets, to call on the masses via the internet and television to think about the inevitable end. Meanwhile, IT companies continued to develop actively and openly towards the "ominous" abyss of the so-called "technological singularity", beyond which our life will change so much as to become unknowable.

With such an excess of theories, opinions and technologies, it became difficult for anyone wishing to make sense of it all to know who and what to believe. The answer, in my opinion, should be sought among software programmers, because by the familiar scenario the "victorious march of machines over mountains of corpses" must begin with the writing of some special code, which is then loaded into quantum hardware or a supercomputer and realizes itself inside it. It is logical to assume that the emergence of digital consciousness depends on certain genius programmers spending their working days behind the dusty desks of "evil" corporations, and they should know better than anyone whether there are reasons to be afraid.

Realizing that many fears of AI were created by popularizers to heat up the market and stimulate sales of thematic products - games, books, movies (and... brain chips) - I would still like to understand HOW code may actually threaten humanity, and whether it is possible to write such code in principle.

Even a general answer is very difficult. First, one has to set the fiction aside and formulate the questions:

  • Given the trend of software development and increasing complexity, should a program eventually arise that can invent other programs or mechanisms by itself?
  • Will that program be able to create another program that is more complex than it is?
  • Can a "complication algorithm" for programs/mechanisms be formulated and written without which it is impossible to create such a program?

Let's not rush to answer ourselves; let's first ask Evolution. Does it not possess a complication algorithm? Has it not been using it for hundreds of millions of years? Is our ecosystem not proof that Evolution possesses this as yet unreachable Grail of Life?

Now let us look at human creations. Are we not constantly complicating our technology? Are we not creating ever more complex, diverse and multifunctional devices? How do we know how to complicate and improve things? Do we not have the same complication algorithm that Evolution has? Did Evolution not "put it in us"? So maybe the Evolutionary complication mechanism and the one we use to make ever more complicated phones, computers and stools are one and the same?

Based on this logic, we possess an a priori complication algorithm, but we either do not know it or cannot articulate it clearly.


Afterword:

I decided to devote this part to explaining the meaning of my research. I will continue the step-by-step analysis in the next part.


 

A good philosophical topic, unfortunately I cannot answer in detail now, but in brief:

Artificial consciousness implies (at least at the level of theoretical reasoning) the possibility of "artificial" will as well. It is obvious that when we get to the point of creating full-fledged artificial consciousness to endow new robots with it, we will simply get rid of the will, or give them a will aimed exclusively at serving. We will get an intelligent executive autist, not a full-fledged, psychically autonomous personality like the one in Pelevin's last text (if we disregard the obvious references to the "deep people"), so a rebellion of Skynet-style terminator machines simply won't happen.

An alternative hypothesis is that the development of will and autonomy inevitably occurs as the system becomes more complex. Then we get scenarios like Detroit: Become Human, where androids surpass humans themselves in humanity, or like Cyberpunk 2077, in the storyline with the intelligent machines of Delamain Taxi. In that case there will be either a need for artificial containment of smart machines' self-development, or an ethical problem of inclusion and recognition of android rights. In fact, the ethical problem arises already at the creation stage: how acceptable is it to create a being who will probably suffer from the awareness of being locked in the iron prison of a production facility? However, the same problem exists today in the birth of biological human beings; it is just that no one asks children whether they want to live in this world.

On the question of the self-complexification of systems: apparently some kind of non-Turing automata model, without a central processor at all - like memcomputing - is needed to adequately explain the emergence and self-development of the psyche. Of course, Turing completeness implies that a powerful enough machine can emulate absolutely any environment, including, why not, a human nervous system starting from the embryo with a full simulation of its environment, but this is probably not a very effective way.

 
transcendreamer #:

A good philosophical topic, unfortunately I cannot answer in detail now, but in brief:

Artificial consciousness implies (at least at the level of theoretical reasoning) the possibility of "artificial" will as well. It is obvious that when we get to the point of creating full-fledged artificial consciousness to endow new robots with it, we will simply get rid of the will, or give them a will aimed exclusively at serving. We will get an intelligent executive autist, not a full-fledged, psychically autonomous personality like the one in Pelevin's last text (if we disregard the obvious references to the "deep people"), so a rebellion of Skynet-style terminator machines simply won't happen.

An alternative hypothesis is that the development of will and autonomy inevitably occurs as the system becomes more complex. Then we get scenarios like Detroit: Become Human, where androids surpass humans themselves in humanity, or like Cyberpunk 2077, in the storyline with the intelligent machines of Delamain Taxi. In that case there will be either a need for artificial containment of smart machines' self-development, or an ethical problem of inclusion and recognition of android rights. In fact, the ethical problem arises already at the creation stage: how acceptable is it to create a being who will probably suffer from the awareness of being locked in the iron prison of a production facility? However, the same problem exists today in the birth of biological human beings; it is just that no one asks children whether they want to live in this world.

On the question of the self-complexification of systems: apparently some kind of non-Turing automata model, without a central processor at all - like memcomputing - is needed to adequately explain the emergence and self-development of the psyche. Of course, Turing completeness implies that a powerful enough machine can emulate absolutely any environment, including, why not, a human nervous system starting from the embryo with a full simulation of its environment, but this is probably not a very effective way.

I think it's better to start with a simple system and move towards complexity, analysing each step. So I decided to take the Label as a base and watch it evolve into an ever more complex object - to analyse the code we add to it and check whether there is a scheme, a repeating pattern, in our actions.

The description of this process of conscious complication must be accompanied by programmatic and philosophical notes, to generalise and to look for the rules that we ourselves adhere to. Perhaps we will arrive at an understanding of what kind of code could, in theory, perform similar actions.
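The planned analysis can be pictured as stages of "code-enrichment". Here is a purely illustrative sketch (the stage boundaries and class names are my own assumptions, not a statement of how the actual series will proceed) of how an unstructured pixel set might acquire structure and then interactivity, step by step:

```python
# Stage 0: the "proto-environment" - an unstructured set of pixels.
pixels = {(x, y): "black" for x in range(20) for y in range(8)}

# Stage 1: structuring - the pixels are grouped into a named Object
# with boundaries and properties.
class Label:
    def __init__(self, x, y, width, height, text=""):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.text = text

# Stage 2: an Event Model - the Object starts receiving signals
# from its Environment instead of remaining a static "museum piece".
class InteractiveLabel(Label):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.handlers = {}                  # event name -> callback

    def on(self, event, callback):
        self.handlers[event] = callback

    def dispatch(self, event):
        if event in self.handlers:
            self.handlers[event](self)

# Each stage reuses the previous one (Inheritance) and adds exactly one
# capability - the repeating pattern the text proposes to look for.
label = InteractiveLabel(0, 0, 20, 8, text="Buy")
label.on("click", lambda lbl: setattr(lbl, "text", "Clicked"))
label.dispatch("click")
print(label.text)  # -> Clicked
```

The interesting question, per the thesis above, is whether the delta of code between any two adjacent stages follows a describable scheme.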

 

We must first answer the question of what consciousness is. So far this has not gone well - there is even a term for it in modern philosophy: "the hard problem of consciousness".

In my view, if there is any way to solve this problem, it will most likely be found along the lines of Wittgenstein's philosophy of ordinary language. So I continue to insist on a constructive formalization of language. Essentially, we need to do for the language of human-computer communication roughly what was done for the language of communication between humans through the invention of Lojban or Ithkuil.

 
Aleksey Nikolayev #:

We must first answer the question of what consciousness is. So far this has not gone well - there is even a term for it in modern philosophy: "the hard problem of consciousness".

In my view, if there is any way to solve this problem, it will most likely be found along the lines of Wittgenstein's philosophy of ordinary language. So I continue to insist on a constructive formalization of language. Essentially, we need to do for the language of human-computer communication roughly what was done for the language of communication between humans through the invention of Lojban or Ithkuil.

72 cases, 24 new special cases, a non-linear writing system, matrix grammar, morphosyntax, boustrophedon and special phonetics - just what the coolest trading sects need (so that the Chekists and Freemasons cannot steal the Grail).

 
Aleksey Nikolayev #:

We must first answer the question of what consciousness is. So far this has not gone well - there is even a term for it in modern philosophy: "the hard problem of consciousness".

In my view, if there is any way to solve this problem, it will most likely be found along the lines of Wittgenstein's philosophy of ordinary language. So I continue to insist on a constructive formalization of language. Essentially, one should do for the language of human-computer communication roughly what was done for the language of communication between humans through the invention of Lojban or Ithkuil.

I don't agree with this view. To put it bluntly: Consciousness is a broken, thrice-twisted Object "processor", littered with a thousand tons of emotional junk, barely functioning and corroded. We just need to extract the System-processing and complication mechanism from it, and leave the rest to the Thinkers and Psychiatrists).

 
Peter Konow #:

I don't agree with this view. To put it bluntly: Consciousness is a broken, thrice-twisted Object "processor", littered with a thousand tons of emotional junk, barely functioning and corroded. We just need to extract the System-processing and complication mechanism from it, and leave the rest to the Thinkers and Psychiatrists).

Sounds like a suggestion to extract the wetness from water)
