Artificial neural networks. - page 9

 
tol64:
But I also raised an argument against it. The camera has surpassed the capabilities of the eye many times over, especially if it is also a telescope. ))

A doubtful statement: all of astronomy was worked out before the invention of the telescope by people with ordinary eyes. The main thing in this business is not the transmission of the image but its interpretation, although I agree that a man with 10-metre eyes would be a scary sight :)

By the way, the computer has also surpassed human beings in many respects, although it was not created in the likeness of the brain.

 
IgorM:

...

.... And the sad thing is that by filtering out unnecessary information and creating a mathematical model of the market, you can build an effective TS without an NS.

And why is that sad? )) On the contrary. If a TS is found and it is quite effective without an NS, then we should be glad. ))
 
Urain:

A doubtful statement: all of astronomy was worked out before the invention of the telescope by people with ordinary eyes. The main thing in this business is not the transmission of the image but its interpretation, although I agree that a man with 10-metre eyes would be a creepy sight :)

By the way, the computer has also surpassed man in many respects, although it was not created in the likeness of the brain.

But the current computer model is already "creaking at the seams". It is difficult, almost impossible, to develop it further. That is why Kwabena and others are trying to implement a model similar to the brain.

And if a man were enlarged in proportion to his 10 m telescope eyes, he would not look creepy any more. Or, of course, if you shrank the telescopes down to the size of current eyes. )))

 
tol64:

And why is that sad? )) On the contrary. If a TS is found and it is quite effective without an NS, then we should be glad. ))

What's sad is the wasted time: instead of NS, you could have engaged directly in data analysis and filtering.

PS: I didn't mean to, but I will still say what, in my view, all beginners look for in NS, at least figuratively: if instead of the beautiful name "neural networks" it were called, say, "exponential regression mathematical fitting", there would be far less interest in, and fewer expectations of, such a mathematical tool. Thanks to the impressive name, people expect a miracle from a "clever slide rule".

 
tol64:

But the current computer model is already "creaking at the seams". It is difficult, almost impossible, to develop it further. That is why the same Kwabena and others are trying to implement a brain-like model.

And if a man were enlarged in proportion to his 10 m telescope eyes, he would not look creepy any more. Or, of course, if you shrank the telescopes down to the size of current eyes. )))

Computer mathematics itself merely implemented methods that are 300 years old, which is why it has reached a dead end.

The crux of the problem is that mathematics develops practically no parallel methods.

The main thing worth borrowing is the parallelism of methods, and NS are a step forward in this respect, but copying the way NS work from natural NS is a step backward.

 
tol64:

It's great that you know such researchers personally. Do you happen to know Henry Markram? His prediction in 2009 was 10 years. :) I wonder where he stands now.

Henry Markram is building a brain in a supercomputer.

No, not personally, but I'm familiar with his Blue Brain project. Markram believes that we will only be able to understand and copy the way our brains work if we accurately model how a neuron works (ion channels, differential equations describing ion movement and electrical impulse propagation through the body of the neuron, delays, etc.). In 2009 IBM announced to the world that they had modelled a cat brain. Markram was quite embittered (http://spectrum.ieee.org/tech-talk/semiconductors/devices/blue-brain-project-leader-angry-about-cat-brain), claiming that the IBM researchers had used point-coupled neurons, i.e. simple mathematical models (like the neurons of classical networks, with their sum of weighted inputs and non-linear activation function).

Another interesting scientist in this field is Penrose. He claims that even knowing all the details of ion exchanges, chemical reactions and impulse propagation through the neuron body is not enough to understand and explain how the brain works; in his view it is only possible by taking into account quantum reactions within neurons (the Hameroff-Penrose theory). Read here: https://en.wikipedia.org/wiki/Quantum_mind. Penrose also claims that through these quantum reactions our brain is able to 'go' to other dimensions and draw knowledge from there. Look up his lectures (Roger Penrose) on YouTube. They are wildly interesting.

I am not familiar with Markram's predictions. Until the 1990s, neurobiologists believed that neurons exchanged information as a count of impulses, which could be described by a single number; that is where classical networks came from. In the mid-'90s scientists found that the timing of individual impulses matters more than their count over a period of time. Markram and other scientists discovered a new rule for changing synaptic weights, namely STDP. Over the last 10 years many neurobiologists have begun to build so-called spiking networks, in which information travels as pulses (like a 0/1 binary signal) and the weights change according to STDP. These neuroscientists began to argue that the reason classical networks did not lead to robots was that they described information incorrectly (numbers instead of pulses), as well as the neuron (a sum of weighted inputs instead of differential equations) and the weight updates (Hebb's rule instead of STDP). But unfortunately these new spiking networks have not yet surpassed classical networks in capability, and they require much more computing power. So there is not much progress in neurobiology so far, and we should not expect new networks capable of revealing patterns.
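To make the STDP rule mentioned above concrete, here is a minimal sketch of the standard pair-based form (Python; the amplitudes and time constants are illustrative values of my own, not taken from this thread): a pre-before-post spike pair strengthens the synapse, a post-before-pre pair weakens it, and the effect decays exponentially with the spike-time difference.

```python
import math

# Illustrative constants (hypothetical values chosen only for this example)
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants, milliseconds

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (pair-based STDP).

    If the presynaptic spike precedes the postsynaptic one, the synapse is
    strengthened; otherwise it is weakened. The magnitude decays
    exponentially with the time difference between the spikes.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)    # potentiation
    return -A_MINUS * math.exp(dt / TAU_MINUS)      # depression

# Pre-spike at 5 ms followed by post-spike at 12 ms -> small positive change;
# the reversed order gives a negative change.
print(stdp_delta_w(5.0, 12.0), stdp_delta_w(12.0, 5.0))
```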

 
papaklass:


That is, if you created a model that described how the weights change as the market moves, the results might be different, not so depressing. Have you done this kind of research?

Do this at your leisure.

This would require a second network, which would look for patterns in how the weights of the first network change in response to market movements. Then you would need a third network, which would look for dependencies in the second network as the first network and the market change. Then a fourth ...

Suppose we have created a model that describes how the weights change with the market. What do we do with it next?

 
Reshetov:

Do it at your leisure.

This would require a second network, which would look for patterns in how the weights of the first network change in response to market movements. Then you would need a third network, which would look for dependencies in the second network as the first network and the market change. Then a fourth ...



And here I've been taking money out of the market for 3 years, not knowing that after the first network a second one would be needed ...

For a person with an analytical mind like me, it is dangerous to read such threads: I stop earning because my mind wanders off onto the wrong things....

 
St.Vitaliy:

And here I've been taking money out of the market for 3 years, not knowing that after the first network a second one would be needed...

Well, with a face like yours, you don't need to take money out of the market; you can just print it.
 
papaklass:

That is, if you created a model that described how the weights change as the market moves, the results might be different, not so depressing. Have you done this kind of research?

No, I haven't. I don't think anything good will come out of it. Here are my thoughts. Suppose that instead of a network we use polynomial regression, which is another method of universal non-linear modelling. Our task is then to fit the polynomial

y = a0 + a1*x + a2*x^2 + ...

to our data y(x) by finding coefficients a0, a1, a2, ... that reduce the error of the polynomial model. We know that such a model is only good on the data on which the fit was performed. Essentially, you propose to make the model coefficients a0, a1, a2, ... (the same as the network's weights) depend on the input data, in order to make the model more robust on unseen data, i.e. to make them a1(x), a2(x), ... OK. We describe each coefficient by another polynomial:

a1 = b0 + b1*x + b2*x^2 +...

a2 = c0 + c1*x + c2*x^2 +...

...

Substitute these coefficients into our first model and what do we get? For example, substituting a1 = b0 + b1*x into the a1*x term already gives b0*x + b1*x^2, i.e. a second-order term. The result is the same polynomial, only of higher order, which describes the training data more accurately but performs worse on new data. It is exactly the same with networks: one network trains another, which trains a third, and so on; it is nothing but one big network. We will not get more accurate behaviour on new data. But if anyone wants to test this idea, let us know the results.
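To check this claim numerically, here is a minimal sketch (Python/NumPy, with a hypothetical noisy series standing in for y(x), since no real data is given here): as the polynomial order grows, the training error keeps falling while the error on new data typically grows, which is the overfitting effect described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a smooth function plus noise stands in for y(x).
x_train = np.linspace(-1.0, 1.0, 30)
x_test = np.linspace(1.0, 1.5, 15)                  # new, unseen region
true_f = lambda x: np.sin(2.0 * x)
y_train = true_f(x_train) + 0.1 * rng.standard_normal(x_train.size)
y_test = true_f(x_test)

for order in (2, 5, 9):
    coeffs = np.polyfit(x_train, y_train, order)    # least-squares fit of a0..aN
    mse_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    mse_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"order {order}: train MSE {mse_train:.4f}, test MSE {mse_test:.4f}")
```

The exact numbers are unimportant; the pattern (training error down, test error up as the order increases) is the point.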
