Machine learning in trading: theory, models, practice and algo-trading - page 1838

 
Mihail Marchukajtes:

I'm duplicating the video here, in case anyone is interested. And I'm a regular in this thread, so what the hell... Maybe some wizard out there will want to argue, etc. :-)

https://youtu.be/TlNk3fKkUxo

Strong and weak artificial intelligence

 
Mihail Marchukajtes:

I'm duplicating the video here, in case anyone is interested. And I'm a regular in this thread, so what the hell... Maybe some wizard out there will want to argue, etc. :-)

https://youtu.be/TlNk3fKkUxo

Misha promised a grail, but fell into populism 😄
 
Mihail Marchukajtes:

I'm duplicating the video here, in case anyone is interested. And I'm a regular in this thread, so what the hell... Maybe some wizard out there will want to argue, etc. :-)

https://youtu.be/TlNk3fKkUxo

What is there to argue with?

Just the opinion of an average person who thinks he knows what the IT giants are doing in this field ))))

 
Mihail Marchukajtes:

I'm duplicating the video here, in case anyone is interested. And I'm a regular in this thread, so what the hell... Maybe some wizard out there will want to argue, etc. :-)

https://youtu.be/TlNk3fKkUxo

1) Prof. Seveliev says it is not that each neuron breaks 100 thousand connections per day, but that each neuron has only about 100 thousand connections in total, and on average 3 connections per day are created and broken.
2) A system with retraining fits this quite well. For example, we feed in a new batch of retraining data (contradicting previously memorized information), and the connection weight between some neurons is recalculated and becomes zero. That can reasonably be treated as an analogue of the physical breaking of a connection between real neurons. And another weight recalculated from 0 to some value > 0 is an analogue of a new connection being created.
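A minimal sketch of the weight-through-zero analogy described above (my illustration, not code from the post): a single connection weight is first trained on one relationship, then retrained on contradicting data, which drives the weight through zero and out the other side.

```python
# Hedged illustration: one connection weight y_hat = w * x under retraining.
# The first batch teaches y = +x, so the weight settles near +1; a second,
# contradicting batch teaches y = -x, and gradient descent drives the weight
# through zero to -1. Passing through zero is the loose analogue of a
# connection being broken, and growing again is a new connection forming.

def train(w: float, xs, ys, lr: float = 0.1, epochs: int = 50) -> float:
    """Plain SGD on squared error for the one-weight model y_hat = w * x."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = w * x - y      # prediction error
            w -= lr * err * x    # gradient of 0.5 * err**2 w.r.t. w
    return w

w1 = train(0.0, [1.0, 2.0], [1.0, 2.0])       # learn y = +x  ->  w1 near +1
w2 = train(w1, [1.0, 2.0], [-1.0, -2.0])      # contradicting data -> w2 near -1
```

Of course, in a standard dense network the "broken" connection still exists and simply transmits zero, which is exactly the caveat raised later in the thread.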
 
Igor Makanu:

There's a good article on Habr:

How to understand that a neural network will solve your problem. A pragmatic guide

Some theoretical questions came up: is it possible to train a NN:

1. as a random sequence generator, an analogue of the rand() function

2. as a function transforming ulong into datetime, i.e. we feed a ulong number as input and get year/month/day/hour/minute (to a specified accuracy) as output

1) Unlikely. ML can only memorize and, if necessary, generalize information.

2) Quite possible: just generate training data for 10,000 years ahead and train the model. But there is a catch. Besides the one-day correction every 4 years (leap years), there is also a correction at century boundaries (a century year is a leap year only if it is divisible by 400), which keeps the calendar aligned with the planet's astronomical position. The 13-day difference with the Julian calendar accumulated because for a long time nobody knew these adjustments were needed. In general, there will be many such corrections over 10,000 years.
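The training-data idea above can be sketched as follows (my hedged illustration; the epoch choice and function names are mine). Each integer input, here interpreted as seconds since 1970-01-01, is labeled with its calendar fields, and the Gregorian leap rule is exactly the irregularity the network would have to memorize piece by piece.

```python
from datetime import datetime, timedelta

def is_leap(year: int) -> bool:
    # Gregorian rule: every 4th year is a leap year, except century years,
    # unless the century is divisible by 400 (1900: no, 2000: yes).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def label(seconds: int) -> tuple:
    """One training sample: integer input -> (year, month, day, hour, minute)."""
    d = datetime(1970, 1, 1) + timedelta(seconds=seconds)
    return (d.year, d.month, d.day, d.hour, d.minute)

sample = label(0)        # the epoch itself: (1970, 1, 1, 0, 0)
```

Looping `label` over a range of integers would generate the supervised dataset the post describes; whether a network generalizes the 100/400 exceptions rather than memorizing them is the open question.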

 
Maxim Dmitrievsky:
Misha promised a grail, but fell into populism 😄
Well, I didn't promise any grail, so... just the usual explanations, nothing more... It had just been building up, so it came out :-)
 
elibrarius:
1) Prof. Seveliev says it is not that each neuron breaks 100 thousand connections per day, but that each neuron has only about 100 thousand connections in total, and on average 3 connections per day are created and broken.
2) A system with retraining fits this quite well. For example, we feed in a new batch of retraining data (contradicting previously memorized information), and the connection weight between some neurons is recalculated and becomes zero. That can reasonably be treated as an analogue of the physical breaking of a connection between real neurons. And another weight recalculated from 0 to some value > 0 is an analogue of a new connection being created.

1. Well, I misspoke there, I'm not arguing. I was talking about the total number of connections between neurons, not a single one...

2. That is not a complete break, though: the connection still exists, it just transmits zero. Still, it does approximate the effect. Well done!!!

 
And yes, I hope EVERYONE noticed that I don't look exactly like Yura Reshetov? Or rather, not like him at all, for that matter...
 
Igor Makanu:
as a random sequence generator, an analogue of the rand() function

If you feed rand() output to it, the "brains" most likely won't be enough. I ran an LSTM on random data for a while and saw nothing of the kind. But if you teach it the steps, the number-system conversion, the algorithm, then it should work.

In general, it would be interesting to test this on very powerful hardware.
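A quick sanity check behind the claim above (my sketch, not the poster's experiment): on independent uniform noise, no model can beat the best constant predictor, whose mean squared error equals the variance of U(0, 1), i.e. 1/12. Any learner that appears to beat this bound on rand() output is overfitting.

```python
import random

# Baseline for predicting i.i.d. uniform noise: the best constant
# predictor is the sample mean, and its MSE approaches Var(U(0,1)) = 1/12.
random.seed(42)                                   # reproducible run
xs = [random.random() for _ in range(100_000)]
mean = sum(xs) / len(xs)
mse = sum((x - mean) ** 2 for x in xs) / len(xs)  # near 1/12, about 0.083
```

An LSTM trained on such a sequence should converge to roughly this MSE; doing noticeably better on a held-out continuation would suggest the "random" source is not actually random.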

 
Mihail Marchukajtes:

I'm duplicating the video here, in case anyone is interested. And I'm a regular in this thread, so what the hell... Maybe some wizard out there will want to argue, etc. :-)

https://youtu.be/TlNk3fKkUxo

Ahahahaha )))) GO!!!
