Machine learning in trading: theory, models, practice and algo-trading - page 601

 
Vladimir Perervenko:

This is a link to Habr. The library link is https://keras.rstudio.com/index.html.

Read primary sources.

Good luck

Yes, I see. Keras is a high-level neural networks API developed with a focus on enabling fast experimentation. And R, as expected, is just the interface.

Thanks.

The primary sources are actually here: https://keras.io/, and on GitHub: https://github.com/keras-team/keras

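To illustrate the "fast experimentation" point, here is a minimal Keras sketch in Python, in the keras.io Sequential style. The data and layer sizes are invented for illustration and have nothing to do with trading.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy binary-classification data: 1000 samples, 20 features (made up).
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# Define, compile and train a tiny feed-forward network.
model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)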
 
Vizard_:

In the first half of 2016 the world heard about many developments in the field of neural networks: their algorithms were demonstrated by Google (the AlphaGo game network), Microsoft (a number of image-recognition services), the startups MSQRD, Prisma and others...

Few people know about it, but other things were happening in parallel. The best minds on the planet came together to create a unique system, one that could already be called AI. The final product was implemented by "God's own programmers", who work easily with any amount of data, on any hardware, even on a squared sheet of paper. In short: Mishan, catch! But shh, quietly...)))


Come on, I hope it's not a malicious virus... who knows what you've slipped in there. I once knew a user nicknamed Klot. He used to say about himself that he could "program the hell out of the bald man", and he really was good at programming....

I couldn't even look through it, because my spreadsheet software is from 2003... I guess it wasn't meant to be :-(

 

It's a "gap that's always with you" kind of thing.

 
Mihail Marchukajtes:

I couldn't even look through it, because my spreadsheet software is from 2003...

It's time to rock.

Excel 2003 is hardcore.

 
Mihail Marchukajtes:

Come on, I hope it's not a malicious virus... who knows what you've slipped in there. I once knew a user nicknamed Klot. He used to say that he could "program the hell out of the bald man", and he really was good at programming....

I couldn't even look through it, because my spreadsheet software is from 2003... I guess it wasn't meant to be :-(


Google Sheets, no?

but there's some kind of homemade perceptron in there :)

 
toxic:

Sometimes I suspect that you are to blame for Reshetov's death. Sorry for the outburst, I couldn't help it.


Oh, come on.... Of course I tossed him some variants of my ideas, but I think he approved at most 10% of them, and even that is probably an exaggeration. The fact is that in ML there are two categories of professionals: developers (programmers) and engineers (users). Guess which category I'm in????

 

So I'm thinking of taking Java training and moving on. I feel that I don't understand a number of key points of the language, and I was offered a year-long course for 150 thousand at the university mila.ru..... That's how it is. He (Reshetov) stopped at one of the final stages: predictor selection, that is, computing their importance and doing some kind of selection, because I had ordered those two pieces in the code. But trust a specialist in the actual process of training, in analysing the resulting model and in selecting it (that is what engineers are valued for). I can give an assessment of the following kind.

1. JPrediction has the ability to generalize. Not as much as I would like, but out of 10 models obtained, 50-80% will be generalized models; the rest will be duds. They will be generalized to different degrees, and a model with a better training result may earn less in the future than a model with a worse one.

2. The problem with predictor selection is that I give it 100 predictors, it builds a model from at most 9 of them, and doing so takes 3 days on 3 cores. Logically, the more inputs, the more parameters the model has and the more factors it takes into account; in practice, though, the simpler model works better in the future than a model with the same training result but more inputs. I consider models with 5 or more inputs, because with fewer inputs you get the effect where the model is lucky for a while and then stops being lucky, and as a rule that while is not long, because the model is too small. A sketch of this kind of selection follows below.
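JPrediction's internal algorithm isn't shown in this thread, so as a generic illustration of why selection cost blows up with the number of candidates, here is a greedy forward-selection sketch in Python with scikit-learn. All names and parameters are my own; this is not JPrediction's actual method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_inputs=9):
    # Hypothetical helper (not JPrediction): greedily add the predictor
    # that most improves 5-fold CV accuracy, stopping at max_inputs or
    # when no candidate helps. Cost is roughly
    # n_candidates * max_inputs model fits, hence the long runtimes.
    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_inputs:
        score, j = max(
            (cross_val_score(LogisticRegression(max_iter=1000),
                             X[:, selected + [j]], y, cv=5).mean(), j)
            for j in remaining)
        if score <= best:          # no predictor improves the model: stop
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return selected, best

# Toy run on random data (real use would pass trading predictors):
X = np.random.rand(300, 20)        # 20 candidates keeps the demo fast
y = np.random.randint(0, 2, 300)
print(forward_select(X, y, max_inputs=5))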

I have tried all sorts of things with the data, and I even got JPrediction to overfit badly: if I take the same data and retrain on it again, the training score jumps sharply, by about 20%. That is, it was 75% and became 90%, and at the same time there is a terrible drawdown out of sample.
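That jump from 75% to 90% on reused data is the classic overfitting signature. A minimal sketch of the standard guard, scoring on a holdout that never takes part in training (generic scikit-learn with placeholder data, nothing JPrediction-specific):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 10)              # placeholder predictors
y = np.random.randint(0, 2, 1000)         # placeholder labels

# shuffle=False keeps the holdout strictly "later" than the training
# data, which is the honest split for time-ordered trading data.
X_tr, X_oos, y_tr, y_oos = train_test_split(X, y, test_size=0.3, shuffle=False)

model = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
print("train accuracy:", model.score(X_tr, y_tr))    # near 1.0: memorized
print("OOS accuracy:  ", model.score(X_oos, y_oos))  # near 0.5: no real signal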

Services for ML are now starting to appear; I use AWS. It has a machine learning section for building models. So I build my own model there, from the same file. On AWS the quality of the trained model is many times worse, but it takes 5 minutes to build, and there are not that many settings.

I would very much like to run two identical files through different AI systems and compare the results on a traded out-of-sample (OOS) section, but on this forum, unfortunately, there are no specialists who have complete AI systems. Everyone here is still searching.... alas...... Trickster!!!! Did you find it???? Your AI.......

 

For those reading/studying the book:

Google Colab has GPU support,

and TensorFlow is already installed.

In short, you don't need to install anything at all: just open it in a browser and work. The only tricky part is working with files via the API and Google Drive.
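The Google Drive part is only a few lines: the standard Colab idiom mounts Drive as a filesystem path. This runs only inside a Colab notebook, and the file name below is hypothetical.

from google.colab import drive

drive.mount('/content/drive')    # opens an OAuth prompt in the notebook

# After mounting, Drive content is an ordinary filesystem path
# ("data.csv" is a hypothetical file name):
with open('/content/drive/My Drive/data.csv') as f:
    print(f.readline())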


A performance test of my video card:

Time (s) to convolve 32x7x7x3 filter over random 100x100x100x3 images (batch x height x width x channel). Sum of ten runs.
CPU (s): 9.76737689972
GPU (s): 0.161982059479
GPU speedup over CPU: 60x

I have no idea how it is implemented, but it works :)
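For reference, a rough reconstruction of what such a timing test looks like, written for TensorFlow 2.x (the thread dates from the TF1 era, so this is an assumption, not the notebook's actual code). Absolute numbers will vary by hardware.

import time
import tensorflow as tf

# Random "images" and a 32-filter 7x7x3 kernel, matching the test above.
# TF2-style reconstruction; the original Colab notebook likely used TF1.
images = tf.random.normal([100, 100, 100, 3])   # batch x height x width x channel
kernel = tf.random.normal([7, 7, 3, 32])

def bench(device):
    with tf.device(device):
        x = tf.identity(images)
        start = time.time()
        for _ in range(10):                      # "sum of ten runs"
            out = tf.nn.conv2d(x, kernel, strides=1, padding='SAME')
        out.numpy()                              # block until the device finishes
        return time.time() - start

cpu_s = bench('/CPU:0')
gpu_s = bench('/GPU:0')                          # needs a GPU runtime enabled
print('CPU (s):', cpu_s, 'GPU (s):', gpu_s, 'speedup: %.0fx' % (cpu_s / gpu_s))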

and here R, of course, immediately falls way behind

 
Maxim Dmitrievsky:

For those reading/studying the book:

Google Colab has GPU support,

and TensorFlow is already installed.

In short, you don't need to install anything at all: just open it in a browser and work. The only tricky part is working with files via the API and Google Drive.


A performance test of my video card:

Time (s) to convolve 32x7x7x3 filter over random 100x100x100x3 images (batch x height x width x channel). Sum of ten runs.
CPU (s): 9.76737689972
GPU (s): 0.161982059479
GPU speedup over CPU: 60x

I have no idea how it is implemented, but it works :)


Now that's useful. What kind of lab is this?

 
Mihail Marchukajtes:

Now that's useful. What kind of lab is this?


Come on, Mikhail,

it's a virtual machine, and it uses their own GPUs :)
