Machine learning in trading: theory, models, practice and algo-trading - page 2250

 
mytarmailS:

I read halfway through and stopped with a chuckle here:


random numbers. A neural network using these weights may have the correct input-output relationship, but why these particular weights work remains a mystery. This mystical property of neural networks is the reason many scientists and engineers avoid them. Think of all the science fiction spread by computer renegades.


I think the author is very far from neural nets, even further than I am))

The book is from '97, on DSP. Of its 33 chapters, this is the only one about networks. The conventional wisdom is that nets are black boxes. And it's a translation, after all.

Read it diagonally; it's pretty interesting.

 
Rorschach:

The book is from '97, on DSP. Of its 33 chapters, this is the only one about networks. The conventional wisdom is that nets are black boxes. And it's a translation, after all.

Read it diagonally; it's pretty interesting.

Read it...

An idea was born: what if we take a discrete cosine transform (DCT) and have the network select those coefficients whose sum gives a clean signal...

 
mytarmailS:

Read it...

An idea was born: what if we take a discrete cosine transform (DCT) and have the network select those coefficients whose sum gives a clean signal...

I tried that yesterday.

The result is the same as with the LPF where alpha + beta = 1 and alpha or beta is negative...

You look at the chart - oh yes! this is the real thing, hard to take in at a glance...

Then you compare the two FFT plots - the imaginary and the real part -

and tear your hair out while the neighbors listen to a stream of profanity.
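A minimal sketch of the idea in Python, assuming scipy is available: take the DCT of a price window, keep only the k largest coefficients, and invert. Here the selection is done simply by coefficient magnitude rather than by a trained network, and the window length and k are arbitrary illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=512))    # synthetic random-walk "price"

coeffs = dct(price, norm="ortho")          # DCT-II of the window
k = 10                                     # keep the 10 largest harmonics (assumed)
kept = np.zeros_like(coeffs)
top = np.argsort(np.abs(coeffs))[-k:]      # indices of the k largest coefficients
kept[top] = coeffs[top]

clean = idct(kept, norm="ortho")           # the "clean" reconstructed signal
```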

 
mytarmailS:

Read it...

An idea was born: what if we take a discrete cosine transform (DCT) and have the network select those coefficients whose sum gives a clean signal...

In the FFT, the frequencies are taken from a fixed grid. It would be interesting to find frequencies that are not bound to a fixed step, and that persist as long as possible as the window shifts.

 
Rorschach:

In the FFT, the frequencies are taken from a fixed grid. It would be interesting to find frequencies that are not bound to a fixed step, and that persist as long as possible as the window shifts.

You can find them, no NN needed, but they (the frequencies) won't hold up in the future...

but you can try to make the signal "clean" by keeping only a few important harmonics.
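For reference, a hedged sketch of finding a frequency that is not tied to the FFT bin grid, without any NN: take the dominant FFT bin as a starting point and refine it by nonlinear least squares, where the frequency is a continuous parameter. The toy signal and all numbers are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1024, dtype=float)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.0137 * t) + 0.3 * rng.normal(size=t.size)  # toy signal

# coarse estimate: the dominant FFT bin (grid-bound, step = 1/N)
spec = np.abs(np.fft.rfft(x))
f0 = np.fft.rfftfreq(t.size)[np.argmax(spec[1:]) + 1]

def sinus(t, a, f, phi):
    return a * np.sin(2 * np.pi * f * t + phi)

# refinement: frequency becomes a continuous parameter, not a grid point
(a, f, phi), _ = curve_fit(sinus, t, x, p0=[1.0, f0, 0.0])
print(f"grid: {f0:.5f}  refined: {f:.5f}")  # refined value lands near 0.0137
```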

 

Fulfilled a long-standing wish: to create a TS whose parameters are controlled by a neural network.


I finally made a simple TS: two MAs, entry on the MA crossover, with the MA periods controlled by the neural network...

What I got is an adaptive filter)


The first chart is the price,

the second is the MA periods controlled by the network,

the third is the balance.

Training: the network was trained to control the periods so as to get the maximum profit...

I'll say right away: this is on the training sample, and without commissions...

The value of the script is in the experience of building it, for new, more complex tasks...
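A rough sketch of the construction described above, assuming numpy: a tiny network maps the last few returns to two MA periods, and the position follows the MA crossover. The weights here are untrained random placeholders; the actual system trained them to maximize profit, which this sketch does not reproduce. As in the post, no commissions are charged.

```python
import numpy as np

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(size=2000)) + 100.0   # synthetic price series

W1 = rng.normal(scale=0.1, size=(16, 8))   # untrained placeholder weights
W2 = rng.normal(scale=0.1, size=(8, 2))

def periods(ret_window):
    """Map the last 16 price increments to two MA periods in [2, 50]."""
    h = np.tanh(ret_window @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid squashes to (0, 1)
    return 2 + (out * 48).astype(int)

position, balance = 0, [0.0]
for t in range(66, len(price)):
    # periods are decided from data up to t-1; the return t-1 -> t is then booked
    p_fast, p_slow = sorted(periods(np.diff(price[t - 17:t])))
    ma_fast = price[t - p_fast:t].mean()
    ma_slow = price[t - p_slow:t].mean()
    position = 1 if ma_fast > ma_slow else -1      # flip on crossover
    balance.append(balance[-1] + position * (price[t] - price[t - 1]))  # no commissions
```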

 

Cool. I'm generating numbers,

then time series.


 
Maxim Dmitrievsky:

Cool. I'm generating numbers,

then time series.

you need to learn to generate profit))))))

 
mytarmailS:

you need to learn to generate profit))))))

then comes profit

You asked me how to generate series with patterns... that's what I'm doing,

but it's a multi-step process.
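One simple way to generate a series with patterns, purely as a hedged illustration (the actual multi-step process is not described in the thread): plant a fixed motif into noise at random positions and integrate the result.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
motif = np.sin(np.linspace(0, 2 * np.pi, 20))   # a fixed 20-bar pattern (assumed)

series = rng.normal(scale=0.5, size=n)          # background noise
for start in rng.choice(n - 20, size=15, replace=False):
    series[start:start + 20] += motif           # plant the motif (overlaps possible)

price = np.cumsum(series)                       # integrate into a price-like path
```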

 
mytarmailS:

I don't know how to implement it yet... maybe there's a simpler way...

I want to create a network that takes market quotes as input and outputs a more "predictable" series.

But that needs a measure of "predictability":

a mapping of the series into another space (a distribution?) where the more predictable parts lie closer to the mean.

Try coders (autoencoders).

Say, in batches of 100-500 increments, with labels. Then you sample from the decoder, pulling from the top of the distribution.

You can use entropy estimates as labels; then from a conditional decoder you take the top of the class with the lowest entropy. Though that's a bit of a samurai path.

why would you get the most predictable ones there? it's because...
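A small sketch of the entropy-as-labels idea above, assuming numpy: score windows of increments by a histogram entropy estimate, so the lowest-entropy windows can serve as the "most predictable" class. The window length and bin count are arbitrary choices.

```python
import numpy as np

def window_entropy(increments, bins=16):
    """Shannon entropy (in nats) of a histogram of the increments."""
    counts, _ = np.histogram(increments, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(4)
returns = rng.normal(size=5000)                 # stand-in for real increments
win = 200                                       # inside the 100-500 range mentioned above
labels = [window_entropy(returns[i:i + win])
          for i in range(0, len(returns) - win, win)]
# the lowest-entropy windows are the candidate "most predictable" class
```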
