Machine learning in trading: theory, models, practice and algo-trading - page 2451

 
Alexei Tarabanov #:

It's all ahead of us.

What do you mean?

 

My granddaughter is learning Farsi and Chinese. She also asks for Japanese so she can spy on them.

 
Alexei Tarabanov #:

My granddaughter is learning Farsi and Chinese. She also asks for Japanese so she can spy on them.

Cool, of course, but not forward-looking (the parents' fault); it would be better to learn programming languages, which also develop thinking...

Machine translators are very good now, and in 10 years I think it will be possible to implant chips in the brain with many benefits, including translation of any language. They have already learned how to interface with the brain; it is only a matter of time...

So learning many languages is like dreaming of becoming a pilot, a truck driver or a cab driver while not noticing that a Tesla is already driving past on full autopilot... Very soon many professions will be gone forever, and that is worth thinking about...

 
On a first-name basis with Python
 
Alexei Tarabanov #:
On a first-name basis with Python

cool

 
Maxim Dmitrievsky #:
Half of the neuron activations correspond to class 1, half to the other class - based on that primitive logic. If they are skewed, perhaps the classes are poorly balanced. And extreme values seem to make the gradient explode or vanish.

Max, don't jump to conclusions.

The word "maybe" in your post suggests that you haven't thought about that formulation of the question, have you?

A neural network in general, and an MLP in particular, is a very flexible thing: the same network can partition the same set of features identically with different values of the neuron weights... Right? So the question arises: which of these weight sets is more robust?

And with the second person who responded to my post, I don't think it is worth continuing the dialogue - it is pointless.
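
To illustrate that point, here is a minimal numpy sketch (the toy data and the weight-rescaling trick are assumptions for illustration only, not anything posted in the thread): with a ReLU hidden layer, multiplying the first layer by a constant and dividing the second layer by the same constant gives a completely different point in weight space, yet exactly the same partition of the features.

```python
# Minimal sketch: two different weight sets of the same MLP give the identical partition.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # hypothetical toy feature matrix

# one hidden ReLU layer, one output neuron
W1 = rng.normal(size=(5, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1)); b2 = rng.normal(size=1)

def predict(X, W1, b1, W2, b2):
    h = np.maximum(X @ W1 + b1, 0.0)       # ReLU activations
    return (h @ W2 + b2 > 0).astype(int)   # class labels 0/1

# ReLU is positively homogeneous, so scaling layer 1 by c and layer 2 by 1/c
# changes every weight but not the function the network computes
c = 7.3
same = np.array_equal(predict(X, W1, b1, W2, b2),
                      predict(X, c * W1, c * b1, W2 / c, b2))
print(same)   # True: same partition of the features, different weight values
```

Which of the two weight sets is "more robust" is exactly the open question: the partition alone does not distinguish them.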

 
Alexei Tarabanov #:

Zhi, shi are written with an "i" (the Russian spelling rule).

There are exceptions. ;)

 
Andrey Dik #:

Max, don't jump to conclusions.

The word "maybe" in your post suggests that you have not thought about such a statement of the question, right?

The neural network in general and MLP in particular is a very flexible thing, and the same set of features can be partitioned equally by the same network but at different values of neuron weights.... Right? - So the question arises, which of these variants of the set of weights is more robust?

And with the second, who responded to my post, I do not think it is necessary to maintain a dialogue anymore - it is pointless.

Don't be absurd. You were correctly told that you should choose a model on the test set. Better still, with cross-validation or walk-forward.

Although experience is gained by practice... Keep studying :) Then you'll come around to the tests.
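
For reference, a rough sketch of what walk-forward style evaluation can look like with scikit-learn's TimeSeriesSplit (the data, the LogisticRegression model and all parameters are placeholders assumed purely for illustration): each fold trains only on the past and tests on the next chunk, and models are compared by the resulting out-of-sample scores.

```python
# Sketch of walk-forward style evaluation: train on the past, test on the next chunk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                                   # time-ordered toy features
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # toy labels

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

# the mean and the spread of these out-of-sample scores are what you compare models by
print(np.round(scores, 3), round(float(np.mean(scores)), 3))
```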

 
elibrarius #:

Don't be absurd. You were correctly told that you should choose a model on the test set. Better still, with cross-validation or walk-forward.

Although experience is gained by practice... Then you'll come around to the tests.

On the test set...? A test is like the derivative of a function: the same tangent line, at the same point, can belong to two different functions.

I don't want to offend any of the old-timers in this thread, but after so many years you should know the basics. And as for experience - that's just ridiculous, really.
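
A tiny numeric illustration of that tangent analogy (the functions are chosen arbitrarily, just to make the point): f(x) = x^2 and its tangent line g(x) = 2x - 1 have the same value and the same slope at x = 1, yet differ everywhere else, just as two different models can produce the same score on one test.

```python
# Two different functions share the same value and slope at one point.
import numpy as np

f = lambda x: x**2        # "model A"
g = lambda x: 2*x - 1     # "model B": the tangent line to f at x = 1

x0 = 1.0
print(f(x0) == g(x0))     # True: identical "score" at x0 (f'(x0) = 2*x0 = 2 = g'(x0) too)

xs = np.linspace(-2.0, 3.0, 6)
print(np.round(f(xs) - g(xs), 3))   # (x - 1)^2: nonzero away from x0, the functions differ
```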

 
