Machine learning in trading: theory, models, practice and algo-trading - page 1703

 
mytarmailS:

When you think (when your intellect solves a problem), do you have to communicate with someone at that moment?

You still can't set aside your "worm" definition of intelligence, so now we are talking in different languages.

Perhaps my understanding of AI is flawed. But, in general, AI is precisely a system that interacts with humans. If an AI doesn't interact with humans by giving them a convenient voice and visual interface, but instead works in stealth mode, then it's just a program. Isn't it?

And a program can run on neural networks, but not be AI.
 
Tag Konow:
Doesn't such over-sensitivity to the data seem like a disadvantage? I've heard that NSs recognizing road signs get them wrong if the sign has a small sticker on it. Maybe that super-sensitivity isn't necessary?

This is a little different. When you make a mistake in training, you are essentially teaching it something you yourself are not aware of. Remember, an NS is like a child: it has to be told thoroughly exactly what you want it to do. In the end you will interpret the result within the framework of your own ideas about the training, while in reality those ideas were blurred, as one possible explanation.

Tip of the day. Place the input vectors on a sphere around the origin of the coordinate system; this lets you achieve unambiguity and avoid inconsistency.
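A minimal sketch of one way to read that tip (my own illustration, not code from the thread): L2-normalize every input vector so that all inputs lie on a sphere of fixed radius around the origin; two inputs that differ only in scale then map to the same point, which removes that source of ambiguity.

```python
import numpy as np

def project_to_sphere(X, radius=1.0, eps=1e-12):
    """Scale each row of X so all input vectors lie on a sphere
    of the given radius centered at the origin (L2 normalization)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return radius * X / np.maximum(norms, eps)

# Toy usage: two inputs that differ only by scale collapse to the same point.
X = np.array([[1.0, 2.0, 2.0],
              [2.0, 4.0, 4.0]])
print(project_to_sphere(X))  # both rows become [0.333, 0.667, 0.667]
```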

 
Tag Konow:
Perhaps my understanding of AI is flawed. But, in general, AI is precisely a system that interacts with humans. If an AI doesn't interact with humans by giving them a convenient voice and visual interface, but instead works in stealth mode, then it's just a program. Isn't it?

YES!!!!!

That's exactly what I'm saying!




Intellect is a self-modifying, self-adjusting algorithm for selecting and transforming information, as a result of whose action information modules emerge that were not previously known to the subject and did not come to it ready-made from outside.



This is not the mind, it is the only known way of thinking, and you are confusing it with the mind.

You confuse the saddle of a bicycle with the bicycle itself and want the saddle to ride!

 
Mihail Marchukajtes:

This is a little different. When you make a mistake in training, you are essentially teaching it something you yourself are not aware of. Remember, an NS is like a child: it has to be told thoroughly exactly what you want it to do. In the end you will interpret the result within the framework of your own ideas about the training, while in reality those ideas were blurred, as one possible explanation.

Tip of the day. Place the input vectors on a sphere around the origin of the coordinate system; this lets you achieve unambiguity and avoid inconsistency.

Ok. I know that an NS is trained to catch a recurring invariant in the data. This is essentially a statistical approach. So why does a small, one-time error have such a significant impact on training? A person wouldn't notice it and would forget it, but the network's training breaks down?
 
Tag Konow:
Ok. I know that an NS is trained to catch a recurring invariant in the data. This is essentially a statistical approach. So why does a small, one-time error have such a significant impact on training? A person wouldn't notice it and would forget it, but the network's training breaks down?

Not all mistakes are alike. A small mistake can have a big impact.

And the NS is not being asked to pick up repeating data. It is being asked to identify hidden patterns so that it can produce the right result even where there is no repeating data. That is what generalization means. Say we have a finite domain of data, but hold only 50% of it: the network learns, and having identified a pattern it can reconstruct the rest of the data that it has never seen. It's like restoring old video footage with missing pixels, which the network fills in on its own.
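As a toy illustration of that idea of generalization (a sketch with made-up data, not anyone's actual setup): train a small network on a randomly chosen half of a known function's domain and check how well it reconstructs the points it never saw.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 400).reshape(-1, 1)
y = np.sin(x).ravel()

# Show the model only 50% of the domain, chosen at random.
mask = rng.random(len(x)) < 0.5
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(x[mask], y[mask])

# Evaluate on the points the network has never seen.
unseen_mse = np.mean((model.predict(x[~mask]) - y[~mask]) ** 2)
print(f"MSE on the unseen half of the domain: {unseen_mse:.4f}")
```

If the network has really caught the underlying pattern rather than memorizing points, the error on the unseen half stays small.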

 
mytarmailS:

YES!!!!!

That's exactly what I'm saying!




Intellect is a self-modifying, self-adjusting algorithm for selecting and transforming information, as a result of whose action information modules emerge that were not previously known to the subject and did not come to it ready-made from outside.



This is not the mind, it is the only known way of thinking, and you are confusing it with the mind.

You confuse the saddle of a bicycle with the bicycle itself, and want the saddle to ride!

I'm not confused. I clearly separate programs based on conventional algorithms, programs based on neural networks, AI, and Reason.

AI, unlike a closed program built on neural networks, interacts directly with humans and is programmed by them in the course of that interaction. A program built on an NS works in closed mode and is limited in its perception of external data.

The definition of Intelligence is correct.
 
Aleksey Vyazmikin:

It probably does outperform it, but there, in competitions, the sample is stationary and there are no especially trashy features, i.e. the conditions are not the ones we work with, and I keep thinking about how best to prepare data with these peculiarities in mind. (There is no final solution yet, but it is an important task.)

Models built from different trees are good, but so far there is no way to export them to a separate file, and therefore no way to embed them in an Expert Advisor, which is bad.

I gave you a link about viewing the splits from the JOT data. There the full model is exported to a file, and the splits are then read from it.

Aleksey Vyazmikin:

I don't like the lack of post-processing in boosting: after training is completed, the model could be simplified by discarding weak trees. I don't understand why they don't do this.

In boosting, by definition, all trees are important. Each successive tree refines all the previous ones. If you throw out one tree in the middle, all the trees that follow it will be working with incorrect data; they would have to be retrained without taking the discarded tree into account. And then the first retrained tree would very closely replicate the discarded one.
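A minimal sketch of why a tree in the middle cannot simply be thrown away (plain residual boosting on synthetic data, not the specific library discussed here): each tree is fitted to the residuals left by the sum of all previous trees, so every later tree silently assumes that tree's contribution is still in the sum.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 500)

learning_rate = 0.1
prediction = np.zeros_like(y)
trees = []
for _ in range(100):
    residual = y - prediction                        # what the previous trees still miss
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
    prediction += learning_rate * tree.predict(X)    # each tree refines the running sum
    trees.append(tree)

# Dropping tree #50 shifts the sum that every later tree was fitted against.
partial = sum(learning_rate * t.predict(X) for i, t in enumerate(trees) if i != 50)
print("MSE of the full ensemble:  ", np.mean((y - prediction) ** 2))
print("MSE with tree #50 removed: ", np.mean((y - partial) ** 2))
```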

Aleksey Vyazmikin:

Leaves of individual trees in boosting are weak - their completeness (recall) is small, less than 1% - and it is bad that this parameter cannot be adjusted,

Yes. Individual leaves in boosting are incomplete, because they are supplemented by the responses of leaves from the other, refining trees. Only the aggregate of the answers, e.g. from 100 trees, gives the correct answer.
Trying to get something reliable out of a single leaf of a boosting model is impossible.
In boosting, all 100 answers from the 100 trees are summed; each gives, for example, 0.01, so the total = 1. The value of one leaf = 0.01 - what do you expect to get out of it? There is nothing in it. Only the sum of the 100 leaves gives the correct answer.
In practice the 1st tree there is strong and gives, say, 0.7, while the rest nudge the sum towards 1. The leaves of the first tree could be considered on their own, but I think they are weaker than any tree from a random forest because of the smaller depth.
A random forest takes an average: e.g. every leaf of the 100 trees = 1, and the average is also = 1. There the leaves are complete, but with random variation, and a crowd of 100 answers gives, on average, a fairly accurate answer.
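A back-of-the-envelope sketch of that aggregation difference, using the same made-up numbers: boosting sums many small additive contributions, while a random forest averages full-scale predictions from each tree.

```python
import numpy as np

# Boosting: each of 100 trees adds a small correction; one leaf alone (~0.01)
# carries almost no information, only the sum (~1.0) is the answer.
boosting_leaf_values = np.full(100, 0.01)
print("boosting answer:", boosting_leaf_values.sum())

# Random forest: each of 100 trees predicts the full target on its own
# (about 1, with random variation); the average of the crowd is the answer.
forest_predictions = 1.0 + np.random.default_rng(2).normal(0.0, 0.2, 100)
print("forest answer:  ", forest_predictions.mean())
```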

 

Like a real trader, I caught two "moose" (losses), which forced me to re-shoe my model. Trading is a thankless business :-)


 
Tag Konow:
I'm not confused. I clearly separate programs based on conventional algorithms, programs based on neural networks, AI, and Reason.

AI, unlike a closed program built on neural networks, interacts directly with humans and is programmed by them in the course of that interaction. A program built on an NS works in closed mode and is limited in its perception of external data.

I give up ...

 
In general, the interaction between man and machine is called an interface, and it is developed according to the rules of ergonomics, but that is a different les......
Reason: