I have a crazy idea.
I don't think you will argue that neural networks are much, much "dumber" than the human brain.
And yet using them in trading makes the picture as a whole that much simpler.
IMHO...
Why are you so obsessed with these neural networks? What do you want, forex or neuro? :)
And what do you mean by "makes the picture as a whole simpler"?
When the brain analyses, far more neurons are involved: associations, memory, motivation, visual perception, all much more complex than a neural network. That is what I mean by "the picture as a whole".
So if a neural network can take over part of this work, the system as a whole becomes simpler, and therefore, in your opinion, better.
A forum thread about neural networks.
I'm not obsessed with them, but I think there is a prospect in using them.
I don't see the point in pouring money in purely on an "I think so" basis. I want to find support in technical analysis and mathematics. In short, to "lay down some straw" to soften a possible fall.
See tab 24 in this thread, 3rd post...
It is not clear how forex, or any financial time series, becomes simpler or better when an artificial neural network with far fewer neurons than the brain is used. How will the work be divided between the brain and the neural network?
To summarize all my statements about neural networks, I can say this to all the lovers of neuro-frills:
there are algorithms that are simpler than, and no less effective than, these proverbial networks.
An analogy begs to be made: the clever, sharp-sighted owner of many neurons stared at the "object" and saw many, many coloured dots. Then the dim and short-sighted one came up. He could not make out the dots; they were beyond him. But he saw the whole in perspective, as a "portrait of a young man", for instance...

While studying the material I got the impression that the simpler, the better; that a "lack" is better than an "excess". More often than not a deficit (within reasonable limits, of course) makes the net learn, while an excess makes it memorize. My own experiments (purely research, read: laboratory work) have shown that if there is a pattern in the data fed to the net, a simple single-layer perceptron learns it in just a few epochs. But if the data are junk, with no pattern in them, you can stack at least ten layers, throw in a couple of Elman (or Jordan) layers and the Levenberg-Marquardt method, and still nothing will come of it.

I should correct Debugger: input data are important, but not by themselves; they matter "in relation" or "relatively". Still, the special case described by Debugger also has a right to exist, and "raw" prices may well be fed into the net. Approximators and regressors, for example, work with them.
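To make that last point concrete, here is a minimal sketch in Python with NumPy (the toy dataset, learning rate and epoch counts are my own illustrative choices, not taken from this thread). A classic single-layer perceptron converges in a handful of epochs when the labels follow a linearly separable pattern, and never settles when the labels are random noise:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron rule: w <- w + lr * (y - yhat) * x."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for epoch in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            yhat = 1 if xi @ w + b > 0 else 0
            if yhat != yi:
                w += lr * (yi - yhat) * xi
                b += lr * (yi - yhat)
                errors += 1
        if errors == 0:          # a full clean pass: the pattern is learned
            return epoch + 1
    return None                  # never converged within the epoch budget

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))

# Case 1: labels follow a pattern (linearly separable) -> converges quickly.
y_pattern = (X[:, 0] + X[:, 1] > 0).astype(int)
print("pattern:", train_perceptron(X, y_pattern), "epochs to converge")

# Case 2: random labels (no pattern) -> no convergence, however long you train.
y_noise = rng.integers(0, 2, size=200)
print("noise:", train_perceptron(X, y_noise, epochs=100))
```

No extra layers or fancier optimizers will fix case 2, which is exactly the point: the bottleneck is whether the data contain a pattern, not the machinery.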
2 TimeMaster. I agree with you. When I started digging into the neuro subject myself, as I remember it now, I had no problem at all with the question of what to teach; the main question was how to teach. As I mastered the subject, the questions swapped places: the second one dissolved. At the moment I know how to teach the net, but I have no idea what to teach it.
Here's an example... Although I have some doubts about showing it; I'm afraid of being sweepingly condemned... Oh, come on. This is a "lab". The net is a standard (you can't get more standard) single-layer perceptron.
The point here is simple: it's the multiplication table )). I made the table by hand, multiplying all the digits from 1 to 9 (every pair from 1x1 to 9x9), which gives 81 examples. I moved 16 examples into a separate file. The first file (65 examples) is fed to the net for training; the second (16 examples) is used for cross-validation. What is cross-validation? It is a check of the net's ability to learn, run on unknown data during training itself.

The left graph is training. The right graph is cross-validation, i.e. running the net on data it has never seen. And what do we see? The cross-validation is perfect: the net gave exactly the right answers for products it had never been shown. In other words, the net HAS learned. Hence the first conclusion: the net can learn. And the second: since the net CAN learn, if a net does NOT learn, the problem is not in the net at all. Alas...
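For anyone who wants to repeat the lab, here is a rough reconstruction in Python with scikit-learn (my choice of library; the post does not say what package was used). The 65/16 split follows the post, but the 16 tanh hidden units, the input/target scaling and the random seeds are my own assumptions; a purely linear single-layer net cannot represent a product exactly, so some nonlinearity is needed here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# The full multiplication table: 81 examples of (a, b) -> a * b.
pairs = np.array([(a, b) for a in range(1, 10) for b in range(1, 10)], dtype=float)
targets = pairs[:, 0] * pairs[:, 1]

X = pairs / 9.0      # scale inputs into (0, 1] so tanh units don't saturate
y = targets / 81.0   # scale targets the same way

# Hold out 16 examples for cross-validation, train on the remaining 65.
rng = np.random.default_rng(42)
idx = rng.permutation(len(X))
train_idx, cv_idx = idx[:65], idx[65:]

net = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X[train_idx], y[train_idx])

# "Cross-validation": ask for products the net has never seen.
preds = net.predict(X[cv_idx]) * 81.0
for (a, b), t, p in zip(pairs[cv_idx], targets[cv_idx], preds):
    print(f"{int(a)} x {int(b)} = {int(t):2d}, net says {p:6.2f}")
```

How close the held-out answers come to the exact products will vary with the seed and the architecture, but on data this regular a small net generalizes well, which is the author's point.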