
to forte928
At the moment there is a first factor on the basis of which one can conclude that EURUSD is in a sideways flat: the pair has reached the consolidation level of the OP line at 1.4850. Exactly the same fluctuations were observed at the points 1.3437 and 1.3937, with subsequent rollbacks corresponding to the 0.382, 0.618 and 1.00 levels.
The second figure shows the same chart, but with growth levels calculated relative to the low of 1.4162 and the current 1.4951. If you take the 1.4951 and 1.4851 levels on this price chart, you can see that the price is sitting right at the balance point, at the average level of fluctuation of these indicators over the last two days... Further, the saturation indicator has long been showing a saturation level at which a reversal should occur,
but there are a couple of things that do not allow this to happen:
1) the daily chart shows negative growth movement (the bottom indicator)
2) the daily chart has reached the 0.382 consolidation level at 1.4877 on the first indicator
3) the daily chart has reached the COP consolidation level at 1.4892 on the second indicator
4) there is active resistance to upward movement on the H4 chart
5) there are two consolidation levels relative to the end-of-September low, OP and 0.236 (1.4931 and 1.4933), which is a strong indication of a prolonged correction
To be continued...
Thanks a lot for the clarification. I must say I gave up TA a few years ago (for myself), but it is always interesting to read competent analysis and compare it with my own predictions. Could you clarify the term "consolidation level", for better understanding and to avoid confusion in terminology?
to lea
Have you ever tried to look for critical points of the time series?
No, I haven't, and so far I have no idea how to find them. I used such a notion as "time series memory". This is a somewhat specific term; it can be found in neural networks and fractal analysis, but you should always look at the context of its application. I mean the influence of historical samples on future realizations of the process. Simply put, this parameter answers the question "how long a historical series to take".
PS: By the way, I remember you promised to improve your linear library and post a new version...
to Yurixx
1. In MQL4 you can't operate on an array whose size hasn't been set. If you didn't specify the size when declaring the array, you should do it in init(). Later, while the program runs, you can change the size as needed.
I don't get it here. I don't do that and everything works, I mean without initialization in init()
2. Lea's advice is quite practical, it's worth heeding. It's quite possible that you'll just want to allocate space and have a variable with the index of the last item. Then it really doesn't matter if you know the number of elements you need or not.
I don't think it's very practical, because more calculations will obviously be needed. Additional loops would have to be added to the scheme above. But all the same, I'd have to check it all; if only we could get some recommendations from the developers... :о)
In general, for the advice to be adequate, you'd better explain more precisely what the array is used for and why it's necessary to change its size.
For example (and this is the simplest example), finding local extrema (not on the chart) by the condition y[n]>y[n+1] and y[n]<y[n-1], and correspondingly for a minimum. I understand that it can be solved in several ways, e.g. like this:
- Create an array of the same length as the initial series and encode the presence of an extremum in it with 0s and 1s.
- Run a first pass that counts the number of extrema.
- Recalculate the number of extrema.
- Initialize the array to this size.
- Write the values into the new array.
You can do it this way, or you can do it that way; I'm trying to figure out the best way :o)
I don't play the shell game. But I prefer variant 2. Or maybe I just want the EUR to grow? :-)
As you noticed, I wasn't suggesting a shell game as such. Everything is out in the open; I was just curious about your opinion (I read your prediction on a neighbouring thread).
But variants 1 and 3 are OK too, although they don't differ much from each other.
"Divergent vector" shift in average price value
to Urain
In my experience, I recommend declaring and using arrays right where they are needed: such arrays are mostly local and use memory dynamically, which is better than static arrays. It makes no sense to statically reserve a lot of space for them, especially if an array is small. The MQL4 compiler is set up in such a way that you won't feel any difference between declaring an array with an explicit size and with a deferred one.
It seems there are no static/dynamic "pointers" in MQL to where an array is stored. There's only one resize operation; the only question is whether calling it many times slows things down for a large array. Or does it? Or maybe I'm missing something again?
to markeeteer
filter information from the forum
Oh, that's a valuable quality. I can assure you - I have the best adaptive filters. :о)
If you describe your task in detail (you can write to me in a private message), we (I) will figure out the best way to implement it.
I'll think about it, but for now I want to figure it out myself. After all, one should understand at least something in MQL to be able to explain the problem at all :o).
And this used to be such a quiet thread. (
>> I think it will stay that way. Colleagues are just still in combat mode :o)
to Yurixx
I don't really get it here. I don't do that and everything works, I mean without initialization in init()
Initializing an array is one thing, declaring its size is another. If you declare an array as Arr[], only one element is allocated in memory. You can work with it as much as you like, and the system will not report an error when you access elements with index > 0, but the calculations will be incorrect. To make everything work, you need to set a specific size with the ArrayResize() function. When memory is allocated this way, all elements are filled with zeros, so if you don't need anything special you may even skip initialization (although good style requires it).
I don't think it's very practical, as it's quite obvious that more calculations are involved. Additional loops would have to be added to the scheme above. But anyway, we'd have to check it all; if only we could get some recommendations from the developers... :о)
Lea's advice doesn't lead to more calculations. Work through it carefully. And if you manage to get the developers involved in this elementary question, you'll be a hero. :-)
For example (and this is the simplest example), finding local extrema (not on the chart) by the conditions y[n]>y[n+1] and y[n]<y[n-1], and correspondingly for the minimum. I understand that it can be solved in several ways, e.g. like this:
This is called scratching your right ear with your left hand. I do this all the time in my programs, but I work in a single pass so as not to waste computing resources and time. The law is: win in memory, lose in time and computation. Personally, I think memory is less critical for trading. So you can safely create even two arrays of the same length and write the extremum value into one and its coordinate into the other, right as they form.
Sergei, you'd better start with the most complicated scenario. Otherwise I won't understand what all the commotion is about. :-)))
I advise treating Urain's advice "to declare and use arrays right where they are needed" with great caution. How arrays are used is determined by the nature of the task, not by fighting the swap file.
to lea
No, I haven't looked for such points and have no idea how to find them yet. I have used such a notion as "time series memory". This is a somewhat specific term; it can be found in neural networks and fractal analysis, but you always have to look at the context of its application. I mean the influence of historical samples on future realizations of the process. Simply put, this parameter answers the question "how long a historical series to take".
PS: By the way, I remember you promised to improve your linear library and post a new version...
I see, thanks for the answer.
I basically brought the work on the library to its final stage two months ago (threw out unnecessary functions, reworked existing ones). Although I still haven't done the matrix condition number calculation. I'll be freer in about two weeks, and then I'll try to get it done.
I started writing an article at the time but didn't have enough time. At present, descriptions for 50% of the functions are ready (that is 6 groups of functions out of 16; for now I'll write only the documentation for the functions, examples of their usage will follow).
I'm not very good with predictions so I decided to experiment with M1 today, starting on Monday:
I can't see the main chart :)
But re-optimisation every minute and a forecast for 3 hours ahead.
How do you re-optimise? And where do you get your forecasts, not from Deductor by any chance?
Poor Piboli has been asked this four times already ^_^. Yes, he makes his forecasts in Deductor, and the forecast is more or less the same (the one with the maximum entropy) :o) A small refinement: the following trajectories remain, with the "channeled" one being the most likely.
to lea
I see.
I did my best :o)
Basically, I brought the library to its final stage two months ago (took out unnecessary functions, reworked existing ones). Although I still haven't done the matrix condition number calculation.
But it says there
The code will be posted later.
But the version hasn't changed :o(