Lorentzos Roussos
- Information
9+ years experience | 0 products | 0 demo versions | 206 jobs | 0 signals | 0 subscribers
💎 𝗦𝗼𝗺𝗲 𝗼𝗳 𝗺𝘆 𝘁𝗼𝗼𝗹𝘀
🔹 Harmonic Patterns Scanner (EA, one chart, multipair, multitimeframe). Download the 𝟭𝟬𝟬% instant + free demos below to test them (𝗼𝘂𝘁𝘀𝗶𝗱𝗲 𝘁𝗵𝗲 𝘁𝗲𝘀𝘁𝗲𝗿).
-Demo for MT4: https://c.mql5.com/6/908/Voenix_Demo_MT4.ex4
-Demo for MT5: https://c.mql5.com/6/908/Voenix_Demo_MT5.ex5
🔹 Market Profile or Volume Profile indicator for MT4. As before, a 100% instant + free demo is available (𝗼𝘂𝘁𝘀𝗶𝗱𝗲 𝘁𝗵𝗲 𝘁𝗲𝘀𝘁𝗲𝗿). The demo has a welcome screen, which also disappears after 30 seconds.
-Demo for MT4: https://c.mql5.com/6/888/ForexMarketProfile_Demo.ex4
💎 𝗖𝗼𝗻𝗻𝗲𝗰𝘁
🔹 Telegram channel:
https://t.me/lorentzor
🔹 YouTube channel:
https://www.youtube.com/channel/UCM0Lj06cAJagFWvSpb9N5zA
🔹 ForexFactory:
https://www.forexfactory.com/lorio

Lorentzos Roussos
From version 3770, your users will have to be logged in to their terminals for the products to operate.
Expect an increase in post engagement.
If they are logged in constantly, they will get more message notifications, which will lead to more browsing, more replies, more downloads, etc.
It may also lead to unsubscriptions from groups and channels if you spam, because now they are more likely to see it.
So reduce spam, and update your products strategically (they get notifications for that too).
Great move by MQ.
PS: Also, in your products' comment sections, prefer to answer publicly rather than DM users if there is more than one commenter (I think they receive notifications for that too).
In conclusion, if spam was your strategy so far, you need to drop it.
PS2: To update your products strategically, you will have to test what works. On one hand you have the terminals that are online only during weekdays, and on the other those that are constantly online. The truth is that if you update on a weekend it is more likely to propagate wider in notifications, but you'll have to test. Regardless, you can adjust the "hour" of the update based on where most of your downloads are coming from.

Lorentzos Roussos
I don't understand the claim:
"The modern way of life is stressful for humans."
Let's go back 10,000 years: you are in an area in the woods with other people.
1st: There are a million things that can kill you, and you are unaware of them.
2nd: You must be ready for a predator or a known threat at all times.
3rd: Your only source of heat also declares your location to the threats.
4th: During the day, half the tribe has to venture out and get food. If you are venturing out, you are exposed; if you stay behind, you are outnumbered and must be more alert.
Stress is why we got here; it's in our DNA. At some point there must have been a human that saw a tiger and thought "What a cute cat, let me pet it"... probably extinct.
So if we had a neural network and a genetic algorithm, stress would be a mechanism in between: a tendency or an urge to do something. I wonder how it translates to code.
Being stubborn, or always choosing the new thing, or anything in between, can live in a network's nodes via the learning-rate mechanism, but stress is totally different.
What is stress, though? It means that you know what must be done if something happens, or that you don't know the environment at all. In machine learning this is an absurd mechanism, because you are placing the "machine" in an environment it does not know while it has access to all the answers.
Could it be that if you find 2 training samples that have similar or close features but lead to different outcomes, these samples have a "higher" level of "stress"? And eventually, when the network is called to decide between them, it can use the stress as a feature so that it can "enter" a different mode (but without memorizing anything either)?
And then the problem is that when it tries to forecast, it does not have the stress feature as input, because neither we nor it knows the answer yet. So we then deploy a "stress map" and measure stress by the similarity of the features to the stressful samples' features.
In other words, it "thinks": "Wait a minute, I have seen a big cat hunting Bob before; maybe I should be silent."
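The "stress map" idea can be sketched in code; here is a minimal toy where the data, the neighbourhood radius, and the similarity measure are all my assumptions, not the post's:

```python
# Hedged sketch of the "stress map": find training samples whose features
# are close but whose outcomes disagree, mark them "stressful", then at
# forecast time score a new sample's stress by its similarity to those
# stressful samples. Toy data and radius are illustrative assumptions.
import numpy as np

X = np.array([[0.10, 0.10],   # features of 4 training samples
              [0.90, 0.90],
              [0.50, 0.50],
              [0.52, 0.50]])
y = np.array([0, 1, 0, 1])    # the last two are close yet disagree

# pairwise distances; a sample is "stressful" if a close neighbour
# (within `radius`) carries a different outcome
radius = 0.15
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
conflict = (D < radius) & (D > 0) & (y[:, None] != y[None, :])
stressful = conflict.any(axis=1)          # -> [False, False, True, True]

def stress(x):
    """Fraction of stressful training samples near x: the extra input
    the network could use to 'enter a different mode' when forecasting."""
    d = np.linalg.norm(X[stressful] - x, axis=1)
    return float((d < radius).mean())

print(stress(np.array([0.51, 0.50])))   # near the conflict zone -> 1.0
print(stress(np.array([0.10, 0.12])))   # deep inside one class  -> 0.0
```

The point is that the stress score is available at forecast time without knowing the answer, exactly because it is computed from feature similarity alone.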
"The modern way of life is stressful for humans"
Lets go back 10000 years , you are in an area in the woods with other people
1st : There are a million things that can kill you and you are unaware of them
2nd : You must be ready for a predator or a known threat at all times
3rd : Your only source of heat also declares your location to the threats
4th : During the day half the tribe has to venture out and get food , if you are venturing out you are exposed if you stay behind you are outnumbered and must be more alert
Stress is why we got here , its in our dna . At some point there must have been a human that saw a tiger and thought "What a cute cat let me pet it"...probably extinct.
So if we had a neural network and a genetic algorithm stress would be a mechanism in between , a tendency or an urge to do something . Wonder how it translates to code.
Being stubborn or always choosing the new thing , or in betweens can be in a network's nodes with the learning rate mechanism , but stress is totally different.
What is stress though ? It means that you know what must be done if something happens or you don't know the environment at all , in machine learning this is an absurd mechanism because you are placing the "machine" in an environment it does not know but has access to all the answers .
Could it be that if you find 2 training samples that have similar or close features and lead to different outcomes could these samples have a "higher" level of "stress" , and eventually when the network is called to decide between them it can use the stress as a feature so that it can "enter" a different mode (but without memorizing anything either)?
And then the problem is that when it tries to forecast it does not have the stress feature input because we don't know (or it) the answer yet . So , we then deploy a "stress map" and measure stress by the similarity of the features to stressful sample features .
In other words it "thinks" : "wait a minute i have seen a big cat hunting Bob before maybe i should be silent"

Lorentzos Roussos
The current version of "homo sapiens" is based on the generation that was based on tribal behavior. Besides the intellect, let's say besides the processing power it (the human) has, the tribal tendencies helped it overcome certain difficulties and amplified the common "intellect" through the sharing of knowledge and information.
Because our evolution happened under harsh conditions (still harsh, but we have tamed them), it is possible there were "hominids" with higher "processing power" than us, but they had the disadvantage of being isolated because they did not evolve tribal behavior, or because the knowledge acquisition of the many outpaced that of the "uber-hominid".
Or the uber-hominid was weak (you know, sort of like a natural nerd), vulnerable, and isolated, so a tribe of those could not compete with our tribe.
In other words: what could we have killed off, as the surviving generation of this biological machine?

Lorentzos Roussos
This will be the next neural network experiment:
1. In blue: We will create an autoencoder for the inputs, hoping to give it an "understanding" of the inputs domain in the squeeze (cyan box).
2. In green: We will create an autoencoder for the outcomes(!), hoping to give it an "understanding" of the outcomes domain in the squeeze (green box).
3. We will take the left side of the inputs autoencoder and the right side of the outcomes autoencoder. We will also include their "understandings", and then our job is to train only the bridge between the 2 understandings (in light yellow) without touching the rest.
For example, if you gave one network the task of recreating photos and another network the task of recreating descriptions of photos, and you had photos with descriptions in your training set, and then you frankensteined them like so, you could get a network that receives a description and creates a photo.
Why? So that you don't have to spend time trying to figure out how to tune the "understanding" in order to get the outcomes you want (like done in the autoencoder example here: https://www.mql5.com/en/blogs/post/752382 ).
☕️
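Steps 1-3 can be sketched at toy scale with linear autoencoders, solved in closed form (SVD/PCA) with the bridge trained by least squares; the rank-3 paired data stands in for the photos/descriptions, and all sizes and names are illustrative assumptions:

```python
# Toy "frankenstein" experiment: two linear autoencoders, then train
# ONLY the bridge between their latent squeezes, ends frozen.
import numpy as np

rng = np.random.default_rng(0)

def linear_autoencoder(X, k):
    """Closed-form linear autoencoder (PCA): returns (encoder, decoder)
    whose k-dim 'squeeze' best reconstructs X."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T, Vt[:k]            # encoder n->k, decoder k->n

# toy paired data, rank 3 so a 3-dim squeeze can represent it exactly
F = rng.normal(size=(200, 3))
X = F @ rng.normal(size=(3, 8))        # "inputs"  (e.g. descriptions)
Y = F @ rng.normal(size=(3, 6))        # paired "outcomes" (e.g. photos)

We_x, _ = linear_autoencoder(X, 3)     # step 1: keep LEFT side of input AE
We_y, Wd_y = linear_autoencoder(Y, 3)  # step 2: keep RIGHT side of outcome AE

# step 3: train only the bridge between the two "understandings",
# leaving the encoder and decoder untouched (least squares here)
Zx, Zy = X @ We_x, Y @ We_y
B, *_ = np.linalg.lstsq(Zx, Zy, rcond=None)

# frankensteined network: input -> input-squeeze -> bridge -> outcome
Y_hat = (X @ We_x) @ B @ Wd_y
mse = float(np.mean((Y_hat - Y) ** 2))
print(mse)                             # near zero on this linear toy
```

On this linear toy the frankensteined pipeline recovers the outcomes almost exactly because the paired data share a common latent factor; with real nonlinear autoencoders the bridge would be trained by gradient descent while the two ends stay frozen.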



Lorentzos Roussos

In this blog I'm exploring how the local memory operates with regard to a work group (of work items). We create a simple kernel that will export the IDs of a work item: global ID, local ID, group ID...
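The IDs such a kernel exports obey a simple relation that can be mimicked in plain Python (the 1-D sizes below are my assumptions, not the blog's): global_id = group_id * local_size + local_id.

```python
# Enumerate the work items of a 1-D NDRange and derive each item's
# local and group ID from its global ID (assumed sizes, for illustration).
global_size, local_size = 8, 4
rows = []
for gid in range(global_size):
    group_id, local_id = divmod(gid, local_size)  # gid = group*local_size + local
    rows.append((gid, local_id, group_id))
for gid, lid, grp in rows:
    print(f"global {gid}  local {lid}  group {grp}")
```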

Lorentzos Roussos
OpenCL + Nvidia, Oregon State University. ☕️
https://web.engr.oregonstate.edu/~mjb/cs575/Handouts/gpu101.1pp.pdf

Lorentzos Roussos

Hi. Short version: 1. CLGetInfoInteger(kernel, CL_KERNEL_WORK_GROUP_SIZE); will give you the number of kernel instances that can run at the same time on the device (or the compute unit) (tested with a 1-CU GPU only). It will also be the maximum possible number of items in a group. 2...

Lorentzos Roussos
Beautiful activation function curve.
The derivative is the activation value, which covers any extra calculations that keep the input range from -5 to 5.
😍☕️ (PS: don't forget the normalization derivative)
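One well-known reading of "the derivative is the activation value" can be sketched with standard identities (these are generic examples, not necessarily the exact curve in the post): for f(x) = e^x the derivative equals the activation itself, and for the sigmoid the derivative is computable from the activation alone, avoiding extra function evaluations during backpropagation.

```python
# Standard "derivative from the activation value" identities:
# sigmoid'(x) = a * (1 - a) where a = sigmoid(x), and (e^x)' = e^x.
# Checked against central finite differences.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x, h = 1.5, 1e-6
a = sigmoid(x)
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # finite difference
from_activation = a * (1 - a)                           # no extra exp() calls
print(abs(numeric - from_activation) < 1e-6)            # True

e = math.exp(x)                                         # activation IS derivative
print(abs((math.exp(x + h) - math.exp(x - h)) / (2 * h) - e) < 1e-4)  # True
```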


Lorentzos Roussos
Default execution vs OpenCL + GPU.
OpenCL is 5x faster, and the algorithm is still garbage!
The top-of-the-line GPU out there is 90x faster than the one that did this!
Source code here: https://c.mql5.com/3/407/imageBlurTest__1.mq5
Thread here: https://www.mql5.com/en/forum/446275
Thanks to William Roeder.
[Wait for the GIF to load 😇]

Lorentzos Roussos

Hi there. First things first, watch the following video. Good. Now, this is my first OpenCL program, so there may be issues in terminology etc., but the goal is to have the simplest example possible, not only because it's helpful but also because that's all I can do for now 😇...

Lorentzos Roussos
Header file with all the activation functions and all the derivative functions from MetaQuotes.
Dive in : https://c.mql5.com/3/406/VectorActivationFunction.mqh
From here : https://www.mql5.com/en/forum/445076#comment_46333973
☕️
Lorentzos Roussos

Warning: this is my first attempt at an autoencoder. I've seen a video or two on autoencoders, so I might butcher this. With that out of the way, let's pop open a beer and grab the keyboard...
Lorentzos Roussos

Read Part 1: The simplest XOR gate neural network. Okay. What do we have: nodes, layers, the net, and the feed-forward functions. So what does that mean in the spectrum of "a problem"...
Lorentzos Roussos

XOR gate example neural network in MQL5, as simple as possible 🍺 I'll assume you know what the XOR gate "problem" is. A quick refreshing pro schematic: the first 2 columns are the 2 inputs, and the third column is the expected result of that operation...
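A minimal sketch of such a net in Python rather than MQL5; the 2-4-1 sigmoid architecture, seed, and learning rate are my assumptions, not the article's code:

```python
# Tiny XOR network: 2 inputs -> 4 sigmoid hidden nodes -> 1 output,
# trained with full-batch backprop on the four truth-table rows.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # the 2 input columns
T = np.array([[0], [1], [1], [0]], float)              # expected XOR result

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)    # hidden layer
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)    # output layer

for _ in range(20000):
    H = sig(X @ W1 + b1)                 # feed forward
    Y = sig(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)           # backpropagate squared error
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ dY; b2 -= 0.5 * dY.sum(0)
    W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(0)

print((Y > 0.5).astype(int).ravel())     # typically [0 1 1 0] once trained
```

Full-batch gradient descent on the four XOR patterns; with too few hidden units and an unlucky init it can stall in a local minimum, which is part of why XOR is called a "problem" in the first place.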

Lorentzos Roussos
A node trying to separate the yellow class samples from the red class samples (dot = sample) 😂
That means:
a. I'm terrible at math 😂
b. The network believes "if you slap it around, it will work" 😂 (on a serious note, the display is the "world" the network creates)
Back to the drawing board...
(90 features, one node)


Eugen Funk
2023.04.01
Nice animation. Btw, you won't be able to separate them as long as you use X and y only as "features".

Lorentzos Roussos
2023.04.01
You mean I can't separate them on a 2-dimensional spectrum. (There are 90 features.) But it's one layer deep.

Lorentzos Roussos
A network with 1 "radial node" and 2 features. Its job is supposed to be to take all the green dots and place them in the top-right corner and all the red dots in the bottom-left corner. It tried, before collapsing. 😂☕️
(It collapses to a point at the center, (0,0); probably math-related.)


Lorentzos Roussos
2023.04.04
What is interesting to note here, if you wonder why the "altered spectrum" is so uniform: there was an error that led to the weights being initialized with the same values. So I guess if you want to maintain the order of the samples but also twist the spectrum, use the same values for the weights. Otherwise it looks like the other GIF posted after this.
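The uniformity has a simple mechanical explanation that a generic sketch can show (a general property of identical initialization, not the original experiment): when every weight starts at the same value, all hidden units compute the same function of the input, so the layer applies one shared monotone transform and the sample order survives; random init breaks that symmetry.

```python
# Generic demonstration: identical weight init -> identical hidden units
# -> an order-preserving ("uniform") altered spectrum.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 9).reshape(-1, 1)      # 1 feature, 9 samples

W_same = np.full((1, 5), 0.3)                  # all weights equal
H_same = np.tanh(x @ W_same)                   # 5 identical hidden units
print(np.allclose(H_same, H_same[:, :1]))      # True: columns identical

W_rand = rng.normal(0, 1, (1, 5))              # random init
H_rand = np.tanh(x @ W_rand)
print(np.allclose(H_rand, H_rand[:, :1]))      # False: units differ

# identical units => layer output is monotone in x (order preserved)
out_same = H_same.sum(axis=1)
print(np.all(np.diff(out_same) > 0))           # True
```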

Lorentzos Roussos
The quantum superposition: "a state is derived when it is observed."
This could be the compression mechanism of whatever our universe is running on.
Imagine you have a network with 1000 inputs. With the autoencoder method, you create a layer of 25 neurons and an output of 1000 neurons again. You train the network so that the 1000 outputs are as close to the 1000 inputs as possible. Then, if you take the network starting from the 25 neurons and you feed a value to it, you can describe (kind of) the entire data set that the original network trained on. So all the "states" are there, and what you need is returned. The 25 neurons and the outbound weights are the superposition, and all outcomes are possible without all the outcomes being stored.
For instance, you take the price action of all assets and you train an autoencoder. When you keep the autoencoder's trained layer only and adjust it, it will create another possible, in theory realistic, chart.
The universe is very efficient; the question is, are we the training set? 😎
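The 1000→25→1000 idea scales down to a toy you can actually run (8-dim data with a 2-dim "squeeze", linear and solved in closed form; all sizes and data are illustrative assumptions): keep only the squeeze layer and its outbound weights, then drive it with values to materialize samples that were never individually stored.

```python
# Toy sketch of "the squeeze + outbound weights are the superposition":
# fit a linear autoencoder 8 -> 2 -> 8 via SVD/PCA, throw the encoder
# away, and drive the 2-neuron layer directly to produce samples.
import numpy as np

rng = np.random.default_rng(0)

# training set that really lives on a 2-dim manifold inside 8 dims
F = rng.normal(size=(500, 2))
X = F @ rng.normal(size=(2, 8))

_, _, Vt = np.linalg.svd(X, full_matrices=False)
decoder = Vt[:2]                        # the 2 "neurons'" outbound weights

# reconstruct a training sample from its 2-value code: nearly lossless
code = X[0] @ Vt[:2].T
print(np.allclose(code @ decoder, X[0]))     # True

# "observe a state": any 2-value code decodes to a plausible sample,
# even though no individual outcome is stored anywhere
new_sample = np.array([0.7, -1.3]) @ decoder
print(new_sample.shape)                      # (8,)
```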