Downloading webpage source - trading from information online

 

I am trying to create an EA that trades off a signal service provider's alerts - eventually, anyway. For now I am just trying to program the functions that read in the necessary information. I am considering two approaches:

1. The alerts come via email, and I can download them into either a single .mbox file or separate .eml files. These would be copied to the /experts/files folder, the EA would read the information, delete the file afterwards, and wait for the next one (a sketch of this read/delete/wait idea follows this list).

2. The signals are also posted online at ..../signals.php. Viewing the source of the page shows the info in a nice table format, which should be easier to parse than the email (or so I think, since I haven't attempted either before).
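Whichever approach delivers the data, the read/delete/wait loop itself is straightforward in MQL4. A minimal sketch, assuming the signal ends up as a semicolon-delimited file called signal.csv in /experts/files (both the file name and the field order are my assumptions, to be adjusted to whatever actually gets written):

void ReadSignalFile()
{
   // Sketch only: file name and layout (symbol;direction;price) are assumed
   int handle = FileOpen("signal.csv", FILE_CSV|FILE_READ, ';');
   if(handle < 1)
      return;                                   // no new signal file yet, try again on the next tick

   while(!FileIsEnding(handle))
   {
      string sym       = FileReadString(handle);   // e.g. "EURUSD"
      string direction = FileReadString(handle);   // e.g. "BUY" or "SELL"
      string price     = FileReadString(handle);   // entry price as text
      Print("Signal: ", sym, " ", direction, " @ ", price);
      // ...convert the strings and place the order here...
   }

   FileClose(handle);
   FileDelete("signal.csv");                    // delete it so the EA waits for the next file
}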


I read an article here, 'Displaying a News Calendar', which explained how to get GetRight to download a .csv file at regular intervals so the information can be read by the user program. The site I am using requires login information (which I have). I tried configuring GetRight the usual way, entering the address of the login form page (which is one level back from the page I am trying to download) along with my credentials, but it doesn't seem to work.


When you click 'view source' in Firefox etc., what does Firefox convert the .php file into? Does the HTML actually exist on the server? Can you think of another way to get GetRight to log in, or any other way for me to download the webpage automatically, as GetRight can (or could, in theory)? I am not against method 1, but so far I have had difficulty writing and setting up scripts that execute automatically every 5 minutes or so, and calling fetchmail makes command windows pop up, etc.


Any help would be much appreciated.



whitebloodcell,

For downloading the news file for my news-trading expert I use File Downloader instead of GetRight. It also supports authentication, although I have never tried that. You can get this command-prompt downloader from http://noeld.com/programs.asp and check the readme file; there are good examples of how to use it. If you find it useful, put download.exe into your Windows folder and then call it from MQL with ShellExecuteA(0, "Open", "download.exe", DownloadCmd, "", 0); where DownloadCmd will be a string containing your URL, user, password, etc.
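To flesh that out a little, here is a minimal sketch of how the call might be wired into an EA, including the WinAPI import that ShellExecuteA needs. The 5-minute interval is my assumption (to match the polling mentioned in the question), and DownloadCmd is left as a placeholder since the exact parameter format is described in download.exe's readme:

#import "shell32.dll"
   int ShellExecuteA(int hwnd, string lpOperation, string lpFile, string lpParameters, string lpDirectory, int nShowCmd);
#import

datetime lastDownload = 0;

int start()
{
   // Trigger the downloader at most once every 5 minutes (assumed interval)
   if(TimeCurrent() - lastDownload >= 300)
   {
      string DownloadCmd = "...";   // URL, user, password etc. - see download.exe's readme for the format
      ShellExecuteA(0, "Open", "download.exe", DownloadCmd, "", 0);
      lastDownload = TimeCurrent();
   }

   // ...then read the downloaded file from /experts/files as usual...
   return(0);
}

The 0 in the last argument corresponds to SW_HIDE, so the downloader runs without the pop-up command windows mentioned in the question.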

Good luck!
