Discussion of article "Third Generation Neural Networks: Deep Networks" - page 3

 

To be continued.

4. Affinity propagation (AP) clustering, see http://dx.doi.org/10.1126/science.1136800 

> library(apcluster)
> d.apclus <- apcluster(negDistMat(r=2), x)
> cat("affinity propogation optimal number of clusters:", length(d.apclus@clusters), "\n")
affinity propogation optimal number of clusters: 34 (!?)
> heatmap(d.apclus)
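
If 34 clusters looks excessive, the apcluster package also offers apclusterK(), which searches for a preference value yielding (roughly) a prescribed number of clusters. A minimal sketch, assuming the same data matrix x as above (K = 6 is an arbitrary choice):

library(apcluster)
# ask for approximately 6 clusters instead of letting the preference heuristic decide
d.apclusK <- apclusterK(negDistMat(r = 2), x, K = 6)
length(d.apclusK@clusters)   # number of clusters actually found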

5. Gap Statistic for Estimating the Number of Clusters. See also some code for a nice graphical output. Trying 2-10 clusters here:

> library(cluster)
> clusGap(x, kmeans, 10, B = 100, verbose = interactive())
Clustering k = 1,2,..., K.max (= 10): .. done
Bootstrapping, b = 1,2,..., B (= 100)  [one "." per sample]:
.................................................. 50 
.................................................. 100 
Clustering Gap statistic ["clusGap"].
B=100 simulated reference sets, k = 1..10
 --> Number of clusters (method 'firstSEmax', SE.factor=1): 6
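
For the graphical output mentioned above, the result can simply be stored and plotted with the plot method the cluster package provides for "clusGap" objects. A minimal sketch with the same x:

library(cluster)
gs <- clusGap(x, FUN = kmeans, K.max = 10, B = 100)
plot(gs, main = "Gap statistic")   # gap value with error bars for each k
# apply the same decision rule as in the printed output explicitly
maxSE(gs$Tab[, "gap"], gs$Tab[, "SE.sim"], method = "firstSEmax")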

6. For high-dimensional data

# Also for high-dimensional data there is the pvclust library, which calculates
# p-values for hierarchical clustering via multiscale bootstrap resampling.
library(pvclust)
library(MASS)
> x.pc <- pvclust(x)
Bootstrap (r = 0.5)... Done.
Bootstrap (r = 0.6)... Done.
Bootstrap (r = 0.7)... Done.
Bootstrap (r = 0.8)... Done.
Bootstrap (r = 0.9)... Done.
Bootstrap (r = 1.0)... Done.
Bootstrap (r = 1.1)... Done.
Bootstrap (r = 1.2)... Done.
Bootstrap (r = 1.3)... Done.
Bootstrap (r = 1.4)... Done.
> plot(x.pc)
> lines(x.pc)
> pvrect(x.pc)
> seplot(x.pc, type="au")

> pvpick(x.pc)
$clusters
$clusters[[1]]
[1] "DX"  "ADX"

$clusters[[2]]
 [1] "DIp"    "ar"     "cci"    "cmo"    "macd"   "osma"  
 [7] "rsi"    "fastK"  "fastD"  "slowD"  "SMI"    "signal"

$clusters[[3]]
[1] "chv" "vol"


$edges
[1] 11 12 13
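
The grouping reported by pvpick() depends on the significance threshold, so it is worth checking how stable it is under a stricter or looser alpha. A small sketch with the same x.pc object (the alpha values are arbitrary):

plot(x.pc)
pvrect(x.pc, alpha = 0.99)    # highlight only clusters with AU p-value >= 0.99
pvpick(x.pc, alpha = 0.99)    # stricter selection
pvpick(x.pc, alpha = 0.90)    # looser selection, for comparison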

The results range from 2 to 34 clusters (!?). The last calculation, with pvclust, gives what seem to me the most plausible results. Now we need to decide what to do with them.

 

vlad1949

The results range from 2 to 34 clusters (!?). The last calculation, with pvclust, gives what seem to me the most plausible results. Now I need to decide what to do with them.

Dear Vlad!

I have not managed to work through the code you have described, so please take me through it step by step.

The purpose of clustering.

From a given set of predictors, select those that have a relation to, and an influence on, a specific target variable. Moreover, each one, I emphasise each one and not the set as a whole, should have predictive power for a value within the class. That is, for the "long-short" class some predictor values relate more to longs, for example, and others relate more to shorts. I have already written that for the class "positive price increment - negative price increment" I could not find a single predictor with this property.

It follows that the clustering must split an individual predictor into clusters, and that is supervised clustering ("clustering with a teacher"). Unsupervised clustering is of no interest here.

PS.

This problem statement resembles the importance values produced by packages such as rf (randomForest), but all such values, without exception, are unusable here: these algorithms work just as well on sets of predictors that have no selective predictive power for each value of the class (a sketch of the kind of importance measure meant is given below).

Something like that.
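
For illustration only, a minimal sketch of the kind of per-predictor importance meant above, using the randomForest package (assumed to be the "rf" referred to); the data frame dat of predictors and the factor target class are hypothetical names:

library(randomForest)
# dat: data frame of predictors; class: factor target, e.g. "long"/"short" (hypothetical)
rf.fit <- randomForest(class ~ ., data = dat, importance = TRUE, ntree = 500)
importance(rf.fit)    # MeanDecreaseAccuracy / MeanDecreaseGini per predictor
varImpPlot(rf.fit)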

 
vlad1949:

I don't see any problems with a multicurrency Expert Advisor. If the Expert Advisor is multicurrency, it is even more convenient: in a multicurrency Expert Advisor there are restrictions with indicators, but here there are none. If instead there is a separate Expert Advisor per symbol, then calling R from each of them creates a new instance of R, and there are 32 such pairs in MT4 - far too many.

Testing: successful, although it is very slow.

[Deleted]  
Everyone can appreciate one of the best implementations of deep neural networks applied to image classification to date here:
MetaMind Vision Labs - General Image Classifier (www.metamind.io): a demo that lets you use a state-of-the-art classifier to automatically label an unseen image into one of 1000 pre-defined classes.
 
Could you compare what is here with what is here, at Reshetov's?
 

faa1947:
Could you compare what is here with what is here, at Reshetov's?

After that passage ("But VMR is already much stronger than a human"), I did not read any further.

And there is nothing to compare it with. I have not come across this theory, unknown to the world, or VMR (!?) anywhere on the Internet or in the literature.

 

vlad1949:


After that passage ("But VMR is already much stronger than a human"), I did not read any further.

I haven't read Pasternak, but I condemn him © Popular saying

Well, nobody forces you to read something you don't like. It's the Internet, not a compulsory school literature programme.

Therefore, there is no need to report to anyone on what you have not read. After all, if everyone starts publishing such reports, no forum engine will withstand it.

vlad1949:

And there is nothing to compare it with. I have not come across this theory, unknown to the world, or VMR (!?) anywhere on the Internet or in the literature.

It's a tough case. I am sorry for your loss.
 
Reshetov:

Therefore, there is no need to report to anyone on what you have not read. After all, if everyone starts publishing such reports, no forum engine will withstand it.

That's a solid five (top marks)! I can't imagine more subtle humour. :)
 

faa1947:
Could you compare what is here with what is here, at Reshetov's?

In all seriousness: it is not serious to compare the topic of "Deep Learning" with what is given in the blog and proudly called a "Theory". The former has been developed, and continues to be developed, by the efforts of two major universities; there are successful practical implementations; it has been tested by many people on real projects; and there is an implementation in R. For me, as a user, that last point is the most important.

The latter is the work of a single person (probably a talented programmer) that has not yet been brought to a practical implementation. The ideas voiced in the blog may well be productive, but that is work for researchers, not for users (traders). You can see from the comments that he is hurt by the misunderstanding of his great theory. This is normal; all inventors face misunderstanding. By the way, I had no intention of offending anyone.

Here is a suggestion: Discuss Reshetov's topic in his blog or in a separate thread (if he organises it).

Opinions and considerations on the topic of the article - "Deep Neural Networks" - are welcome here.

No offence.

Good luck

 
vlad1949:
I overreacted. I withdraw my suggestion.