Neural network and inputs

 

From what I've read, you seem to like the Hartley transform a lot. I wonder why? I cannot comment on your preference, but it looks very complicated to me. I think it is easier to decompose the series into trend and seasonal components, make a forecast for each separately, and return the sum to the chart.

IMHO.
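For illustration only, a minimal sketch of that decompose/forecast/sum idea in Python with statsmodels; the synthetic series, the period of 12 and the straight-line trend extrapolation are my assumptions, not anything from the post:

import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly-like series: linear trend + seasonality + noise (illustrative).
rng = np.random.default_rng(0)
t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, size=120)

dec = seasonal_decompose(series, model="additive", period=12)

# Forecast each component separately, then return the sum "to the chart":
# trend  -> extrapolated with a straight line fitted to its valid tail,
# season -> the last full seasonal cycle repeated.
horizon = 12
valid = ~np.isnan(dec.trend)          # the centred moving average leaves NaNs at the edges
a, b = np.polyfit(t[valid][-24:], dec.trend[valid][-24:], deg=1)
trend_fc = a * np.arange(120, 120 + horizon) + b
season_fc = dec.seasonal[-12:]        # aligned, since 120 and 108 fall on the same phase

forecast = trend_fc + season_fc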

I already wrote above that a neural network, or an ensemble of neural networks, is not the best model for forecasting, especially for non-stationary series.

Today there are more efficient models.

Predicting future values of a series (regression) is tempting but thankless. I spent a lot of time on it without any tangible result.

Now I do only classification, and the results are excellent.
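A minimal sketch of what that switch from regression to classification can look like: predict the direction of the next move rather than its value. The lag features, the random-walk stand-in series and the random-forest classifier are my illustrative assumptions:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(0, 1, 1000))   # stand-in for a price series

lags = 5
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
y = (price[lags:] > price[lags - 1:-1]).astype(int)   # 1 = next move up, 0 = down

split = 800                                 # train on the past, test on the future
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:split], y[:split])
print("out-of-sample accuracy:", clf.score(X[split:], y[split:]))

(On a pure random walk this hovers around 0.5, as it should; finding informative inputs is the hard part.)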

Good luck.

 
vlad1949:

From what I've read, you seem to like the Hartley transform a lot. I wonder why?

Now I do only classification, and the results are excellent.



I have no particular preference for Hartley; I chose it as the easier one to use, since it has no imaginary component.
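For reference, that real-in, real-out property is exactly the Hartley transform's convenience. A small sketch using the standard identity DHT(x) = Re(FFT(x)) - Im(FFT(x)):

import numpy as np

def dht(x):
    # Discrete Hartley transform: real input, real output (no imaginary part).
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(h):
    # The DHT is its own inverse up to a factor of N.
    return dht(h) / len(h)

x = np.random.default_rng(2).normal(size=64)
assert np.allclose(idht(dht(x)), x)   # round-trips entirely in real arithmetic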

Classification is powerful. I'm thinking of using classification to divide a training database into several parts, each part with its own class.

I used ensembles of neural networks in my experiments because I wanted to improve the accuracy of the forecasts; in the future I think I shall use ensembles for training on the classified training bases.
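One possible reading of that plan, sketched below: partition the training base into parts (here a clusterer stands in for the classifier, which is my assumption), train one ensemble per part, and route new samples to the model of their part. The toy data and model choices are illustrative only:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 6))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=600)   # toy target

# Step 1: "classify" the training base into parts.
splitter = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
parts = splitter.labels_

# Step 2: one ensemble per part of the classified base.
models = {c: RandomForestRegressor(n_estimators=100, random_state=0)
              .fit(X[parts == c], y[parts == c])
          for c in np.unique(parts)}

# Step 3: route each new sample to the model of its part.
def predict(X_new):
    labels = splitter.predict(X_new)
    out = np.empty(len(X_new))
    for c, m in models.items():
        mask = labels == c
        if mask.any():
            out[mask] = m.predict(X_new[mask])
    return out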

Thank you for your interest.

 

Responded in your thread.

"Application of neural network ensembles in excrement...." typo???

Good luck

 
Corrected.
 
vlad1949:
No. The ensemble is better than DT, MLP and SVM. The RF and ada figures are given next, and they are better.

For the sake of clarity it's probably better to take something simpler...
...say the classic Fisher's Iris, and have a look... + approximate calculation time...
(setosa = 1, virginica = 0, versicolor = -1 (other values are also possible))
Colour = input... black = output (the iris species) on the 1st screen... next...
On all screens blue = output... pink = model output...

50% of the chart = training sample... 50% = test sample...


Files: (screenshots of the results were attached in the original post)

KK (Kohonen maps):
15x15 = 2-3 sec
20x20 = 3-4 sec

MBS (nearest-neighbour method) = 1 sec or less

NS (neural network with hidden layers):
3 hidden layers (4-2-1) = 20 sec or more

This is where it gets interesting...
We can see the chatter around 0 (in our example we could remove it with a simple action (a filter), so as not to have to use a heavier NN later).
Probably something similar happens in your case... and accordingly the error comes out worse...
I do not know what the Accuracy criterion in R is or how it is calculated, so I may be wrong...

A heavier NN...
3 hidden layers (8-4-2) = 30 sec or more
it cuts (separates) better...

etc....


======================

Bottom line...

1. Equally successful solutions to classification problems are possible with different algorithms (a sketch follows below).
2. The time to solve the problem depends on the algorithm applied.
3. To classify data head-on, it's better to use algorithms specially designed for it...
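A rough way to reproduce points 1 and 2: same data, several algorithms, wall-clock fit time, with a 50/50 split as in the post. The model line-up is my substitution, since scikit-learn has no Kohonen maps (a random forest fills the third slot):

import time
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

for model in (KNeighborsClassifier(),                   # nearest-neighbour method
              MLPClassifier(hidden_layer_sizes=(4, 2),  # small NN, cf. the (4-2-1) above
                            max_iter=5000, random_state=0),
              RandomForestClassifier(random_state=0)):
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    print(f"{type(model).__name__:24s} acc={model.score(X_te, y_te):.2f} fit={elapsed:.3f}s")

All three reach comparable accuracy on Iris; the fit times differ by orders of magnitude, which is the point being made.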

 
Vizard:

It's probably better to take something simpler to illustrate the point...
...say the classic Fisher's Iris, and have a look... + approximate calculation time...
(setosa = 1, virginica = 0, versicolor = -1 (other values are also possible))
Colour = input... black = output (the iris species) on the 1st screen... next...
On all screens blue = output... pink = model output...

50% of the chart = training sample... 50% = test sample...



Here we go. Shall we classify the irises? Take specific data on our subject and give an example.

Why should we practise on irises? Let others practise on cats.

Arguing about the merits of methods is a thankless task: everyone has their own preferences. Personally, when choosing a method, I proceed from a simple premise: the method should work with various input data (both numeric and nominal) without any preliminary transformations. Let me explain why. There are a large number of preprocessing methods for input data (I know more than 20), and depending on the choice we get different results. So we need to select the optimal set of inputs, the optimal way to prepare those data, and the optimal method that gives the best result by some criterion. And if we cannot get rid of the first and the last, we should at least get rid of the second.

Regarding the question about the Accuracy criterion: it is the ratio of correctly classified cases of a certain class to the total number of cases of that class.
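As defined there, that per-class ratio is what scikit-learn calls recall; a minimal check (the toy labels are made up for illustration):

import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # true classes
y_pred = np.array([0, 0, 1, 1, 1, 2, 0, 2])   # classifier output

per_class = recall_score(y_true, y_pred, average=None)
print(per_class)   # class 0: 2/3, class 1: 2/2, class 2: 2/3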

Good luck

 
vlad1949:


Personally, when choosing a method, I proceed from a simple premise: the method should work with various input data (both numeric and nominal) without any preliminary transformations. Let me explain why. There are a large number of preprocessing methods for input data (I know more than 20), and depending on the choice we get different results. So we need to select the optimal set of inputs, the optimal way to prepare those data, and the optimal method that gives the best result by some criterion. And if we cannot get rid of the first and the last, we should at least get rid of the second.

You're a "scary man" )))
for this approach yes...a random forest is fine...
good luck...
 
Vizard:
You are a "scary man" ))))
for this approach yes...random woods are fine...
good luck...


By any chance, don't you mean RandomForest?