Discussion of the article "Neural Networks Made Easy" - page 5

 

There is an interesting video that was made a few years earlier. It's called "Find the 7 Differences" :-))))




 
Boris Egorov:
Please give an example ... it is not very clear how to use it.
Boris Egorov:

You don't understand, I am familiar with neural networks.

When there is no logic, I don't understand.

If you are familiar with them, then you know how to use them.

But if this is just a complaint about the form of the article, then I agree.

 
Stanislav Korotky:

When there is no logic, I don't understand.

If you are familiar with them, then you know how to use them.

But if this is just a complaint about the form of the article, then I agree.

Yes, exactly: about the article. Even though the article is really cool, not everyone knows this material, and I personally am too lazy to just take it and try it. It would eat up a lot of time, with no guaranteed result and with the inevitable rookie mistakes along the way. That's why I want to know "how it works": in the sense of trading results, at least on a simple model, and the speed of training that model on specific hardware. I assume that since the author has done this titanic work, he surely has at least some trained models, and he could show at least some results.

 
Stanislav Korotky:

If you are familiar with them, then you know how to use them.

That's not necessarily so ... there are a lot of subtleties here.

 
Boris Egorov:

That's not necessarily so ... there are a lot of subtleties here.

I gave links (and more can be found) to implementations of the same NN with usage examples. This is not the first article on this topic; there have been more detailed ones.

 
Stanislav Korotky:

I gave links (and more can be found) to implementations of the same NN with usage examples. This is not the first article on this topic; there have been more detailed ones.

Having read the whole thread, I don't want to repeat myself: can you tell me whether there is any connection to the "5 Whys" principle here?

And we probably shouldn't expect an answer from the author any time soon, not until he has spent the whole fee and returns to the forum :)

 

Author, please check the method bool CLayer::CreateElement(const uint index).

After the first addition of an element (neuron), m_data_total is still 0. That is not right.

At the very least, compare the method above with its counterpart: bool CArrayObj::Add(CObject *element).

Have you no conscience, posting such unverified code?
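
For reference, a minimal sketch of what the fix might look like, assuming CLayer::CreateElement writes straight into the inherited CArrayObj storage (m_data, m_data_total and m_data_max are the protected members of the standard library's CArray/CArrayObj; the CNeuron construction details are omitted here and purely illustrative):

bool CLayer::CreateElement(const uint index)
  {
//--- grow the underlying buffer if the requested slot does not exist yet
   if((int)index>=m_data_max && !Reserve((int)index-m_data_max+1))
      return(false);
//--- create the neuron (constructor arguments omitted in this sketch)
   CNeuron *neuron=new CNeuron();
   if(CheckPointer(neuron)==POINTER_INVALID)
      return(false);
   m_data[index]=neuron;
//--- the suspected missing step: keep the element counter in sync,
//--- just as CArrayObj::Add does with m_data[m_data_total++]=element;
//--- otherwise Total() keeps returning 0 after the first addition
   if((int)index>=m_data_total)
      m_data_total=(int)index+1;
   return(true);
  }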

I didn't want to say it, but I can't hold back any longer.

The author has ported the example from the video tutorial I linked to here. There is nothing wrong with that, but it would have been worth at least crediting the source of the base code.

Forum on trading, automated trading systems and testing trading strategies.

Discussion of the article "Neural Networks - it's easy".

Boris Egorov, 2020.01.27 04:56 pm

Yes, exactly: about the article. Even though the article is really cool, not everyone knows this material, and I personally am too lazy to just take it and try it. It would eat up a lot of time, with no guaranteed result and with the inevitable rookie mistakes along the way. That's why I want to know "how it works": in the sense of trading results, at least on a simple model, and the speed of training that model on specific hardware. I assume that since the author has done this titanic work, he surely has at least some trained models, and he could show at least some results.

There is an example in the video tutorial; the author of the article just decided not to bother. Here are the sources.
 

If you look at the original, it has this method:

void Net::feedForward(const vector<double> &inputVals)
{
    assert(inputVals.size() == m_layers[0].size() - 1);

    // Assign (latch) the input values into the input neurons
    for (unsigned i = 0; i < inputVals.size(); ++i) {
        m_layers[0][i].setOutputVal(inputVals[i]);
    }

    // forward propagate
    for (unsigned layerNum = 1; layerNum < m_layers.size(); ++layerNum) {
        Layer &prevLayer = m_layers[layerNum - 1];
        for (unsigned n = 0; n < m_layers[layerNum].size() - 1; ++n) {
            m_layers[layerNum][n].feedForward(prevLayer);
        }
    }
}

And here is the analogue in the article:

void CNet::feedForward(const CArrayDouble *inputVals)
  {
   if(CheckPointer(inputVals)==POINTER_INVALID)
      return;
//---
   CLayer *Layer=layers.At(0);
   if(CheckPointer(Layer)==POINTER_INVALID)
     {
      return;
     }
   int total=inputVals.Total();
   if(total!=Layer.Total()-1)
      return;
//---
   for(int i=0; i<total && !IsStopped(); i++) 
     {
      CNeuron *neuron=Layer.At(i);
      neuron.setOutputVal(inputVals.At(i));
     }
//---
   total=layers.Total();
   for(int layerNum=1; layerNum<total && !IsStopped(); layerNum++) 
     {
      CArrayObj *prevLayer = layers.At(layerNum - 1);
      CArrayObj *currLayer = layers.At(layerNum);
      int t=currLayer.Total()-1;
      for(int n=0; n<t && !IsStopped(); n++) 
        {
         CNeuron *neuron=currLayer.At(n);
         neuron.feedForward(prevLayer);
        }
     }
  }

The key line is the size check with the "-1" adjustment (it was highlighted in yellow in the original post). In this form the method does not work, because the original adds an extra bias neuron to each layer, while the port does not:

Net::Net(const vector<unsigned> &topology)
{
    unsigned numLayers = topology.size();
    for (unsigned layerNum = 0; layerNum < numLayers; ++layerNum) {
        m_layers.push_back(Layer());
        unsigned numOutputs = layerNum == topology.size() - 1 ? 0 : topology[layerNum + 1];

        // We have a new layer, now fill it with neurons, and
        // add a bias neuron in each layer.
        for (unsigned neuronNum = 0; neuronNum <=  topology[layerNum]; ++neuronNum) {
            m_layers.back().push_back(Neuron(numOutputs, neuronNum));
            cout << "Made a Neuron!" << endl;
        }

        // Force the bias node's output to 1.0 (it was the last neuron pushed in this layer):
        m_layers.back().back().setOutputVal(1.0);
    }
}
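
For comparison, a rough sketch of what the port's constructor would need in order to match the original, i.e. one extra bias neuron per layer with its output latched to 1.0. The class names follow the article; the exact CNeuron constructor signature and the use of a CArrayInt topology are assumptions:

   for(int layerNum=0; layerNum<topology.Total(); layerNum++)
     {
      CLayer *layer=new CLayer();
      if(CheckPointer(layer)==POINTER_INVALID)
         return;
      //--- the output layer feeds nothing; inner layers feed the next layer
      int numOutputs=(layerNum==topology.Total()-1 ? 0 : topology.At(layerNum+1));
      //--- note "<=": the extra iteration creates the bias neuron
      for(int neuronNum=0; neuronNum<=topology.At(layerNum); neuronNum++)
         layer.Add(new CNeuron(numOutputs,neuronNum));
      //--- force the bias neuron's output to 1.0, as in the original
      CNeuron *bias=layer.At(layer.Total()-1);
      bias.setOutputVal(1.0);
      layers.Add(layer);
     }

Only with such a bias neuron in place does the "-1" in feedForward make sense: the loops then deliberately skip the last (bias) neuron of each layer.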
 
Aleksey Mavrin:

Having read the whole thread, I don't want to repeat myself: can you tell me whether there is any connection to the "5 Whys" principle here?

No. And don't look for one, because that is a completely different story.

 
Denis Kirichenko:

Author, please check the method bool CLayer::CreateElement(const uint index).

After the first addition of an element (neuron), m_data_total is still 0. That is not right.

At the very least, compare the method above with its counterpart: bool CArrayObj::Add(CObject *element)...

The same problem exists in the CArrayCon::CreateElement(const int index) method: after the first element is added, m_data_total is still 0.
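
A quick way to confirm the problem (a hypothetical check, assuming the article's classes compile as published; if CLayer has no default constructor, the same check works with whichever constructor the article provides):

   CLayer layer;
   layer.CreateElement(0);                         // add the first neuron
   Print("Total after first add: ",layer.Total()); // prints 0 instead of 1 while the bug is present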