Neural network in the form of a script - page 2

 
YuraZ:
kombat:
Something about the logic of this script resembles a simple 4-to-2 encoder.

the gizmo is certainly interesting!


but the network is only guaranteed to reproduce what it was trained on!


Now try feeding it inputs it never saw during training!

It goes crazy!




test_pat[0] = 1 ;
test_pat[1] = 1 ;
test_pat[2] = 0 ;
test_pat[3] = 0 ;
test_the_network() ;
Print(MathRound( ol_a[0]), " 1100 ", MathRound(ol_a[1]) ) ;

It answers 1 0,

although logically it should answer 1 1.

---

A correct network will do exactly that - it will answer 1 1 even though it saw no such data during training!



try teaching the network the following example


OUTPUT = INPUT

30.00 = 100.00
27.50 = 87.50
25.00 = 75.00
20.00 = 50.00
15.00 = 25.00
13.75 = 18.75
12.50 = 12.50
11.25 = 6.25
10.00 = 0.00

and then give it an input of, say, 62.5 - it should produce an output of 22.50.


EXPECTED = INPUT

22.50 = 62.5

this is a simple example of scaling, which networks crack like peanuts
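Every pair in the table above satisfies OUTPUT = 0.2 * INPUT + 10, which is why an input of 62.5 must map to 22.50. As a sketch only (this is not the thread's script - the variable names and training loop are my own), a single linear neuron trained by gradient descent on the normalized data recovers exactly this rule:

```python
# Hypothetical sketch: a single linear "neuron" learns the scaling rule
# behind the table (output = 0.2 * input + 10). Not the thread's script.

inputs  = [100.0, 87.5, 75.0, 50.0, 25.0, 18.75, 12.5, 6.25, 0.0]
targets = [30.0, 27.5, 25.0, 20.0, 15.0, 13.75, 12.5, 11.25, 10.0]

# Normalize into 0..1, as the thread recommends
x = [v / 100.0 for v in inputs]
t = [v / 100.0 for v in targets]

w, b = 0.0, 0.0          # weight and bias of the single neuron
lr = 0.5                 # learning rate
for _ in range(2000):    # plain batch gradient descent on mean squared error
    dw = db = 0.0
    for xi, ti in zip(x, t):
        err = (w * xi + b) - ti
        dw += err * xi
        db += err
    w -= lr * dw / len(x)
    b -= lr * db / len(x)

# The neuron now generalizes to 62.5, which was never in the training set
pred = (w * (62.5 / 100.0) + b) * 100.0
print(round(pred, 2))    # prints 22.5
```

The point of the sketch is the last line: the input 62.5 never appears in training, yet the learned scaling answers 22.5.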



In this algorithm there are 4 inputs and 2 outputs.



So you have to teach the network this way:

______ inputs_______________outputs

1______2_______3______4 _____1______2

30.00_ 100.00_ 27.50_ 87.50_ 25.00_ 75.00


But before that you have to divide everything by 100 to bring it into the network's working range of 0 - 1 .

Outside this range the network does not learn . Or rather, it does not respond as it was taught .


______ inputs_______________outputs

1______2_______3______4 _____1______2

0.30___ 1.0___ 0.275_ 0.875___ 0.25___ 0.75
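A minimal sketch of that divide-by-100 preparation (hypothetical helper names, not the script's own):

```python
# Hypothetical data-preparation helpers: scale raw values into the
# network's 0..1 working range and back, as the post describes.
SCALE = 100.0

def normalize(values):
    """Raw values -> the 0..1 range the network expects."""
    return [v / SCALE for v in values]

def denormalize(values):
    """Network outputs in 0..1 -> original units."""
    return [v * SCALE for v in values]

row_inputs  = [30.00, 100.00, 27.50, 87.50]   # the 4 network inputs
row_outputs = [25.00, 75.00]                  # the 2 expected outputs

print(normalize(row_inputs))    # [0.3, 1.0, 0.275, 0.875]
print(normalize(row_outputs))   # [0.25, 0.75]
```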


Example of data preparation :

 

A lyrical digression... ;)

Since 1873, Bell had been trying to build a harmonic telegraph, aiming to transmit seven telegrams simultaneously over a single wire (one for each note of the octave). He used seven pairs of flexible metal plates, similar to tuning forks, with each pair tuned to its own frequency. During experiments on 2 June 1875, the free end of one of the plates on the transmitting side of the line welded itself to a contact. Bell's assistant, the mechanic Thomas Watson, cursing as he tried unsuccessfully to fix the fault, was possibly even using not entirely printable language.

Being in another room and manipulating the receiving plates, Bell, with his sensitive trained ear, picked up the sound coming through the wire. Spontaneously fastened at both ends, the plate had become a kind of flexible membrane and, sitting over the pole of the magnet, varied its magnetic flux. As a result, the electric current entering the line varied in step with the air vibrations caused by Watson's muttering - that was the moment the telephone was born. For nine months Bell perfected his brainchild. He filed a patent application on February 14, 1876 and was granted it on March 7.

Three days later, on 10 March 1876, the 12-metre wire connecting Bell's flat with the laboratory in the attic transmitted the first articulate phrase, which was to become historic:
"Mr Watson, come here. I need you!"



- Wiki: Encoder (logic electronics)

 

 
sprite:

I see. I think the network can work as a scrambler.

What you teach it is what you get. If you want - an encryptor, if you want - a decryptor.

The net is trainable - that's its trick.


And a coder is either crisp or fuzzy logic or something else. But most likely something rigid, tailored to the particular case.

In the simplest case, I would make an encoder for a single data set this way.

As many data sets - as many pieces of code.


if ( inp1==10 && inp2==12 && inp3==23 && inp4==100 )
{
  outp1 = 0 ;
  outp2 = 0 ;
}
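The "as many data sets - as many pieces of code" idea can be kept in one place with a lookup table instead of a chain of if-blocks. A hypothetical sketch, not code from the thread:

```python
# Hypothetical table-driven encoder: one dictionary entry per data set,
# instead of one hard-coded if-block per set.
ENCODER_TABLE = {
    (10, 12, 23, 100): (0, 0),
    # ... one entry per known input set
}

def encode(inp1, inp2, inp3, inp4):
    """Return the output pair for a known input set, or None.
    Unlike a trained network, a rigid encoder cannot generalize."""
    return ENCODER_TABLE.get((inp1, inp2, inp3, inp4))

print(encode(10, 12, 23, 100))  # (0, 0)
print(encode(1, 2, 3, 4))       # None: unseen input, no answer
```

The last line is exactly the contrast being discussed: the rigid encoder has no answer for an unseen input, while a trained net at least produces something.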

But the net has more possibilities - simulating an encoder is just one of them.

It can be retrained right "on the fly", say inside an EA, after several days of trading.

In short, it is quite an interesting mathematical phenomenon :) and deserves attention.


Only the input and output values in this algorithm need to be brought into the network's working range, i.e. 0 to 1.

Or the code has to be reworked.

 
kombat:
There's something about the logic of this script that reminds me of a simple 4-to-2 encoder.

An encoder is a non-trainable system.

The network in this script, however, is trained. And the learning process is shown dynamically on screen, epoch by epoch.

You can see how the neuron weights in each layer change, and how the net becomes more and more accurate as it trains.

Above are three posts where the same algorithm learned

to work with three different data sets.

With an encoder, three encoders would be required - one for each data set.

 

I am not against it, but I am not yet in favour of using neural networks in trading...


Neural nets, to my understanding, are at the level of the wave analysts who stand at the drawing board with an eraser

and sketch the current market situation with a pencil in hand... :)))


Although perceptronists are cooler than wavers... probably... :)))

 
kombat:

Not against, but not yet in favour of using neural networks in trading...



Likewise :) !!!

But the algorithm works and learns :) And then we'll see :)


Interest in networks is further fuelled by the victory of network-based EAs in the Championship.

Of course, the network there is a different one. But the man did the work and got the result.

 
sprite:


1. First of all you have to normalise everything - both inputs and outputs, i.e. put them into the range 0 - 1

(or rewrite the network code for a different data range)

2. This network has 4 inputs and 2 outputs .

Which numbers from these columns should be fed to which input?


According to the algorithm,

the network maps each quadruple of input values, e.g. 1 0 0 0,

to a pair of output values, e.g. 1 0.


There can be several such data sets, e.g. 4 .

And they have to be fed according to the network's algorithm

inputs _ outputs


1 0 0 0_ 0 0

1 0 0 1_ 1 0

1 0 1 0_ 0 1

1 0 1 1_ 1 1
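Written out as arrays, the 4-input/2-output table above looks like this (a sketch of the data layout only; the variable names are assumptions, not the script's):

```python
# Hypothetical layout of the training set from the table above:
# each row pairs a 4-value input pattern with its 2-value target.
train_inputs = [
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
    [1, 0, 1, 1],
]
train_targets = [
    [0, 0],
    [1, 0],
    [0, 1],
    [1, 1],
]

# One training example = one input pattern with its desired outputs
for inp, out in zip(train_inputs, train_targets):
    print(inp, "->", out)
```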


or like this, if there are 4 inputs and 3 outputs


1 0 0 0_ 1 1 1

1 1 0 0_ 1 0 0

1 0 1 1_ 0 0 1

1 0 1 0_ 0 1 1

1 0 1 0_ 0 0 0

1 1 1 1_ 0 1 0

for this case the code has to be changed

---
It is not always necessary to normalize - who says that a network CAN and MUST work only with 0 and 1?


I can attach a simple net with an example (unfortunately I have no materials at hand right now) - I will attach it later -

where a simple NN solves this problem without data preparation or normalization.

Unfortunately it is not the source code.


the example I gave, though! it's like it's already normalised

condition there are two ranges


1 0-100

2 10-30


it is simply necessary to find the ratio of position in one range - which is known to

in essence it is a scaling



in the example i have clearly stated it knowing the limits


0 = 10

...

25 = 15

..

50 = 20

...

75 = 25

...

100 = 30


The problem is simple - not really even a problem for a network - but a good network will easily find the solution.
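What the network is being asked to discover here is plain linear rescaling between two known ranges. A minimal sketch of the underlying formula (a hypothetical helper, not code from the thread):

```python
# Plain linear rescaling between two known ranges -- the relation the
# network is asked to learn from the 0=10 ... 100=30 examples.
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Map x from [in_lo, in_hi] to the same relative
    position in [out_lo, out_hi]."""
    ratio = (x - in_lo) / (in_hi - in_lo)
    return out_lo + ratio * (out_hi - out_lo)

print(map_range(0, 0, 100, 10, 30))    # 10.0
print(map_range(50, 0, 100, 10, 30))   # 20.0
print(map_range(100, 0, 100, 10, 30))  # 30.0
```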

---



Your thread is interesting !

 
kombat:

I am not against it, but I am not yet in favour of using neural networks in trading...


Neural nets, to my understanding, are at the level of the wave analysts who stand at the drawing board with an eraser

and sketch the current market situation with a pencil in hand... :)))


Although perceptronists are cooler than wavers... probably... :)))

Well, that's an interesting opinion


though it really was shattered by BETTER's victory at the 2007 Championship

Reason: