Highly reliable transaction/signal copier (ideology discussion and development) - page 8

 

Why go to such lengths? There is a standard controller for working with TCP/IP; write a separate program for this case. In the terminal, the EA communicates with that program (within one computer, in any way you like)... There is no need to reinvent the wheel: the ports have long been listened to for you.

 
sergeev:

That's the point. I'm trying to think comprehensively. Of course, scalability requires more up-front investment. That is, the goal is to design as if for 1,000 clients, even though only a few will ever use it that way.

That's why I'm trying to choose now: either speed and micro-traffic with sockets, or HTTP and a lot of traffic from clients constantly polling for a new portion of information.

I think the second option is better. Those who need scalability, and they are very few, will incur additional costs in any case. Let them pay for the corresponding traffic as well.

The other 90% will use it with a small number of clients, where reliability of the connection, and therefore functionality, matters more than traffic.

And in the first case you cannot get a good solution without a reliable connection anyway.
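The HTTP variant being weighed here is essentially a polling loop. A minimal sketch, assuming a hypothetical `GET /signals?since=<id>` endpoint; the `fetch` callback stands in for the real HTTP request so the traffic trade-off is visible: one request per client per interval, even when nothing has changed.

```python
import time
from typing import Callable, List, Tuple

def poll_for_signals(fetch: Callable[[int], List[Tuple[int, str]]],
                     last_seen: int = 0,
                     interval: float = 1.0,
                     max_polls: int = 3) -> List[str]:
    """HTTP-style polling: each cycle the client asks the server for
    everything newer than the last signal id it has seen.  `fetch` is a
    stand-in for the real HTTP request (hypothetical endpoint
    GET /signals?since=<id>)."""
    received = []
    for _ in range(max_polls):
        batch = fetch(last_seen)      # one request per cycle, even if empty
        for sig_id, payload in batch:
            received.append(payload)
            last_seen = max(last_seen, sig_id)
        time.sleep(interval)          # the interval drives both latency and traffic
    return received
```

With sockets the server would push instead, so an idle client costs nothing beyond the open connection; with polling, every idle client still generates one request per interval.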

 
sergeev:

That's the point. I'm trying to think comprehensively. Of course, scalability requires more up-front investment. That is, the goal is to design as if for 1,000 clients, even though only a few will ever use it that way.

That's why I'm trying to choose now: either speed and micro-traffic with sockets, or HTTP and a lot of traffic from clients constantly polling for a new portion of information.

And what if the clients that receive the information become servers themselves and distribute it to some set of clients? Like Skype.

PS: then we have a scalable network. While it is small, orders come directly from the server; as soon as the network grows, a second echelon appears, then a third. The load on the server does not increase. The network can be configured by the ping between the machines.
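The "echelons" idea is simple geometric fan-out; a sketch of the arithmetic (the fan-out of 10 in the usage below is an arbitrary illustration, not a number from the thread):

```python
def echelon_capacity(fan_out: int, echelons: int) -> int:
    """Clients served when the server feeds `fan_out` first-echelon
    nodes directly and every node relays to `fan_out` more: capacity
    grows geometrically while the server itself still holds only
    `fan_out` connections."""
    return sum(fan_out ** d for d in range(1, echelons + 1))
```

With a fan-out of 10, three echelons already cover 10 + 100 + 1000 = 1110 clients, which is why the server load stays flat as the network grows.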

 
Urain:
What if the clients that receive the information become servers themselves and distribute it to some set of clients? Like in Skype.

Just watched the news :) https://www.youtube.com/watch?feature=player_embedded&v=7VKf0W44qGA

With peer-to-peer it would be a "revolutionary" solution, unparalleled :)

But one has to wonder if it's realistic and if it's even worthwhile.

 
OnGoing:

Just watched the news :) https://www.youtube.com/watch?feature=player_embedded&v=7VKf0W44qGA

With peer-to-peer it would be a "revolutionary" solution, unparalleled :)

One just has to wonder if it's realistic, and if it's even worthwhile.

That's why I asked about the size of the network. To grasp the immensity is like grasping the indescribable :)
 
Urain:

What if the clients who receive the information become servers themselves and distribute it to a certain set of clients. Like in Skype.

The option is good, but too big :) I think that, for synchronizing clients with the master, adding an extra exchange between the clients themselves is redundant.
Although, of course, each client would become a mini-server relaying the information it receives, and that is, in principle, worth thinking about.

 
Integer:

Why go to such lengths? There is a standard controller for working with TCP/IP; write a separate program for this case. In the terminal, the EA communicates with that program (within one computer, in any way you like)... There is no need to reinvent the wheel: the ports have long been listened to for you.

Dmitry, I repeat. Replicators have been available for a long time, some 4-5 years: local and remote, and with intermediate servers. I do not need any controllers listening for me.

Here I want to hold a forum-wide discussion among people who have learned a lot about this, and, on the basis of the pros and cons of the technologies, to build variants of reliable copiers that are stable and resistant to the number of clients, the quality of the connection, and the load on the channels.
 
sergeev:

..

And, on the basis of the advantages and disadvantages of the technologies, to build reliable copiers that are stable and robust in terms of the number of clients, the quality of the connection, and the load on the channels.

So two risks are on the agenda:

1. Not receiving the signal at all because of communication problems.

2. Not receiving the correct message because of bits lost in transmission.

Then communication between neighbouring clients becomes essential: having received three copies of the signal from different sources, you can do a bit-wise comparison and recover the true message on the "two out of three" principle. Such a scheme is more robust against both communication failures and transmission losses. Messages can then be encoded into bit masks and compressed to a minimum (instead of transmitting string sentences), which will reduce server traffic.

And to avoid failure due to a dead neighbour, make the distribution redundant: for example, the client receives the signal from the server and four neighbours, but only the server's signal and the first two neighbour signals to arrive are taken into account.
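The bit-wise "two out of three" reconciliation is the classic majority function, `(a & b) | (a & c) | (b & c)`, applied per byte. A minimal sketch, assuming all three copies arrive with the same length:

```python
def majority_2_of_3(a: bytes, b: bytes, c: bytes) -> bytes:
    """Bit-wise 'two out of three' vote over three copies of the same
    message: each output bit takes whichever value at least two of the
    three sources agree on, so any single corrupted copy is outvoted."""
    assert len(a) == len(b) == len(c), "copies must be the same length"
    return bytes((x & y) | (x & z) | (y & z) for x, y, z in zip(a, b, c))
```

A single corrupted copy, whichever source it came from, is outvoted bit by bit; the scheme even survives two corrupted copies as long as they are damaged at different bit positions.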

 
Urain:

So two risks are on the agenda:

1. Not receiving the signal at all because of communication problems.

A lack of communication at the client cannot be solved: the connection is either there or it is not. The server is assumed to have communication at all times.

2. Not receiving the correct message because of bits lost in transmission.

An invalid message can be detected by signing it, e.g. with a hash. If the hash does not match, the message is requested from the server again. But usually a special @label@ tag at the end and in the middle of a file is enough to tell that the message arrived complete.
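The hash-signing idea in a few lines (SHA-256 is my choice for illustration; the thread names no particular hash): the sender appends a digest, the receiver recomputes it, and a mismatch is the cue to re-request the message from the server.

```python
import hashlib

HASH_LEN = 32  # length of a SHA-256 digest in bytes

def sign(payload: bytes) -> bytes:
    """Append a SHA-256 digest so the receiver can tell a complete,
    uncorrupted message from a damaged one."""
    return payload + hashlib.sha256(payload).digest()

def verify(message: bytes):
    """Return the payload if the trailing digest matches, else None
    (the caller would then re-request the message from the server)."""
    payload, digest = message[:-HASH_LEN], message[-HASH_LEN:]
    return payload if hashlib.sha256(payload).digest() == digest else None
```

Flipping even a single bit anywhere in the payload or the digest makes `verify` return `None`, which is exactly the retry trigger described above.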

 
Urain:

...then communication between neighbouring clients is essential: having received three copies of the signal from different sources, it is possible to do a bit-wise comparison and output the true message on a "two out of three" basis. Such a scheme is more robust against both communication failures and transmission losses. Messages can then be encoded into bit masks and compressed to a minimum (instead of transmitting string sentences), which will reduce server traffic.

Verification of the integrity of transmitted data is already implemented in TCP/IP at the protocol level.