Machine learning in trading: theory, models, practice and algo-trading - page 1273

 
Aleksey Vyazmikin:

I don't know how you think, but terms only distort the essence of a thought, unless we are talking about long-established axioms that need no verification.

You couldn't come up with an analogy; falling back on terminology is unproductive.

All analogies will shatter against the wall of your fantasies. I gave you plenty of them yesterday, but you didn't understand a single one.

Stirlitz kept standing his ground, it was Mueller's favorite torture.

If it pleases you to think that the agent influences the opponent on purpose, that this is some separate special strategy, then go ahead and think so.

In fact, it just iterates through the options, playing against opponents many times, and for each case it identifies the optimal strategies: it is rewarded if it wins and penalized if it loses. As a result it accumulates experience in the form of a well-trained NN that has taken into account a huge number of game combinations and can predict the outcome of each, so it acts according to the best predictions. If the opponent changes strategy, the NN sees this and responds with a different strategy, activating other neurons. In the same way, when the market changes, the NN makes different predictions.
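The trial-and-error loop described above, reward on a win, penalty on a loss, then act on the best prediction, can be sketched as a minimal tabular value-learning example. The toy two-action game, its win probabilities, and all parameter values below are illustrative assumptions, not anything from the thread:

```python
import random

# Toy 2-action "game": action 1 wins 70% of the time, action 0 wins 30%.
# The agent plays repeatedly, gets +1 for a win and -1 for a loss,
# and keeps a running value estimate (Q) for each action.
random.seed(0)

WIN_PROB = {0: 0.3, 1: 0.7}   # hidden from the agent
q = {0: 0.0, 1: 0.0}          # learned value per action
alpha = 0.1                    # learning rate
epsilon = 0.2                  # exploration rate

for episode in range(5000):
    # Explore sometimes, otherwise pick the action with the best estimate.
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = max(q, key=q.get)
    reward = 1.0 if random.random() < WIN_PROB[action] else -1.0
    q[action] += alpha * (reward - q[action])  # move estimate toward reward

print(max(q, key=q.get))  # the action the accumulated experience prefers
```

After enough plays the estimate for the better action dominates; this table of estimates is the "experience" the post describes, and a real NN replaces the lookup table when the number of game situations explodes.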

 
Aleksey Vyazmikin:

I don't know how you think, but terms only distort the essence of a thought, unless we are talking about long-established axioms that need no verification.

You couldn't come up with an analogy; falling back on terminology is unproductive.

Alexey, honestly, you are just spamming, and it is clear why. In all the time you have written so many letters here, have you written a single line of code that would support your words and conjectures?

I'm pretty sure you haven't.

It's sad that when you're given the material you asked for to raise your awareness of the issue at hand, you easily dismiss it because it's in a language you're not familiar with. Is it so hard to use a translator?

Regarding the probabilities: a neural network is simply trained on historical scenarios with 100% known outcomes. After you apply the trained network, its response to a situation is not 100% certain but probabilistic; then, based on the logic that you (or another network) put in place, a decision is made on what to do. In the end you get a living network whose decisions and outcomes cannot be predicted.
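The two-stage scheme in the paragraph above, a model trained on historical cases with 100%-known outcomes that emits only a probability, plus a separate decision layer that turns the probability into an action, might look like this stdlib-only sketch. The situation names, the odds, and the 0.6 threshold are made-up illustrations:

```python
import random

random.seed(1)

# "Training": historical scenarios whose outcomes are known with certainty
# (1 = price rose, 0 = fell). Situation key -> list of observed outcomes.
history = {}
for _ in range(1000):
    situation = random.choice(["breakout", "range", "spike"])
    # made-up generating process: each situation has its own odds
    odds = {"breakout": 0.65, "range": 0.5, "spike": 0.35}[situation]
    outcome = 1 if random.random() < odds else 0
    history.setdefault(situation, []).append(outcome)

# The "trained model" here is just a frequency estimate: applied to a new
# situation, the answer is not a certain 0/1 but a probability.
def predict_proba(situation):
    outcomes = history[situation]
    return sum(outcomes) / len(outcomes)

# A separate decision layer (which could itself be another model) turns
# the probability into an action.
def decide(p, threshold=0.6):
    return "enter" if p > threshold else "stay out"

p = predict_proba("breakout")
print(p, decide(p))
```

A real NN replaces the frequency table, but the split stays the same: one component estimates probabilities, another decides what to do with them.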

 
Maxim Dmitrievsky:

All analogies will shatter against the wall of your fantasies. I gave you plenty of them yesterday, but you didn't understand a single one.

Stirlitz kept standing his ground, it was Mueller's favorite torture.

If it pleases you to think that the agent influences the opponent on purpose, that this is some separate special strategy, then go ahead and think so.

In fact, it just iterates through the options, playing against opponents many times, and for each case it identifies the optimal strategies: it is rewarded if it wins and penalized if it loses. As a result it accumulates experience in the form of a well-trained NN that has taken into account a huge number of game combinations and can predict the outcome of each, so it acts according to the best predictions. If the opponent changes strategy, the NN sees this and responds with a different strategy, activating other neurons. In the same way, when the market changes, the NN makes different predictions.

You're writing it correctly, and am I arguing with that? No. I'm arguing that the choice is made from actions that can affect the environment, affect the opponent, or do nothing. Yes, the agent is rewarded for influencing the opponent and thereby changing the probability of the overall outcome of the event (the game), while the trader has no such option among his actions, and that significantly affects the vector of action. Perhaps, if we divided the actions into three groups, we would see that it is the influence on the opponent that contributes most to the overall outcome, i.e. that it is this significant action that most affects the overall positive result.

If I haven't managed to get my point across again, then I think I've tried enough; otherwise I'd be imposing, and I only wanted to share my observations.

 
Farkhat Guzairov:

Alexey, honestly, you are just spamming, and it is clear why. In all the time you have written so many letters here, have you written a single line of code that would support your words and conjectures?

I'm pretty sure you haven't.

It's sad that when you're given the material you asked for to raise your awareness of the issue at hand, you easily dismiss it because it's in a language you're not familiar with. Is it so hard to use a translator?

Regarding the probabilities: a neural network is simply trained on historical scenarios with 100% known outcomes. After you apply the trained network, its response to a situation is not 100% certain but probabilistic; then, based on the logic that you (or another network) put in place, a decision is made on what to do. In the end you get a living network whose decisions and outcomes cannot be predicted.

No, I haven't written any code for the game StarCraft 2; I couldn't even find out how bots are made there. That is, the intelligence can be programmed via triggers, while direct interference with the code is apparently forbidden, as reported, though it is possible to access the memory used by StarCraft (1), and competitions there have been running for a long time. However, has anyone here posted code for the game?

Yes, I was given some material (Maxim is good at working with foreign literature/information), but I had asked for a definition of the term, and moreover, I judged the material to contain more information than I could find on the Russian-speaking Internet.

I did not understand your message about the principles of MO (machine learning). What did you mean by it?

 
Aleksey Vyazmikin:

No, I haven't written any code for the game StarCraft 2; I couldn't even find out how bots are made there. That is, the intelligence can be programmed via triggers, while direct interference with the code is apparently forbidden, as reported, though it is possible to access the memory used by StarCraft (1), and competitions there have been running for a long time. However, has anyone here posted code for the game?

About StarCraft 2: no one interferes directly with the game code, nor is there an API to retrieve data from the game; it's much simpler. Bots learn from graphic images, simply screenshots, i.e. they receive exactly the same amount of information as a human but use it far more efficiently than the average player.

 
Farkhat Guzairov:

About StarCraft 2: no one interferes directly with the game code, nor is there an API to retrieve data from the game; it's much simpler. Bots learn from graphic images, simply screenshots, i.e. they receive exactly the same amount of information as a human but use it far more efficiently than the average player.

I'm talking primarily about the application of the model. You can read about it here.

The History of Starcraft AI Competitions
  • habr.com
Starting with the first Starcraft AI Competition, held in 2010, the topic of artificial intelligence in real-time strategy (RTS) games has been growing in popularity. Participants in these competitions present their Starcraft AI bots, which battle in the standard version of Starcraft: Broodwar. These RTS competitions, inspired by...
 
Aleksey Vyazmikin:

I'm talking primarily about the application of the model. You can read about it here, for example.

In my post about the code I was asking a rhetorical question, because so much has already been written in this thread, yet very few people have ended up able to apply even 5% of what has been discussed here. What did you personally achieve in the end?

About the applicable models: yes, they are all similar (games, weather forecasts, markets, etc.). The question is what you feed into the MO's input, not how it is processed inside the MO; people still do not understand what to do with what they get out of the MO, and that is the whole problem. It is easier with games, because a ready-made matrix of input data already exists, but in all other areas you have to invent your own preprocessing methods.
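The preprocessing point, that games arrive with a ready input matrix while a market series has to be reshaped into one by hand, can be illustrated with a hypothetical sketch. The window size, the chosen features, and the next-bar-direction label are all arbitrary assumptions:

```python
# Build a feature matrix from a raw close-price series: games hand the
# model a ready input grid, a market series has to be reshaped by hand.

def make_features(closes, window=5):
    """Return (X, y): per-bar features and next-bar direction labels."""
    X, y = [], []
    for i in range(window, len(closes) - 1):
        past = closes[i - window:i + 1]
        # per-bar returns inside the window
        returns = [past[j + 1] / past[j] - 1 for j in range(window)]
        mean_ret = sum(returns) / window                  # average return
        rng_pct = (max(past) - min(past)) / past[0]       # window range, %
        X.append(returns + [mean_ret, rng_pct])
        y.append(1 if closes[i + 1] > closes[i] else 0)   # did price rise?
    return X, y

closes = [100, 101, 100.5, 102, 103, 102.5, 104, 105, 104.5, 106]
X, y = make_features(closes)
print(len(X), len(X[0]), y)  # prints: 4 7 [1, 1, 0, 1]
```

Whatever model consumes X afterwards is a separate choice; the hard part the post highlights is exactly this step of inventing the columns.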

 
Farkhat Guzairov:

In my post about the code I was asking a rhetorical question, because so much has already been written in this thread, yet very few people have ended up able to apply even 5% of what has been discussed here. What did you personally achieve in the end?

About the applicable models: yes, they are all similar (games, weather forecasts, markets, etc.). The question is what you feed into the MO's input, not how it is processed inside the MO; people still do not understand what to do with what they get out of the MO, and that is the whole problem. It is easier with games, because a ready-made matrix of input data already exists, but in all other areas you have to invent your own preprocessing methods.

Well, you yourself did not read the information you started arguing about, and above you blamed me for the same thing. Okay, all people are alike.

Again, I do not understand the question about what "you personally did in the end"; expand on it: what result do you mean, and what was I supposed to do personally? If you are talking about my application of MO, then yes, I am working on this issue in several directions (model creation, selection, application), and I have written a lot here about my results.

That's just the thing: not all models are similar, and the predictors, yes, are extremely significant. A complex system consists of different models, including models of different types (trees/NNs); the experts from Yandex, for example, say the same thing.

 

By the way, note that the human was losing because of mistakes in his actions (clicking inaccurately / forgetting to activate a skill), but was able to win with a non-standard tactical move: constantly distracting the opponent by landing in the rear of the opponent's base, which forced him to divert the troops he had deployed to attack the human's base. That gave the human time to develop his units to a higher level, so in the end he was able to inflict significant damage on the opponent and win the match.

In the same way, unexpected spikes and false breakouts distract the trader from his objective.

 
Aleksey Vyazmikin:

By the way, note that the human was losing because of mistakes in his actions (clicking inaccurately / forgetting to activate a skill), but was able to win with a non-standard tactical move: constantly distracting the opponent by landing in the rear of the opponent's base, which forced him to divert the troops he had deployed to attack the human's base. That gave the human time to develop his units to a higher level, so in the end he was able to inflict significant damage on the opponent and win the match.

That's how unexpected spikes and false breakouts distract the trader from his objective.

Well, by the next game this non-standard behavior will already have been learned by the bot. It's clear that for now a human can outdo the AI through non-standard behavior, but as soon as the AI "says" "oh, so that's allowed...", the human will have a hard time.
