Machine learning in trading: theory, models, practice and algo-trading - page 3668

 
Maxim Dmitrievsky #:

It could turn into an original TS.

A TS needs non-originality.
 

СанСаныч Фоменко #:

We need predictions of the future


I would correct the thesis.

Recently I argued with an older guy who was proving that he holds the Grail in his hands.

All you have to do is open from the boundaries of the repainting TMA indicator, and then, if need be, add to the position if price keeps going against you. Never mind that "adding to a position" properly means the opposite: adding in the direction of a winning trade.

To my simple question, "How long have you been testing it?", he answered: "Two weeks on XAUUSD M1".


And, trying to keep my zen, I kindly offered the following straightforward arguments:

What is the difference between the coolest trader, who closes 99 trades out of 100 in profit, and the same trader running a martingale grid?

And the 100th position of the original trader? The original trader will close it at the stop loss, right at the peak of a long-term reversal.

But if this trader uses a martingale grid, he will lose his entire deposit.



I said "closing", not "predicting", 99 trades out of 100 in profit.

The point is to make mistakes with minimal losses: successful traders are successful not because they predict a reversal, but because, when they are wrong, they exit quickly, ignore how events develop further, and wait for the next set-up.


That is why the essence of AI, ML and neural networks here is not forecasting but learning how to trade: the model should be able to abandon a forecast quickly rather than sit in a trade "until victory". After all, most models that work "for a while" simply have a low risk-reward ratio and try to compensate through sheer quantity of trades.
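As a toy illustration of that low risk-reward point, here is a quick expectancy calculation with made-up numbers (the win rate and trade sizes are hypothetical, not from the discussion):

```python
# Expectancy of a high-win-rate, low risk-reward system (hypothetical numbers).
win_rate = 0.99
avg_win = 1.0     # 1 unit of profit per winning trade
avg_loss = 50.0   # the rare loss is far larger (low risk-reward)

expectancy = win_rate * avg_win - (1 - win_rate) * avg_loss
print(round(expectancy, 2))  # 0.49 units per trade, carried by one fat tail

# The same win rate with a quick, tight exit keeps expectancy positive
# while removing most of the tail risk:
expectancy_tight = win_rate * avg_win - (1 - win_rate) * 5.0
print(round(expectancy_tight, 2))  # 0.94
```

The numbers show why such systems look great "for a while": the positive expectancy depends entirely on the rare large loss staying rare.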

And if a regular stop loss is a crutch that does not work, we need a dynamic one with a short range. That range should depend directly on the price structure (liquidity sweeps, etc.), i.e. the stop should sit near some local extremum whose breach would "cancel" the model's forecast scenario.
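As a rough sketch of such a structure-based stop (function name, lookback and buffer are hypothetical, the price data is made up), a dynamic stop for a long position could be anchored just below the lowest low of the last few bars:

```python
# Sketch: anchor a dynamic stop to a local extremum (hypothetical parameters).
def dynamic_stop_long(lows, lookback=5, buffer=0.001):
    """Place the stop just below the lowest low of the last `lookback` bars,
    so that a breach of that local extremum invalidates the long scenario."""
    local_extremum = min(lows[-lookback:])
    return local_extremum * (1 - buffer)

lows = [1.0850, 1.0842, 1.0838, 1.0845, 1.0851, 1.0860]
print(round(dynamic_stop_long(lows), 4))  # 1.0827
```

The stop then moves with the structure rather than sitting at a fixed distance from the entry.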


Two models are suggested:

1) The first learns to prophesy (predict).

2) The second cannot predict, but learns to close the position if the floating loss grows. As input it can use not chart patterns but the trade's own history, plus, for example, something derived from volatility.
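The two-model split above could be sketched like this; both functions are hypothetical stand-ins for trained models, and the feature names and threshold are invented for illustration:

```python
# Toy sketch of the proposed split: model A forecasts entries,
# model B decides exits from trade state, not chart patterns.

def model_a_predict(features):
    # stand-in for the forecasting model: +1 (buy) or -1 (sell)
    return 1 if features["momentum"] > 0 else -1

def model_b_should_close(trade_state):
    # stand-in for the exit model: close if the floating loss
    # exceeds a volatility-scaled threshold
    threshold = -2.0 * trade_state["volatility"]
    return trade_state["floating_pnl"] < threshold

signal = model_a_predict({"momentum": 0.3})
close = model_b_should_close({"floating_pnl": -0.015, "volatility": 0.005})
print(signal, close)  # 1 True
```

In a real setup model B would be trained on trade histories (floating PnL paths, bars held, volatility), which is exactly what makes it independent of the forecaster.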

 
Ivan Butko #:



None of your reasoning has anything to do with ML-based prediction, because ML predicts movement some number of steps ahead: 1, 2, 3, 4, ... When the predicted number of steps has elapsed, the position is closed regardless of the current result.
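That fixed-horizon scheme can be sketched in a few lines; the function name and price series here are made up for illustration:

```python
# Fixed-horizon exit: hold exactly `horizon` bars, then close, win or lose.
def fixed_horizon_pnl(prices, entry_index, horizon, direction=1):
    """PnL of a position opened at entry_index and closed `horizon` bars later,
    regardless of the result at that moment."""
    return direction * (prices[entry_index + horizon] - prices[entry_index])

prices = [100.0, 100.5, 99.8, 101.2, 100.9]
print(round(fixed_horizon_pnl(prices, entry_index=0, horizon=3), 2))  # 1.2
```

Note the exit depends only on the bar count, never on the floating result, which is the point of contrast with the exit-model idea above.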

 
The know-it-all show has begun 😁😁😁😁😁😁
 
СанСаныч Фоменко #:


Ok

 
In the meantime I have set up the API; now I need to come up with prompts and get forecasts. Ideally this should speed up testing different TSs, because the code is written once and after that only the prompts change; you can even dictate them by voice )).

It turns out even more high-level than Python )
 

There are GPTs specialised for time series; the problem is that most of them are local models, and testing them requires a lot of resources.

But some have an API. Or there is GigaChat and the like.

And, well, another problem: on a Mac with an ARM chip you can hardly install custom packages without suffering.
 

I was wondering how context works through the API: all messages (mine and the model's) have to be saved and passed again as context with every new request.

I had to make a class like this:

import json
import requests

# `token` (the OAuth response) and `cert_path` (the TLS certificate path)
# are assumed to have been obtained earlier.

class GigaChatDialog:
    def __init__(self):
        self.messages = []
        self.url = "https://gigachat.devices.sberbank.ru/api/v1/chat/completions"
        self.headers = {
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {json.loads(token.text)['access_token']}",
            "X-Session-ID": "my chat"
        }

    def send_message(self, user_message):
        # Append the user's message to the history
        self.messages.append({
            "role": "user",
            "content": user_message
        })

        # The full message history is sent with every request
        payload = {
            "model": "GigaChat",
            "messages": self.messages
        }

        response = requests.post(self.url, headers=self.headers, json=payload, verify=cert_path)

        if response.status_code == 200:
            # Extract the assistant's reply
            assistant_response = response.json()['choices'][0]['message']['content']
            # Append the assistant's reply to the history
            self.messages.append({
                "role": "assistant",
                "content": assistant_response
            })
            return assistant_response
        else:
            return f"Error: {response.status_code}"

# Usage example:
chat = GigaChatDialog()

And a check:

response1 = chat.send_message("The fox is beautiful")
print(response1)
response2 = chat.send_message("what did I tell you about the fox, word for word?")
print(response2)

You already noted that the fox is beautiful. And it really is! Foxes are known for their beautiful fur, fluffy tails and graceful behaviour. They often appear as characters in fairy tales and legends thanks to their cunning and intelligence.

If you have other memories or stories connected with foxes, it would be interesting to hear them!
I don't store users' previous messages, so I don't have the verbatim text of your earlier statements. However, you definitely mentioned that the fox is beautiful. I can develop this thought further or discuss something else!

Supposedly no tokens are spent when re-sending already known requests; at least that is what the documentation says.

 

So it doesn't remember the context verbatim within itself, but only in the form of... I don't even know what to call it. A form of general embedding or something.

Oh, I see:

"With each new request, the model receives the full context of the previous communication, which creates the illusion of "memory", although in fact it is just a longer prompt with the history of the dialogue."

 

A theoretical question. Suppose there is a certain trending asset. Because of the trendiness, it makes sense to exit with a trailing stop.

Does that mean the model should be built only as a regression? That is, the exit target Y should be the distance travelled by the price before the exit.

And would reducing it to classification (by fixing the exit level) distort the essence of the TS?
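The two target constructions being compared could be sketched like this; the function names, trailing width and price data are hypothetical:

```python
# Regression target: distance travelled before the trailing stop fires.
# Classification target: did price reach a fixed exit level (1) or not (0)?

def regression_target(prices, entry, trail):
    """Run-up captured from `entry` until a trailing stop of width `trail` is hit."""
    peak = entry
    for p in prices:
        peak = max(peak, p)
        if p <= peak - trail:
            return peak - entry  # distance locked in when the stop triggered
    return prices[-1] - entry    # stop never hit inside this window

def classification_target(prices, entry, level):
    # fixed exit level collapses the continuous outcome into a binary label
    return int(max(prices) >= entry + level)

prices = [100.2, 100.9, 101.5, 101.1, 100.6]
print(regression_target(prices, entry=100.0, trail=0.8))        # 1.5
print(classification_target(prices, entry=100.0, level=1.0))    # 1
```

The sketch makes the question concrete: the regression label keeps the full distance information that the trailing exit actually realises, while fixing the level throws that information away, which is exactly the "distortion" being asked about.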