Discussion of article "How to use ONNX models in MQL5" - page 6

 
Error when using a classifier model.

Regression works fine with everything: the output is a single number.

But when you ask any chatbot to write an MLP classifier, the Expert Advisor cannot read the model's output: "Buy", "Sell", "Hold" (encoded either as "1", "2", "3" or as "0", "1", "2").

It throws this error:

2025.02.12 08:13:46.866 Core 01 2021.01.01 00:00:00 Error setting output form: 5808
2025.02.12 08:13:46.866 Core 01 2021.01.01 00:00:00 ONNX: invalid handle passed to OnnxRelease function, inspect code 'X È$Zë3E' (291:7)

None of the chatbots, not even DeepSeek, understands how to fix the problem; they keep generating possible code that leads to the same error.

All the chatbots say the same thing: since this is an MLP classifier, it has only 3 outputs, matching your labels (I feed it a CSV file where the last column is one of three labels of a simple classification: buy, sell, hold; I tried both string and numeric values in this column).

Then they take this block:

const long output_shape[] = {1,1};
   if(!OnnxSetOutputShape(ExtHandle,0,output_shape))
     {
      Print("OnnxSetOutputShape error ",GetLastError());
      return(INIT_FAILED);
     }
and change the array initialisation to:

const long output_shape[] = {1,3};
   if(!OnnxSetOutputShape(ExtHandle,0,output_shape))
     {
      Print("OnnxSetOutputShape error ",GetLastError());
      return(INIT_FAILED);
     }


And an error appears.

I try printing:

Print(OnnxGetOutputCount(ExtHandle));

I get 2.

I don't understand anything.
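For context, the count of 2 is actually expected: skl2onnx exports a scikit-learn classifier with two separate graph outputs (the predicted class label and the class probabilities), not a single {1,3} tensor. A small numpy sketch of how the two outputs relate for the 3-class case in this thread (the probability values are made up for illustration):

```python
import numpy as np

# A skl2onnx-converted classifier typically has TWO outputs:
#   output 0: predicted class labels, int64, shape (N,)
#   output 1: class probabilities, float32, shape (N, 3) for 3 classes
# Made-up probabilities for one sample (classes 0/1/2 after LabelEncoder):
probabilities = np.array([[0.2, 0.5, 0.3]], dtype=np.float32)

# The label output is simply the argmax over the probability tensor
labels = probabilities.argmax(axis=1).astype(np.int64)

print(labels.tolist())      # [1]
print(probabilities.shape)  # (1, 3)
```

So OnnxGetOutputCount returning 2 reflects the two graph outputs; the {1,3} shape applies only to the probability output, not to output index 0.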



If anyone understands what the error is, please let me know.

Any version of the classifier's Python code produces the same error.

For example, one of the implementations:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.neural_network import MLPClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Loading data
file_path = '....csv'
data = pd.read_csv(file_path)

# Split into attributes (X) and labels (y)
X = data.iloc[:, :160].values  # The first 160 columns are input data
y = data.iloc[:, 160].values   # The last column is the target label

# Encoding string labels into numeric labels
label_encoder = LabelEncoder()
y_encoded = label_encoder.fit_transform(y)

# Data normalisation
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split into training and test samples
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y_encoded, test_size=0.2, random_state=42)

# Creating an MLP classifier
mlp = MLPClassifier(hidden_layer_sizes=(128, 64), activation='relu', solver='adam', max_iter=500, random_state=42)
mlp.fit(X_train, y_train)

# Estimating the accuracy of the model
accuracy = mlp.score(X_test, y_test)
print(f"Accuracy: {accuracy * 100:.2f}%")

# Saving the model in ONNX format
initial_type = [('float_input', FloatTensorType([None, 160]))]  # 160 input features
onnx_model = convert_sklearn(mlp, initial_types=initial_type)

# Saving the ONNX model
onnx_file_path = 'model.onnx'
with open(onnx_file_path, "wb") as f:
    f.write(onnx_model.SerializeToString())

print(f"Model saved as {onnx_file_path}")


That is, the model itself runs in Python, and it computes something:

runfile('....py', wdir='...')
Accuracy: 54.16%
Model saved as model.onnx

But the Expert Advisor can't accept it.

 
In a healthy community, generated code isn't really discussed or even considered. Still, it is worth knowing about the ONNX bug that supposedly breaks multiclass support.
 
It doesn't need to be discussed
That's not the question.
 
Ivan Butko #:
It doesn't need to be discussed
That's not the question.

Try {2,3} or {3}.

Ask the Python script to print the actual dimension of the output.

But most likely it is just {1}: the model returns a structure whose fields already correspond to the outputs.


For example, for a binary classifier I have:

const long output_shape[] = {1};
   if(!OnnxSetOutputShape(ExtHandle, 0, output_shape))
     {
      Print("OnnxSetOutputShape 1 error ", GetLastError());
      return(INIT_FAILED);
     }
 

Then you just declare a structure in the code:

static vector out(1);

   struct output
     {
      long           label[];   // class values
      float          tensor[];  // class probabilities
     };

   output out2[];

   // f is the input feature vector prepared earlier;
   // out and out2 receive the model's two outputs
   OnnxRun(ExtHandle, ONNX_DEBUG_LOGS, f, out, out2);

   double sig = out2[0].tensor[1];   // probability of class 1

where the label field holds the class values and tensor the probabilities.
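For comparison, a plain-Python sketch of what the same two outputs look like when such a model is run from Python with the default ZipMap still in place (the values below are made up): the labels come back as a list, the probabilities as one class-to-probability dict per sample, and indexing into that dict is the analogue of out2[0].tensor[1] above.

```python
# Made-up run result of a skl2onnx binary classifier (ZipMap left in place):
labels = [1]                         # analogue of out2[0].label
probabilities = [{0: 0.2, 1: 0.8}]   # one {class: probability} dict per sample

# Probability of class 1, the analogue of out2[0].tensor[1] in the MQL5 code
sig = probabilities[0][1]
print(sig)  # 0.8
```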

 
Correction: the label field contains the class values and tensor the probabilities. The output dimension is essentially 2,2, but since a structure is returned, it should be set to 1.
 
Maxim Dmitrievsky #:
Correction: the label field contains the class values and tensor the probabilities. The output dimension is essentially 2,2, but since a structure is returned, it should be set to 1.
Thanks

UPD

On the substance of the architectures: the regression in the article amounts to crystal-ball guessing of the next price. Classification seems to make more sense. And lo and behold, it turns out the native functionality has a problem with it.

Subjectively: if the target is labelled as the next price (or another quantitative value), the neural network starts swinging from side to side.

And if the target is labelled buy-sell-hold, the network at least tunes itself to the number of successful entries without paying attention to their size. Over the long run this "ignorance" is compensated by a kind of averaging, like noise therapy. Imho, of course; I have only tried classification a little in other implementations, which is why I wanted to try it here.
 
Ivan Butko #:
Thank you
You can also visualise your network in Netron; it will show the output dimension and type.
 
Ivan Butko #:
Thank you

UPD

On the substance of the architectures: the regression in the article amounts to crystal-ball guessing of the next price. Classification seems to make more sense. And lo and behold, it turns out the native functionality has a problem with it.

Subjectively: if the target is labelled as the next price (or another quantitative value), the neural network starts swinging from side to side.

And if the target is labelled buy-sell-hold, the network at least tunes itself to the number of successful entries without paying attention to their size. Over the long run this "ignorance" is compensated by a kind of averaging, like noise therapy. Imho, of course; I have only tried classification a little in other implementations, which is why I wanted to try it here.

That's what preprocessing, which you don't respect, is for :) first to separate the wheat from the chaff, and then train the network to predict the separated wheat.

If the preprocessing is good, the output isn't complete rubbish either.

 

Any chance you can fix this script to run with newer versions of Python (3.10-3.12)?

I have a load of problems trying to get it to run on 3.9.

tx