
 

Data Visualization with matplotlib in 1 Hour



Data Visualization with matplotlib in 1 Hour

In this video, the instructor introduces the importance of data visualization in machine learning and explains how it can help make sense of large amounts of collected data. The two primary Python libraries for data visualization, Matplotlib and Seaborn, are covered in the course.

The instructor states that the course is primarily designed for machine learning engineers, data engineers, and data scientists who want to learn Python. To illustrate the usage of Matplotlib, a simple example of plotting a curve is provided. Matplotlib's philosophy revolves around leveraging the existing language, Python, which has become the standard for building machine learning models and handling data. By combining Matplotlib with other Python packages, users can take advantage of the numerous packages available for various tasks.

The video emphasizes the importance of using the NumPy package alongside Matplotlib for scientific computing. While Matplotlib can work without NumPy, utilizing NumPy can significantly save time and effort. NumPy provides a powerful multi-dimensional array object and functions for manipulating it. An example is demonstrated in the video, where a curve with 100 points is generated using NumPy to compute the x and y coordinates. This approach proves to be much faster than performing the operation using pure Python. Additionally, the video covers plotting two curves on the same graph for comparison, plotting data from a file by extracting and organizing the data using Python code, and plotting points instead of items in a linear fashion.
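As a minimal sketch of the kind of example described (assuming only standard NumPy and Matplotlib imports), plotting a sine curve from 100 NumPy-generated points and adding a second curve for comparison might look like this:

import numpy as np
import matplotlib.pyplot as plt

# 100 evenly spaced x values and their sine, computed with NumPy
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)

plt.plot(x, y)           # first curve
plt.plot(x, np.cos(x))   # second curve on the same graph for comparison
plt.show()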

The tutorial delves into creating different types of bar charts using the Matplotlib library. The dedicated function for creating bar charts, "bar," is introduced, which takes the x coordinate for each bar and the height of each bar as input parameters. By adjusting optional parameters, users can create various effects and even generate horizontal bars using the "barh" function. The tutorial also covers plotting multiple bar charts on the same graph and creating stacked bars using a special parameter in the "bar" function. Furthermore, the video briefly touches on creating pie charts using the "pie" function.
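A short, hypothetical illustration of these bar and pie chart functions (the data values are made up):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
a = [3, 7, 1, 5]
b = [2, 1, 4, 3]

plt.bar(x, a)             # x coordinate and height of each bar
plt.bar(x, b, bottom=a)   # stacked bars via the optional 'bottom' parameter
plt.show()

plt.barh(x, a)            # horizontal bars with barh
plt.show()

plt.pie(a, labels=['A', 'B', 'C', 'D'])   # simple pie chart
plt.show()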

Various functions used in data visualization with Matplotlib are explained in the tutorial. The first topic covered is histograms, graphical representations of probability distributions. The "hist" function and its parameters are discussed, allowing users to easily plot data as histograms. The second topic covered is box plots, which facilitate the comparison of value distributions. The video explains the components of a box plot, including quartiles, median, mean, and other statistical quantities of a dataset, and demonstrates how to generate them using the "boxplot" function. Finally, the tutorial covers altering plots by using different colors and styles, such as defining colors using triplets, quadruplets, or HTML color names, as well as setting the color of a curve plot.
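For instance, a quick sketch of the hist and boxplot functions on randomly generated data (not taken from the video) could be:

import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)   # 1000 samples from a standard normal distribution

plt.hist(data, bins=20)        # histogram approximating the probability distribution
plt.show()

plt.boxplot(data)              # box plot showing quartiles, median, and outliers
plt.show()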

The video continues by explaining how to add color to scatter plots, bar charts, and pie charts using the "color" parameter. This parameter enables users to control individual dot colors or change the common color for all dots. The video also touches on importing libraries as modules, using aliases for easier coding, and clarifying the representation of variables. It is emphasized that almost everything in Matplotlib and Python involves functions, such as the "pie" function and the "show" function.

Next, the tutorial covers custom color schemes and line patterns when creating box plots, markers, and line shapes. It demonstrates creating custom markers using predefined shapes and defining custom markers using math text symbols. Additionally, it explains how to easily change Matplotlib's default settings using the centralized configuration object, enabling users to adapt the visual style, such as having a black background and white annotations, to different contexts of usage.
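A small illustrative sketch of these customization options (the specific colors, marker, and rcParams key are arbitrary choices, not taken from the video):

import matplotlib as mpl
import matplotlib.pyplot as plt

# Change a default setting through the centralized configuration object
mpl.rcParams['lines.linestyle'] = '--'

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]

plt.plot(x, y, color='#2e86ab')                         # curve color from a hex triplet
plt.scatter(x, y, c=['r', 'g', 'b', 'k'], marker='^')   # one color per point, triangle markers
plt.show()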

The presenter explains how to save a graph to a file using the "savefig" function in Matplotlib. They also cover adding annotations to a graph, including a title, labels for the x and y axes, a bounded box, and arrows. The video demonstrates the process of adding these annotations to enhance the visual clarity and understanding of the graph. Furthermore, it showcases how to manually control tick spacing in Matplotlib for precise adjustments. The video highlights the various functions available in Matplotlib for annotating graphs and making them more self-explanatory for readers.
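A brief, hypothetical example of saving and annotating a figure (the title, labels, and coordinates are chosen only for illustration):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))

plt.title("A simple curve")                     # graph title
plt.xlabel("x")                                 # x-axis label
plt.ylabel("sin(x)")                            # y-axis label
plt.annotate("local maximum", xy=(np.pi / 2, 1.0), xytext=(4, 0.5),
             arrowprops=dict(arrowstyle="->"))  # annotation with an arrow

plt.savefig("curve.png")                        # write the figure to a file
plt.show()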

Moving on, the instructor discusses data visualization with Matplotlib and introduces Seaborn, a high-level interface to Matplotlib. Seaborn provides different parameters and functionalities compared to Matplotlib. The instructor showcases how to create visualizations using Seaborn's built-in dataset and color maps. The video concludes by presenting examples of creating a factor plot and utilizing color maps to plot data. Through these examples, viewers gain insights into using different functions and tools in Matplotlib and Seaborn to enhance their data visualization skills.

The video explains how to scale plots using Seaborn's "set_context" function. This function allows users to control plot elements, such as size, based on the context in which the plot will be displayed. It then clarifies the distinction between Seaborn's two types of functions: axes level functions and figure level functions. Axes level functions operate on the axis level and return the axes object, while figure level functions create plots that include axes organized in a meaningful way. Finally, the video provides guidance on setting axes for a box plot using the Matplotlib axis subplots object.
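As a rough sketch of these Seaborn ideas (using the built-in "tips" example dataset, which load_dataset downloads on first use):

import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")    # one of Seaborn's built-in example datasets

sns.set_context("talk")            # scale plot elements for a presentation context
ax = sns.boxplot(x="day", y="total_bill", data=tips)   # axes-level function returning an Axes
plt.show()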

This comprehensive video tutorial covers a wide range of topics related to data visualization with Matplotlib and Seaborn. It starts by introducing the importance of data visualization in machine learning and the use of Matplotlib as a powerful library. It demonstrates how to plot curves, create bar charts, generate histograms and box plots, and customize colors, markers, and line styles. The tutorial also covers saving graphs, adding annotations, and manipulating tick spacing. Additionally, it introduces Seaborn as an alternative visualization tool with its own set of features and functionalities. By following this tutorial, viewers can enhance their data visualization skills and effectively communicate their findings using these powerful Python libraries.

  • 00:00:00 The instructor introduces data visualization and its importance in machine learning. He emphasizes the need to make sense of the large amounts of data collected globally and how data visualization can be helpful in achieving this. The two most widely used libraries for data visualization in Python, Matplotlib and Seaborn, are covered in this course. The instructor explains that this course is primarily aimed at machine learning engineers, data engineers, and data scientists who want to learn Python, and provides a simple example of plotting a curve using Matplotlib. The philosophy behind Matplotlib is to take advantage of the existing language, Python, which has become the gold standard for building machine learning models and wrangling data. Combining Matplotlib with other packages involves using Python code as there are numerous packages for Python for various tasks.

  • 00:05:00 The video explains the importance of using the NumPy package with Matplotlib for scientific computing. Although Matplotlib can work without NumPy, using NumPy can save a lot of time and effort by providing a powerful multi-dimensional array object and functions to manipulate it. The video includes an example of generating a curve with 100 points using NumPy to compute the x and y coordinates, which is much faster than performing the operation using pure Python. Additionally, the video covers plotting two curves on one graph for comparison, plotting data from a file using Python code to extract and organize the data, and plotting points instead of items in a linear fashion.

  • 00:10:00 The video tutorial covers how to create different types of bar charts using the Matplotlib library. The bar function is the dedicated function for creating bar charts and takes the x coordinate for each bar and the height of each bar as input parameters. By adjusting optional parameters, we can create different effects and even create horizontal bars using the barh function. The tutorial also covers how to plot multiple bar charts on the same chart and how to plot stacked bars using a special parameter in the bar function. Finally, the tutorial touches on creating pie charts using the pie function.

  • 00:15:00 The video tutorial covers various functions used in data visualization with matplotlib. The first topic covered is histograms, graphical representations of a probability distribution. To generate histograms, the video explains the hist function and its parameters, which allow users to plot the data easily. The second topic covered is box plots, which allow easy comparison of distributions of values. The video explains the anatomy of a box plot, including quartiles, median, mean, and other statistical quantities of a data set, and how to generate them using the boxplot function. Finally, the tutorial covers altering plots using different colors and styles, including defining colors using triplets, quadruplets, or HTML color names, and setting the color of a curve plot.

  • 00:20:00 The video covers how to add color to scatter plots, bar charts, and pie charts using the "color" parameter to control individual dot colors or to change the common color for all dots. The video also discusses how to import libraries as modules, use aliases for easier coding, and what variables represent. Additionally, viewers are reminded that almost everything in matplotlib and Python involves functions, such as the "pie" function and the "show" function.

  • 00:25:00 We learn about custom color schemes and line patterns when creating box plots, markers and line shapes. We also learn how to create our own markers using predefined shapes as well as how to define our own markers using math text symbols. Additionally, we learn how to change Matplotlib's default settings easily using the centralized configuration object, making it possible to have a black background and white annotations for different usage contexts.

  • 00:30:00 The presenter explains how to save a graph to a file using the "savefig" function in Matplotlib. They also cover how to add annotations to a graph, including a title, labels for the x and y axes, a bounded box, and arrows. Additionally, they demonstrate how to control tick spacing manually in Matplotlib. The video highlights the various functions available in Matplotlib for annotating graphs and making them more self-explanatory for readers.

  • 00:35:00 The instructor discusses data visualization with matplotlib and shows how to add axis locators and grids to charts. The video then moves on to introducing seaborn, a high-level interface to matplotlib. The instructor explains how seaborn provides different parameters than matplotlib and shows how to create visualizations using the built-in dataset and Seaborn's color maps. The video concludes with examples of creating a factor plot and using color maps to plot data. Overall, viewers can learn how to use various functions and tools in matplotlib and seaborn to enhance their data visualization skills.

  • 00:40:00 The video explains how to scale plots using Seaborn's set context function. The set context function allows users to control plot elements, such as size, depending on the context the plot will be displayed in. The video then explains the difference between Seaborn's two types of functions: axes level functions and figure level functions. Axes level functions operate on the axis level and return the axes object, while figure level functions create plots that include axes that are organized in a meaningful way. Finally, the video goes over how to set axes for a box plot using the matplotlib axis subplots object.
Data Visualization with matplotlib in 1 Hour
  • 2022.04.29
  • www.youtube.com
In this FREE course let's learn the basics of creating data visualizations with the grandfather of visualization libraries in Python, matplotlib.LogikBot - A...
 

Deep Learning with Python, TensorFlow, and Keras tutorial



Deep Learning with Python, TensorFlow, and Keras tutorial

Greetings everyone, and welcome to a highly anticipated update on deep learning and Python with TensorFlow, as well as a new Keras tutorial. It has been over two years since I last covered basic deep learning in Python, and during this time, there have been significant advancements. Getting into deep learning and working with deep learning models has become much simpler and more accessible.

If you're interested in delving into the lower-level TensorFlow code and intricate details, you can still refer to the older video. However, if you're aiming to get started with deep learning, you no longer need to go through that because we now have user-friendly high-level APIs like Keras that sit on top of TensorFlow. These APIs make deep learning incredibly straightforward, enabling anyone, even without prior knowledge of deep learning, to follow along.

In this tutorial, we will take a quick run-through of neural networks. To begin, let's understand the core components of a neural network. The primary objective of any machine learning model, including neural networks, is to map inputs to outputs. For example, given inputs X1, X2, and X3, we aim to determine whether the output corresponds to a dog or a cat. In this case, the output layer consists of two neurons representing the possibility of being a dog or a cat.

To achieve this mapping, we can employ a single hidden layer, where each input, X1, X2, and X3, is connected to the neurons in the hidden layer. Each of these connections has a unique weight associated with it. However, if we limit ourselves to a single hidden layer, the relationships between the inputs and the output would be linear. To capture nonlinear relationships, which are common in complex problems, we need two or more hidden layers. A neural network with two or more hidden layers is often referred to as a deep neural network.

Let's add another hidden layer, fully connecting it with the previous layer. Each connection between layers has its own unique weight. Ultimately, the output is derived from the final layer, where each connection to the output layer possesses a unique weight. On an individual neuron level, the neuron receives inputs, which could be either the input layer values (X1, X2, X3) or inputs from other neurons. These inputs are summed, considering their associated weights. Additionally, an activation function is applied to simulate the neuron firing or not. Common activation functions include the step function or the sigmoid function, which returns values between 0 and 1. In our neural network, the output layer utilizes a sigmoid activation function, assigning probabilities to each class (dog or cat). The argmax function is then used to determine the predicted class based on the highest probability.

Now that we have a basic understanding of neural networks, let's proceed to build one using TensorFlow. First, ensure you have TensorFlow installed by running the command "pip install --upgrade tensorflow." You can import TensorFlow as "tf" and check the installed version with "tf.__version__". For this tutorial, Python 3.6 is recommended; at the time of the video TensorFlow did not yet support Python 3.7, although support for later versions was expected.

Next, we'll import a dataset to work with. We'll utilize the MNIST dataset, which consists of 28x28 images of handwritten digits ranging from 0 to 9. These images will be fed into the neural network, and the network will predict the corresponding digit. We'll split the dataset into training and testing variables: X_train, Y_train, X_test, and Y_test.

To ensure better performance, we'll normalize the data. The pixel values of the images currently range from 0 to 255, so we'll scale them between 0 and 1 using the tf.keras.utils.normalize function.
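A minimal sketch of those loading and scaling steps (the normalize call follows the approach described above; simply dividing the arrays by 255.0 is an equivalent alternative):

import tensorflow as tf

# Load the MNIST dataset of 28x28 grayscale images of handwritten digits
(X_train, Y_train), (X_test, Y_test) = tf.keras.datasets.mnist.load_data()

# Rescale the 0-255 pixel values into the 0-1 range
X_train = tf.keras.utils.normalize(X_train, axis=1)
X_test = tf.keras.utils.normalize(X_test, axis=1)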

To build the model, we'll use the high-level Keras API, which simplifies the process of creating and training neural networks in TensorFlow. Keras provides a model type called Sequential that allows us to stack layers one after another.

Here's an example of how you can create a neural network model using Keras:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Create a sequential model
model = Sequential()

# Add a flatten layer to convert the input into a 1D array
model.add(Flatten(input_shape=(28, 28)))

# Add a dense layer with 128 neurons and ReLU activation
model.add(Dense(128, activation='relu'))

# Add another dense layer with 10 neurons for the output and softmax activation
model.add(Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])


In the above code, we import the necessary modules from TensorFlow and Keras. We create a Sequential model and add layers to it using the add method. The first layer is a Flatten layer that converts the 2D input (28x28 images) into a 1D array. We then add a dense layer with 128 neurons and ReLU activation. Finally, we add an output layer with 10 neurons (corresponding to the 10 digits) and softmax activation.

After defining the model, we compile it using the compile method. We specify the optimizer (in this case, 'adam'), the loss function ('sparse_categorical_crossentropy' for multi-class classification), and the metrics to evaluate during training.

Now that we have our model defined and compiled, we can proceed to train it on the MNIST dataset. We'll use the fit method to train the model.

# Train the model
model.fit(X_train, Y_train, epochs=10, validation_data=(X_test, Y_test))

In the above code, we pass the training data (X_train and Y_train) to the fit method along with the number of epochs to train for. We also provide the validation data (X_test and Y_test) to evaluate the model's performance on unseen data during training.

After training the model, we can make predictions using the predict method:

# Make predictions
predictions = model.predict(X_test)

In the above code, we pass the test data (X_test) to the predict method, and it returns the predicted probabilities for each class.
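To turn those per-class probabilities into digit labels, a common follow-up step (not shown above) is NumPy's argmax:

import numpy as np

# Each row of 'predictions' holds 10 class probabilities; pick the most likely digit
predicted_digits = np.argmax(predictions, axis=1)
print(predicted_digits[:10])   # compare against Y_test[:10]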

That's a brief overview of building and training a neural network using Keras in TensorFlow. You can further explore different layers, activation functions, optimizers, and other parameters to customize your model.

Beyond this basic example, there are additional techniques and concepts related to building and training neural networks; a short code sketch illustrating a couple of them follows the list below.

  1. Regularization Techniques:

    • Dropout: Dropout is a regularization technique used to prevent overfitting. It randomly sets a fraction of input units to 0 at each update during training, which helps prevent the model from relying too heavily on any particular set of features.

    • L1 and L2 Regularization: L1 and L2 regularization are techniques used to add a penalty to the loss function to prevent large weights in the network. L1 regularization adds the absolute value of the weights to the loss function, encouraging sparsity, while L2 regularization adds the squared weights to the loss function, encouraging small weights.

  2. Advanced Activation Functions:

    • Leaky ReLU: Leaky ReLU is an activation function that solves the "dying ReLU" problem by allowing a small slope for negative inputs. It introduces a small negative slope when the input is negative, which helps prevent neurons from dying during training.

    • Exponential Linear Unit (ELU): ELU is an activation function that smooths the output for negative inputs, allowing the activation to take on negative values. It has been shown to help improve the learning of neural networks and reduce the bias towards positive values.

    • Swish: Swish is an activation function that performs a smooth interpolation between the linear and sigmoid functions. It has been shown to provide better results compared to other activation functions like ReLU and sigmoid in certain cases.

  3. Transfer Learning: Transfer learning is a technique that leverages pre-trained models to solve new tasks or improve the performance of a model on a related task. Instead of training a model from scratch, you can use a pre-trained model as a starting point and fine-tune it on your specific task or dataset. This is particularly useful when you have limited data for your specific task.

  4. Hyperparameter Tuning: Hyperparameters are parameters that are not learned by the model but affect the learning process, such as learning rate, batch size, number of layers, etc. Tuning these hyperparameters can significantly impact the performance of the model. Techniques like grid search, random search, and Bayesian optimization can be used to systematically search the hyperparameter space and find the best combination.

  5. Model Evaluation: Evaluating the performance of a model is crucial to assess its effectiveness. Common evaluation metrics for classification tasks include accuracy, precision, recall, F1 score, and area under the receiver operating characteristic curve (ROC AUC). It's important to choose the appropriate metrics based on the problem at hand and the nature of the data.

  6. Handling Imbalanced Datasets: Imbalanced datasets occur when the distribution of classes is not equal, which can lead to biased models. Techniques such as oversampling the minority class, undersampling the majority class, or using a combination of both can help address this issue. Additionally, utilizing evaluation metrics like precision, recall, and F1 score can provide a better understanding of the model's performance on imbalanced datasets.
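As referenced above, here is a brief sketch of how two of these ideas (dropout and L2 weight regularization) could look in a Keras model; the layer sizes and penalty strength are arbitrary illustrative choices:

from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    # L2 penalty on this layer's weights discourages large weight values
    Dense(128, activation='relu', input_shape=(784,),
          kernel_regularizer=regularizers.l2(0.001)),
    # Dropout randomly zeroes 30% of this layer's outputs during training
    Dropout(0.3),
    Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])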

Remember, building and training neural networks is an iterative process. It involves experimentation, fine-tuning, and continuous improvement to achieve the desired results.

Deep Learning with Python, TensorFlow, and Keras tutorial
  • 2018.08.11
  • www.youtube.com
An updated deep learning introduction using Python, TensorFlow, and Keras.Text-tutorial and notes: https://pythonprogramming.net/introduction-deep-learning-p...
 

Loading in your own data - Deep Learning basics with Python, TensorFlow and Keras p.2



Loading in your own data - Deep Learning basics with Python, TensorFlow and Keras p.2

Welcome, everyone, to part 2 of our deep learning tutorial with Python, TensorFlow, and Keras. In this tutorial, we will focus on loading an external dataset. Specifically, we will be using the cats and dogs dataset from Microsoft, originally a Kaggle challenge. The goal is to train a neural network to identify whether an image contains a cat or a dog.

To begin, please download the cats and dogs dataset from Microsoft. Once you have downloaded and extracted the dataset, you should see two directories: "cat" and "dog." These directories contain images of cats and dogs, respectively. Each directory should have a substantial number of samples, around 12,500, providing ample examples for training our model.

Now let's move on to the coding part. We need to import several libraries: numpy as np, matplotlib.pyplot as plt, and OpenCV as cv2. If you don't have these libraries installed, you can use pip to install them.

Next, we will define the data directory where our dataset is located. You can specify the path to your dataset accordingly. We will also define the categories as "dog" and "cat" to match the directories in our dataset.

We will iterate through each category and the corresponding images using the OS library. For each image, we will convert it to grayscale using the cv2 library. We chose grayscale because we believe color is not crucial for differentiating between cats and dogs in this specific task.

To visualize the images, we will use matplotlib.pyplot. We will display an example image using plt.imshow and the grayscale color map. This step allows us to confirm that the images are loaded correctly.

After verifying the images, we will proceed to resize them to a uniform shape. We need to decide on a target size, such as 50x50 pixels, to ensure consistency. We will resize the images using the cv2.resize function and store the resized image arrays.

Now, we will create the training dataset. We initialize an empty list called "training_data" and define a function called "create_training_data." Within this function, we iterate through the images and assign numerical labels (0 for dogs, 1 for cats) using the index of the category in the "categories" list.

For each image, we resize it to the chosen target size. We append the resized image array and its corresponding label to the training_data list. We also handle any potential exceptions related to broken images in the dataset.

Once we have created the training dataset, we should check the balance of the data. In a binary classification task like this, it is essential to have an equal number of samples for each class (50% dogs and 50% cats). Imbalanced data can lead to biased model predictions. If your data is imbalanced, you can use class weights during training to mitigate this issue.

To ensure randomness and prevent the model from learning the order of the images, we shuffle the training data using the random.shuffle function.

Now that our data is shuffled, we can pack it into variables for features (X) and labels (Y). We initialize empty lists for X and Y and iterate through the training data, appending the features and labels to the respective lists. Finally, we convert X to a NumPy array and reshape it using np.array and the shape of each feature.
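A condensed sketch of the pipeline just described; the directory layout ("PetImages" with "Dog" and "Cat" subfolders) and the 50x50 target size are assumptions you should adjust to your own setup:

import os
import random
import cv2
import numpy as np

DATADIR = "PetImages"           # assumed extraction folder
CATEGORIES = ["Dog", "Cat"]     # subfolder names; their index doubles as the label (0 or 1)
IMG_SIZE = 50                   # target width/height in pixels

training_data = []

def create_training_data():
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        label = CATEGORIES.index(category)
        for img_name in os.listdir(path):
            try:
                img = cv2.imread(os.path.join(path, img_name), cv2.IMREAD_GRAYSCALE)
                img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
                training_data.append([img, label])
            except Exception:
                pass   # skip broken or unreadable images

create_training_data()
random.shuffle(training_data)

# Pack features and labels; the trailing 1 is the grayscale channel dimension
X = np.array([features for features, _ in training_data]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
Y = np.array([label for _, label in training_data])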

At this point, we have prepared our data for training the neural network. We are now ready to proceed with further steps, such as splitting the data into training and validation sets, building the model, and training it using TensorFlow.

In the reshape call, negative 1 (-1) is a placeholder that automatically calculates that dimension based on the length of the array and the shape of each element. So in this case, we're reshaping the X array to have a shape of (-1, image_size, image_size, 1), where the trailing 1 is the single grayscale color channel. This ensures that the data is in the correct format to be fed into the neural network.

Next, we need to normalize the pixel values of the images. Currently, the pixel values range from 0 to 255, representing the intensity of the grayscale. Neural networks generally perform better when the input data is normalized, meaning the values are scaled to a smaller range. We can achieve this by dividing the pixel values by 255.0, which will scale them between 0 and 1.0. This can be done using the following code:

X = X / 255.0

Now that we have our preprocessed data, we can move on to building our deep learning model using TensorFlow.
Loading in your own data - Deep Learning basics with Python, TensorFlow and Keras p.2
  • 2018.08.18
  • www.youtube.com
Welcome to a tutorial where we'll be discussing how to load in our own outside datasets, which comes with all sorts of challenges!First, we need a dataset. L...
 

Convolutional Neural Networks - Deep Learning basics with Python, TensorFlow and Keras p.3



Convolutional Neural Networks - Deep Learning basics with Python, TensorFlow and Keras p.3

Hello everyone, and welcome to part three of our Deep Learning with Python, TensorFlow, and Keras tutorial series. In this video, we will be focusing on convolutional neural networks (CNNs) and how to apply them to classify dogs versus cats using the dataset we built in the previous video.

Before we dive into CNNs, let's quickly cover how they work and why they are useful for image data. CNNs involve several steps: convolution, pooling, and then more convolution and pooling. The main idea behind convolution is to extract useful features from an image. We use a convolutional window, typically represented as a matrix (e.g., 3x3), to scan the image and simplify the information within the window to a single value. The window then shifts over and repeats this process multiple times. The stride, which determines how much the window moves, can be adjusted as well.

With Keras, we can specify the window size, and most other details are taken care of automatically. If you want to delve deeper into the intricacies of deep learning, I recommend checking out the "Practical Machine Learning" tutorial series, where the inner workings are explained in more detail, particularly for raw TensorFlow code.

The output of the convolutional layer is a set of features extracted from the image. These features are then typically passed through a pooling layer, with the most common type being max pooling. Max pooling selects the maximum value within a window and shifts it over repeatedly, effectively downsampling the data.

The higher-level idea behind CNNs is that they gradually extract more complex features from the image as you go deeper into the network. The initial layers might identify edges and lines, while deeper layers might recognize more complex shapes like circles or squares. Eventually, the network can learn to identify specific objects or patterns.

To implement a CNN, we need to import the necessary libraries. We import TensorFlow and the Keras modules we'll be using, such as Sequential, Dense, Dropout, Activation, Conv2D, and MaxPooling2D. We also import pickle to load our dataset.

Before feeding the data into the neural network, we should consider normalizing it. In our case, we can scale the pixel data by dividing it by 255, as the pixel values range from 0 to 255. Alternatively, we can use the normalize function from keras.utils for more complex normalization scenarios.

Next, we start building our model using the Sequential API. We add a Conv2D layer with 64 units and a 3x3 window size. The input_shape is set dynamically from the data using X.shape[1:]. We then add an activation layer using the rectified linear activation function (ReLU). Following that, we add a max pooling layer with a 2x2 window size.

We repeat this process by adding another Conv2D layer and a corresponding max pooling layer. At this point, we have a 2D convolutional neural network.

To pass the extracted features to a fully connected layer, we need to flatten the data. We add a Flatten layer before adding a final Dense layer with 64 nodes. Finally, we add an output layer: a single node with sigmoid activation treats the problem as binary, whereas two nodes with softmax would treat it as categorical.

We compile the model by specifying the loss function (categorical or binary cross-entropy), the optimizer (e.g., Adam), and the metrics to evaluate the model's performance (e.g., accuracy).
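Putting the pieces together, a sketch of that model might look like the following; the pickle file names and the sigmoid/binary choice are assumptions consistent with a two-class dog-vs-cat problem:

import pickle
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

# Load the features and labels prepared in the previous part (file names assumed)
X = pickle.load(open("X.pickle", "rb"))
Y = pickle.load(open("Y.pickle", "rb"))

X = X / 255.0   # scale grayscale pixel values into the 0-1 range

model = Sequential()

# First convolution + pooling block; input shape comes from the data itself
model.add(Conv2D(64, (3, 3), input_shape=X.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Second convolution + pooling block
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Flatten the feature maps and pass them through a fully connected layer
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))

# Single output node with sigmoid activation for the binary dog-vs-cat decision
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])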

To train the model, we use the fit method, passing in our input data X and labels Y. We can also specify the batch size (e.g., 32).

We'll use the following code to train the model:

model.fit(X, Y, batch_size=32, validation_split=0.1)

This code will train the model using the input data X and the corresponding labels Y. We set the batch size to 32, which means the model will process 32 samples at a time during training. The validation_split parameter is set to 0.1, which means 10% of the data will be used for validation while training the model.

Once the model is trained, we can evaluate its performance using the test data. We can use the following code to evaluate the model:

model.evaluate(X_test, Y_test)

Here, X_test and Y_test represent the test data and labels, respectively. This code will return the loss value and the accuracy of the model on the test data.

After evaluating the model, we can use it to make predictions on new, unseen data. We can use the predict() function to obtain the predicted labels for the new data. Here's an example:

predictions = model.predict(X_new)

This code will generate predictions for the new data X_new. With the single sigmoid output node described above, each prediction is a probability between 0 and 1, so we threshold it at 0.5 to obtain a class label:

predicted_labels = (predictions > 0.5).astype(int)

If the model instead ended in two output nodes with softmax activation, we would use np.argmax(predictions, axis=1) to pick the class with the highest probability. Either way, this assigns a predicted label to each sample in the new data.

That's it! You have now trained a convolutional neural network model to classify dogs and cats and used it to make predictions on new data. Remember to save the trained model for future use if needed.

Convolutional Neural Networks - Deep Learning basics with Python, TensorFlow and Keras p.3
  • 2018.08.19
  • www.youtube.com
Welcome to a tutorial where we'll be discussing Convolutional Neural Networks (Convnets and CNNs), using one to classify dogs and cats with the dataset we bu...
 

Analyzing Models with TensorBoard - Deep Learning with Python, TensorFlow and Keras p.4



Analyzing Models with TensorBoard - Deep Learning with Python, TensorFlow and Keras p.4

Welcome, everyone, to part 4 of the "Deep Learning with Python: TensorFlow and Keras" tutorial series. In this video and the next, we will be discussing how to analyze and optimize our models using TensorBoard. TensorBoard is a powerful tool that allows us to visualize the training of our models over time. Its main purpose is to help us understand various aspects of our model's performance, such as accuracy, validation accuracy, loss, and validation loss. Additionally, there are more advanced features in TensorBoard that we may explore in future tutorials.

Before we dive into TensorBoard, let's address a minor detail. Although not crucial for this tutorial, I want to point out that even small models tend to consume a significant amount of GPU memory. If you plan to run multiple models simultaneously, you can specify a fraction of the GPU memory that each model should use. By doing this, you can avoid potential issues when running multiple models or encountering memory constraints. For example, I typically set the model to use one-third of the GPU memory. This approach has proven helpful when running multiple models concurrently, such as in the "Python Plays GTA" series involving object detection and self-driving. It's just a handy tip that can save you some time and headaches.
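For reference, a sketch of how that GPU fraction was typically set with the TensorFlow 1.x API in use at the time of this video (TensorFlow 2.x manages GPU memory differently, e.g. via tf.config):

import tensorflow as tf
from tensorflow.keras.backend import set_session

# Let this process use roughly one third of the GPU's memory (TF 1.x-era API)
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
set_session(tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)))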

Now, let's proceed with the main topic. The first thing I want to address is adding an activation function after the dense layer. It was an oversight on my part not to include it initially. Adding an activation function is essential because without it, the dense layer defaults to a linear activation, which is not suitable for our purposes. We want to avoid effectively doing regression and ensure our model performs optimally. So, let's quickly fix that by inserting the activation function right after the dense layer.

With that correction made, we should observe a significant improvement in accuracy. While the model is training, let's take a moment to explore the TensorFlow documentation and learn about the various Keras callbacks available. In our case, we will be using the TensorBoard callback to interface with TensorBoard. However, it's worth noting that there are other useful callbacks, such as early stopping based on specific parameters, learning rate scheduling, and model checkpointing. Model checkpointing is particularly valuable when you want to save the model at specific intervals, such as the best loss or validation accuracy. For now, let's focus on the TensorBoard callback, but I may briefly touch upon other callbacks in a future video.

To use the TensorBoard callback, we need to import it from TensorFlow's Keras callbacks module. Add the following line of code to import TensorBoard:

from tensorflow.keras.callbacks import TensorBoard

Now that we have imported the necessary module, let's perform some housekeeping. It's always a good practice to give your model a meaningful name, especially when working with multiple models. In this case, we can name our model something like "cats_vs_dogs_CNN_64x2." Additionally, let's add a timestamp to the name to ensure uniqueness. Including the timestamp is useful when retraining a model or avoiding any confusion with model versions. So, at the beginning of our code, let's define the model name and timestamp as follows:

import time

model_name = f"cats_vs_dogs_CNN_64x2_{int(time.time())}"

Now that we have a unique model name, we can create the TensorBoard callback object. Assign the TensorBoard object to a variable called tensorboard_callback using the following code:

tensorboard_callback = TensorBoard(log_dir=f"logs/{model_name}")

Here, we specify the log directory for TensorBoard using the log_dir parameter. We format the log directory path using the model name we defined earlier.

Furthermore, TensorBoard allows us to analyze and optimize our models by providing visualizations of the training process. It primarily focuses on metrics such as accuracy, validation accuracy, loss, and validation loss. These metrics help us understand how our model is performing over time and identify areas for improvement. While accuracy and loss are commonly monitored, TensorBoard offers more advanced functionalities that we may explore in the future.

To begin using TensorBoard, we first need to make a small addition to our code. Although not crucial for this tutorial, it's worth mentioning. By adding a few lines of code, we can allocate a specific fraction of GPU memory for our model. This is beneficial when running multiple models simultaneously or when encountering issues with VRAM. It allows us to control the GPU allocation and avoid potential crashes or memory overflow. In our case, we allocate one-third of the GPU for the model. This ensures smooth execution and prevents any conflicts when working with other models or projects.

Moving on, let's focus on implementing TensorBoard. Firstly, we need to import the necessary dependencies. We import the TensorBoard class from the tensorflow.keras.callbacks module. This class provides the callback functionality for TensorBoard.

Next, we want to assign a meaningful name to our model. Giving each model a distinct name is essential when working with multiple models. It helps us keep track of the experiments and avoid any confusion. In this case, we name our model "cats_vs_dogs_CNN_64x2", as above. Additionally, we add a timestamp to the name using the current time value. This ensures uniqueness and prevents any accidental overwriting of models.

After naming our model, we can define the TensorBoard callback object. We create an instance of the TensorBoard class and assign it to the variable "tensorboard_callback". We pass the log directory path to the constructor. The log directory is where TensorBoard will store the logs and data related to our model. We use string formatting to include the model name in the log directory path.

Once we have the callback object ready, we can incorporate it into our model training process. In the "fit" method of our model, we pass the callback object to the "callbacks" parameter as a list. In this case, we only have one callback, which is the TensorBoard callback. However, it's worth noting that you can include multiple callbacks in the list if needed.

With the callback integrated, we can now train our model. We set the number of epochs to 10 for this example. However, feel free to adjust the number of epochs based on your requirements. As the model trains, TensorBoard will start generating logs and visualizations based on the specified metrics.
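Assuming the model and the X and Y arrays from the earlier parts, that training call might look like this:

# Pass the TensorBoard callback to fit so logs are written while the model trains
model.fit(X, Y, batch_size=32, epochs=10, validation_split=0.1,
          callbacks=[tensorboard_callback])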

To see TensorBoard in action, we need to open the command window or terminal and navigate to the directory containing the log files. Once in the correct directory, we run TensorBoard by typing "tensorboard --logdir logs" in the command prompt. This command starts the TensorBoard server and provides a local URL where we can access the TensorBoard interface.

After starting TensorBoard, we can open a web browser and enter the URL provided by the command prompt. This will display the TensorBoard interface, where we can visualize the training progress of our model. The interface shows various graphs, including in-sample accuracy, in-sample loss, out-of-sample accuracy, and out-of-sample loss. We analyze these metrics to monitor the model's performance and make informed decisions regarding its optimization.

By observing the graphs, we can identify patterns and trends in the model's behavior. For instance, if the validation loss starts to increase while the validation accuracy remains steady or decreases, it indicates overfitting. On the other hand, if both the validation accuracy and loss improve over time, it suggests that the model is learning effectively.

TensorBoard provides a powerful platform for analyzing and optimizing models. Its visualizations offer valuable insights into the training process and facilitate decision-making. By leveraging TensorBoard, we can streamline the model development process and achieve better results.

In the next part of this tutorial series, we will delve deeper into the advanced features of TensorBoard, including histograms, distributions, and embeddings. These features provide further granularity and allow us to gain a more comprehensive understanding of our models. Stay tuned for the next video, where we explore these exciting capabilities.

That's it for this tutorial. Thank you for watching, and I'll see you in the next video!

Analyzing Models with TensorBoard - Deep Learning with Python, TensorFlow and Keras p.4
  • 2018.08.24
  • www.youtube.com
Welcome to part 4 of the deep learning basics with Python, TensorFlow, and Keras tutorial series. In this part, what we're going to be talking about is Tenso...
 

Optimizing with TensorBoard - Deep Learning w/ Python, TensorFlow & Keras p.5



Optimizing with TensorBoard - Deep Learning w/ Python, TensorFlow & Keras p.5

Hello everyone and welcome to Part Five of the Deep Learning with Python, TensorFlow, and Keras tutorial series. In this tutorial, we will focus on TensorBoard and how we can use it to optimize models by visualizing different model attempts. Let's dive into it!

First, let's analyze the model and identify the aspects we can tweak to improve its performance. While our current model achieved around 79% accuracy, we believe we can do better. Some potential areas for optimization include the optimizer, learning rate, number of dense layers, units per layer, activation units, kernel size, stride, decay rate, and more. With numerous options to explore, we may end up testing thousands of models. So, where do we start?

To make things easier, let's begin with the most straightforward modifications. We will focus on adjusting the number of layers, nodes per layer, and whether or not to include a dense layer at the end. For the number of dense layers, we'll consider zero, one, or two. Regarding layer sizes, we'll use values of 32, 64, and 128. These values are just conventions, and you can choose different ones as per your preferences.

Now, let's implement these changes in our code. We'll define some variables for the choices we want to make, such as the number of dense layers and layer sizes. We'll iterate through these variables to create different model combinations. Additionally, we'll create a name for each model that reflects its configuration.

Once we have the model configurations, we can proceed to apply them in our code. We'll update the model structure accordingly, considering the input shape, convolutional layers, dense layers, and output layer. We'll also ensure that the layer sizes are appropriately adjusted.
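A sketch of that grid of configurations (assuming X and Y were prepared as in the earlier parts; the convolutional-layer counts are included here as well, since the results discussed below compare them):

import time
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense
from tensorflow.keras.callbacks import TensorBoard

dense_layers = [0, 1, 2]        # number of dense layers to try
layer_sizes = [32, 64, 128]     # nodes per layer to try
conv_layers = [1, 2, 3]         # number of convolutional layers to try

for dense_count in dense_layers:
    for layer_size in layer_sizes:
        for conv_count in conv_layers:
            # Encode the configuration and a timestamp into the run name
            name = f"{conv_count}-conv-{layer_size}-nodes-{dense_count}-dense-{int(time.time())}"
            tensorboard = TensorBoard(log_dir=f"logs/{name}")

            model = Sequential()
            model.add(Conv2D(layer_size, (3, 3), input_shape=X.shape[1:]))
            model.add(Activation('relu'))
            model.add(MaxPooling2D(pool_size=(2, 2)))

            for _ in range(conv_count - 1):      # remaining convolutional blocks
                model.add(Conv2D(layer_size, (3, 3)))
                model.add(Activation('relu'))
                model.add(MaxPooling2D(pool_size=(2, 2)))

            model.add(Flatten())
            for _ in range(dense_count):         # optional fully connected layers
                model.add(Dense(layer_size))
                model.add(Activation('relu'))

            model.add(Dense(1))
            model.add(Activation('sigmoid'))

            model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
            model.fit(X, Y, batch_size=32, epochs=10, validation_split=0.1,
                      callbacks=[tensorboard])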

With all the changes made, it's time to run the code. However, since training numerous models can be time-consuming, I've already run the code and saved the results. Let's proceed to analyze the results using TensorBoard.

We load the TensorBoard logs and observe the different model combinations. The models are organized based on their performance, specifically the validation loss. We focus on the best-performing models and note their configurations.

From the results, it becomes apparent that models with three convolutional layers and zero dense layers consistently perform well. The specific number of nodes per layer seems less significant. However, it's worth noting that larger dense layers, such as 512 or 256 nodes, might yield even better results. To verify this, you can test different dense layer sizes.

To summarize, we started by exploring various model configurations using TensorBoard. We found that models with three convolutional layers and no dense layers consistently performed well. We also identified that the number of nodes per layer could be further optimized. By testing different dense layer sizes, we can potentially improve the model's accuracy even further.

Keep in mind that this is just a starting point, and there are many other aspects you can tweak to optimize your models. TensorBoard provides a valuable tool for visualizing and analyzing these model variations, helping you make informed decisions for model improvement.

Optimizing with TensorBoard - Deep Learning w/ Python, TensorFlow & Keras p.5
  • 2018.08.25
  • www.youtube.com
Welcome to part 5 of the Deep learning with Python, TensorFlow and Keras tutorial series. In the previous tutorial, we introduced TensorBoard, which is an ap...
 

How to use your trained model - Deep Learning basics with Python, TensorFlow and Keras p.6



How to use your trained model - Deep Learning basics with Python, TensorFlow and Keras p.6

Hello everyone and welcome to Part 6 of the Deep Learning in Python with TensorFlow and Keras tutorial series!

In this video, we'll be discussing how to use our trained model to make predictions on new images. Many people have been asking about this, as they have successfully trained and tested their datasets but are unsure how to use the model for predicting on external images. So, let's dive into it!

First, we need to import the necessary libraries. We'll import cv2 for image processing and tensorflow as TF for working with our model. We'll also need the categories list, which contains the class labels "dog" and "cat" that we used during training.

Next, we'll define a function called prepare which takes a file path as a parameter. This function will handle the preprocessing steps required for the input image. We'll resize the image to a specific size and convert it to grayscale. The image will then be returned as a reshaped numpy array.

After that, we'll load our trained model using the tf.keras.models.load_model() function. Previously, we saved our model as a "64 by 3 CNN model," so we'll load it using the same name.

Now, we're ready to make predictions. We'll define a variable called prediction and assign it the result of calling model.predict() on our prepared image. It's important to note that the predict() method expects a list as input, even if we are predicting on a single image. So, we need to pass the prepared image as a list.

Once we have the prediction result, we can print it out. However, the prediction is currently in the form of a nested list. To make it more readable, we can convert the prediction value to an integer and use it as an index to retrieve the corresponding class label from the categories list.

Finally, we can print the predicted class label, which represents whether the image is classified as a dog or a cat.
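Putting those steps together, a sketch might look like the following; the saved-model file name, image size, and test file name are assumptions to adapt to your own training run:

import cv2
import tensorflow as tf

CATEGORIES = ["Dog", "Cat"]   # same order as used during training
IMG_SIZE = 50                 # must match the size the model was trained on

def prepare(filepath):
    # Read the image in grayscale, resize it, and reshape it for the model
    img = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    return img.reshape(-1, IMG_SIZE, IMG_SIZE, 1) / 255.0   # same scaling as in training

model = tf.keras.models.load_model("64x3-CNN.model")        # saved model name assumed

prediction = model.predict([prepare("dog.jpg")])            # predict expects a list/batch
print(CATEGORIES[int(round(float(prediction[0][0])))])      # 0 -> Dog, 1 -> Cat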

In this tutorial, we have used two external images for testing our model: one of a dog with a cone of shame and another of an unknown creature. These images were not part of our training dataset, ensuring that we are making predictions on unseen data.

To try it out with your own dog and cat images, follow the steps outlined in the code. Keep in mind that the accuracy may vary, but on average, it should be around 80%.

That's all for now! I want to thank our recent sponsors: Michael, Nick, Rodrigo, and Papasan E. Your support is greatly appreciated. If you have any questions, comments, or suggestions for future tutorials, please leave them below. I'm also open to ideas for using recurrent neural networks, so if you have a simple dataset in mind, let me know.

I'll see you in the next tutorial, where we'll explore recurrent neural networks. Until then, happy coding!

How to use your trained model - Deep Learning basics with Python, TensorFlow and Keras p.6
  • 2018.08.31
  • www.youtube.com
In this part, we're going to cover how to actually use your model. We will us our cats vs dogs neural network that we've been perfecting.Text tutorial and sa...
 

Recurrent Neural Networks (RNN) - Deep Learning w/ Python, TensorFlow & Keras p.7


Recurrent Neural Networks (RNN) - Deep Learning w/ Python, TensorFlow & Keras p.7

Hi everyone, and welcome to Part 7 of the Deep Learning with Python, TensorFlow, and Keras tutorial series. In this part, we will be focusing on the recurrent neural network (RNN). The purpose of an RNN is to capture the significance and importance of the order of data. This is particularly relevant in time series data, where data is organized temporally, and in natural language processing, where the order of words in a sentence carries meaning.

To illustrate the concept, let's consider the example of a sentence: "Some people made a neural network." When this sentence is processed by a deep neural network, which typically tokenizes the data by splitting it into individual words, the network may fail to capture the correct meaning. For instance, the sentence "A neural network made some people" has a completely different meaning. This emphasizes the importance of the order of words in determining the meaning of a sentence.

Now, let's delve into the workings of a recurrent neural network. The basic building block of an RNN is the recurrent cell, which is often implemented using a long short-term memory (LSTM) cell. Although other options like the gated recurrent unit (GRU) exist, the LSTM cell is commonly used. In an RNN, each cell takes sequential data as input and outputs to the next layer or the next cell in the recurrent layer.

The output from a cell can be directed in different ways. It can go to the next layer or the next cell in a unidirectional or bidirectional manner. In this tutorial, we will focus on a basic unidirectional RNN. To illustrate this, imagine a green box representing a recurrent cell. Data from the previous cell enters the current cell, which performs operations such as forgetting irrelevant information from the previous node, incorporating new input data, and deciding what information to output to the next layer or node.

To better visualize this process, let's consider a specific cell in the layer. The green box represents the current cell. Data flows in from the previous cell, wraps around, and enters the LSTM cell. Within the cell, there are operations for forgetting information from the previous node, incorporating new input data, and determining the output to be passed to the next layer or node. These operations collectively enable the LSTM cell to retain important information and pass it along to subsequent layers or nodes.

Implementing an RNN can be complex, especially when dealing with scalar values. If you are interested in a detailed explanation of how LSTM cells work, I recommend checking out a comprehensive guide that explains them in-depth. I have included a link to this guide in the text version of the tutorial for your reference.

Now, let's move on to building a basic recurrent neural network. In this tutorial, we will start with a simple example using the MNIST dataset. In the next tutorial, we will work with more realistic time series data, specifically focusing on cryptocurrency prices.

To begin, let's import the necessary libraries. We will import TensorFlow as tf, the Sequential model from tensorflow.keras.models, the Dense layer from tensorflow.keras.layers, as well as Dropout and LSTM cells. Note that if you are using the GPU version of TensorFlow, there is also an optimized LSTM implementation called the CuDNNLSTM cell. However, for this tutorial, we will stick to the regular LSTM cell. If you are using the CPU version of TensorFlow, the computation might take a significant amount of time.

Next, we need to load the dataset. For this example, we will use the MNIST dataset. We can easily load it using the tf.keras.datasets.mnist.load_data() function, which returns the training and testing data. Let's unpack the data into variables: X_train, Y_train, X_test, and Y_test.

First, let's normalize the input data by dividing each pixel value by 255. This will scale the pixel values to a range between 0 and 1, which is suitable for neural network training. We can achieve this by dividing both the training and testing data by 255.

X_train = X_train / 255.0
X_test = X_test / 255.0

Next, we need to convert the target labels into one-hot encoded vectors. In the MNIST dataset, the labels are integers ranging from 0 to 9, representing the digits. One-hot encoding converts each label into a binary vector of length 10, where the index corresponding to the digit is set to 1 and all other indices are set to 0. We can use the to_categorical function from tensorflow.keras.utils to perform one-hot encoding.

from tensorflow.keras.utils import to_categorical

Y_train = to_categorical(Y_train, num_classes=10)
Y_test = to_categorical(Y_test, num_classes=10)


Now, let's define the architecture of our recurrent neural network. We will use the Sequential model from tensorflow.keras.models and add layers to it.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(128, input_shape=(X_train.shape[1:]), activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))


In the above code, we define a Sequential model and add two LSTM layers with 128 units each. The input_shape parameter of the first LSTM layer is set to the shape of a single data sample in the training set (X_train.shape[1:]). We also specify the activation function for the LSTM layers as 'relu'. The return_sequences=True argument in the first LSTM layer ensures that it returns the output sequence, which is required for connecting it to the next LSTM layer. We also add Dropout layers with a dropout rate of 0.2 after each LSTM layer to prevent overfitting. Finally, we add a Dense layer with 10 units and the softmax activation function, which will output the probabilities of the 10 classes.

Now, let's compile the model by specifying the loss function, optimizer, and evaluation metric.

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

In this example, we use categorical cross-entropy as the loss function, which is suitable for multi-class classification problems. We use the Adam optimizer, which is a popular choice for training neural networks. We also specify 'accuracy' as the evaluation metric to monitor the model's performance during training.

Finally, let's train the model using the training data and evaluate it on the testing data.

model.fit(X_train, Y_train, batch_size=32, epochs=10, validation_data=(X_test, Y_test))

In the fit function, we provide the training data and labels (X_train and Y_train), specify the batch size and number of epochs for training, and also provide the testing data and labels for validation during training (validation_data=(X_test, Y_test)).

That's it! You have now built a recurrent neural network using LSTM cells for the MNIST dataset. You can experiment with different architectures, hyperparameters, and datasets to further explore the capabilities of RNNs.

Recurrent Neural Networks (RNN) - Deep Learning w/ Python, TensorFlow & Keras p.7
  • 2018.09.07
  • www.youtube.com
In this part we're going to be covering recurrent neural networks. The idea of a recurrent neural network is that sequences and order matters. For many opera...
 

Cryptocurrency-predicting RNN intro - Deep Learning w/ Python, TensorFlow and Keras p.8



Cryptocurrency-predicting RNN intro - Deep Learning w/ Python, TensorFlow and Keras p.8

Hello everyone and welcome to another deep learning with Python tutorial video. In this video and the upcoming ones, we will be discussing how to apply a recurrent neural network (RNN) to a more realistic example of working with sequential data. Specifically, we will be working with a time series dataset that consists of prices and volumes for cryptocurrencies.

Before we dive into the details, I want to clarify that you can apply the same concepts to other types of sequential data, such as stock prices or sensor data. So, even if you're not interested in finance, you can still follow along and understand the concepts.

The goal of this tutorial is to use a recurrent neural network to predict the future price of a cryptocurrency based on its past price and volume. We will focus on four major cryptocurrencies: Bitcoin, Litecoin, Ethereum, and Bitcoin Cash. The idea is to take the last 60 minutes of price and volume data for each of these cryptocurrencies and use it as input to predict the price of Litecoin, for example, three minutes into the future.

This type of prediction problem can also be applied to other domains, such as predicting server failures or website traffic based on time and usage data. The ultimate goal is to either predict a classification (e.g., whether the price will rise or fall) or perform regression (e.g., predicting the actual price or percentage change).

Working with sequential data presents unique challenges. Firstly, we need to preprocess the data and convert it into sequences that the recurrent neural network can handle. Additionally, we need to balance and normalize the data, considering that the prices and volumes of different cryptocurrencies may have different scales. Scaling the data is more complex than in other domains, like image data, where we simply divide by 255.

Furthermore, evaluating the performance of the model using out-of-sample data is a different challenge when working with sequential data. There are several aspects we need to cover, including data preparation, normalization, and evaluation.

To get started, I have provided a dataset for you to download. You can find the download link in the description of the tutorial. Once you extract the downloaded zip file, you will find four files, each corresponding to the price and volume data of one cryptocurrency.

We will use the pandas library in Python to read and manipulate the dataset. If you don't have pandas installed, you can do so by running the command pip install pandas in your terminal or command prompt.

Next, we will read in the dataset using pandas and examine the data. We will focus on the "close" price and volume columns for each cryptocurrency. To merge the data from different files, we will set the "time" column as the index for each dataframe. Then, we will join the dataframes based on their shared index.
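
As a rough sketch of that step (the directory, file names, and column labels below are assumptions for illustration; adjust them to match the files in the downloaded zip, keeping the per-coin column naming used later, e.g. "LTCUSD_close"):

import pandas as pd

main_df = pd.DataFrame()  # will hold the merged data for all four coins

for ratio in ['BTCUSD', 'LTCUSD', 'ETHUSD', 'BCHUSD']:
    # Hypothetical path and column layout for each downloaded file
    df = pd.read_csv(f'crypto_data/{ratio}.csv',
                     names=['time', 'low', 'high', 'open', 'close', 'volume'])
    # Keep only the close price and volume, renamed per coin
    df.rename(columns={'close': f'{ratio}_close', 'volume': f'{ratio}_volume'}, inplace=True)
    df.set_index('time', inplace=True)
    df = df[[f'{ratio}_close', f'{ratio}_volume']]

    # Join the per-coin frames on their shared time index
    main_df = df if len(main_df) == 0 else main_df.join(df)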

Once we have the merged dataframe, we need to define some parameters. These include the sequence length (the number of past periods to consider), the future period (the number of periods into the future to predict), and the ratio to predict (the cryptocurrency we want to predict).

In our case, we will focus on predicting the future price of Litecoin (LTC) based on the last 60 minutes of data, and we will predict three minutes into the future. We will also define a classification rule, where we classify the prediction as a price increase or decrease based on the current and future prices.
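
As a small sketch, these parameters can be defined as simple constants at the top of the script (the exact names are illustrative, matching the variables used in the code below):

sequence_length = 60         # number of past minutes to feed into the network
future_period = 3            # how many minutes into the future to predict
ratio_to_predict = 'LTCUSD'  # the cryptocurrency whose price we predict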

With these initial steps completed, we are now ready to preprocess the data, create sequences, and train the recurrent neural network. We will cover these topics in the upcoming videos, so stay tuned.

If you want to follow along, make sure to download the dataset and set up the required libraries. You can find the complete code and instructions in the text-based version of the tutorial, which is available in the description.

We have a lot to cover, so let's get started. If the future price is higher than the current price, we'll classify it as 1, indicating a price rise. Otherwise, if the future price is lower than the current price, we'll classify it as 0, indicating a price fall. This is a simple rule we're using for classification, but you can experiment with different rules or even use regression to predict the actual price change.

Now, let's create the target column in our main DataFrame. We'll use the shift function from pandas to shift the values of the "LTCUSD_close" column by the future period. This will give us the future prices that we'll compare with the current prices to determine the classification. We'll assign the result to a new column called "target".

main_df['target'] = main_df['LTCUSD_close'].shift(-future_period)

Since we shifted the values negatively, the last few rows of the "target" column will have NaN (not a number) values because there is no future price data available. We'll remove those rows from the DataFrame.

main_df.dropna(inplace=True)
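
At this point the "target" column still holds the raw future price. To obtain the 0/1 label described by the rule above, one straightforward option (a sketch using the column names from this tutorial) is to compare it with the current close price and overwrite the column:

# 1 if the future price is higher than the current price (rise), else 0 (fall)
main_df['target'] = (main_df['target'] > main_df['LTCUSD_close']).astype(int)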

Next, let's create the input sequences. We'll iterate through the DataFrame and create sequences of length sequence_length, consisting of the previous prices and volumes of Bitcoin, Litecoin, Ethereum, and Bitcoin Cash. We'll store these sequences in a list called "sequences".

sequences = []
for i in range(len(main_df) - sequence_length + 1):
    # Each sequence is a 2D array of shape (sequence_length, num_features)
    sequence = main_df.iloc[i:i+sequence_length, :-1].values
    sequences.append(sequence)


In the above code, main_df.iloc[i:i+sequence_length, :-1] selects the rows for the sequence and every column except the last (target) column; the "time" column does not need to be excluded here because it is already the DataFrame index. We keep the selected data as a 2D array of shape (sequence_length, num_features), which is the per-sample shape a recurrent network expects.

Finally, we'll convert the sequences and targets into numpy arrays for easier manipulation and training.

import numpy as np

sequences = np.array(sequences)
# Each sequence ends at row i + sequence_length - 1, so align the targets with those rows
targets = np.array(main_df['target'].iloc[sequence_length - 1:])


Now we have our input sequences and corresponding targets ready for training a recurrent neural network. We can proceed to build and train the model using libraries such as TensorFlow or PyTorch.

Please note that the code provided here is a partial implementation and focuses on the data preprocessing steps. You will need to further develop the model architecture, train the model, and evaluate its performance. Additionally, you might need to adjust the hyperparameters and experiment with different techniques to improve the model's accuracy.

Remember to import the necessary libraries, handle missing data, preprocess the features (normalization, scaling, etc.), and split the data into training and testing sets before training the model.

I hope this helps you understand the process of working with sequential data and applying recurrent neural networks to predict future prices. Good luck with your deep learning project!

Cryptocurrency-predicting RNN intro - Deep Learning w/ Python, TensorFlow and Keras p.8
Cryptocurrency-predicting RNN intro - Deep Learning w/ Python, TensorFlow and Keras p.8
  • 2018.09.15
  • www.youtube.com
Welcome to part 8 of the Deep Learning with Python, Keras, and Tensorflow series. In this tutorial, we're going to work on using a recurrent neural network t...
 

Normalizing and creating sequences Crypto RNN - Deep Learning w/ Python, TensorFlow and Keras p.9


Normalizing and creating sequences Crypto RNN - Deep Learning w/ Python, TensorFlow and Keras p.9

Hello everyone, and welcome back to another episode of the Deep Learning with Python, TensorFlow, and Keras tutorial series. In this video, we will continue working on our mini-project of implementing a recurrent neural network (RNN) to predict the future price movements of a cryptocurrency. We will be using sequences of the currency's prices and volumes, along with the prices and volumes of three other cryptocurrencies.

So far, we have obtained the data, merged it, and created the targets. Now, let's move on to the next steps. We need to create sequences from the data and perform tasks such as balancing, normalization, and scaling. However, before diving into those tasks, it is crucial to address the issue of out-of-sample testing.

When dealing with temporal and time series data, shuffling and randomly selecting a portion as the out-of-sample data can lead to a biased model. In our case, with sequences of 60 minutes and a prediction window of 3 minutes, randomly selecting the out-of-sample data could result in similar examples being present in both the in-sample and out-of-sample sets. This would make it easier for the model to overfit and perform poorly on unseen data.

To tackle this, we need to carefully select the out-of-sample data. For time series data, it is recommended to choose a chunk of data from the future as the out-of-sample set. In our case, we will take the last 5% of the historical data as our out-of-sample data. This is equivalent to having built the model at a point 5% back in time and then forward-testing it on the data that followed.

Now, let's implement this separation of out-of-sample data. We will sort the data based on the timestamp and find the threshold of the last 5% of times. By separating the data in this way, we ensure that the out-of-sample set contains data from the future, preventing data leakage and biased testing. Once separated, we will have the validation data and the training data.
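
A minimal sketch of that separation, following the approach described here (assuming main_df is indexed by timestamp, as set up earlier):

# Sort the timestamps and find the one that marks the start of the last 5%
times = sorted(main_df.index.values)
last_5pct_threshold = times[-int(0.05 * len(times))]

# Everything at or after the threshold becomes out-of-sample (validation) data
validation_main_df = main_df[main_df.index >= last_5pct_threshold]
main_df = main_df[main_df.index < last_5pct_threshold]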

Before proceeding further, it is important to note that we need to preprocess both the validation and training data. We will create sequences, balance the data, normalize it, scale it, and perform other necessary tasks. To streamline this process, let's create a function called preprocess_df that takes a DataFrame as input and performs all these preprocessing steps.

First, we import the preprocessing module from the sklearn library. If you don't have it installed, you can do so by running pip install scikit-learn. Then, we define the preprocess_df function that takes a DataFrame as a parameter.

Within the function, we start by dropping the unnecessary future column from the DataFrame. Next, we iterate over the columns of the DataFrame and apply the percent change transformation. This normalization step helps in handling different magnitudes of prices and volumes across cryptocurrencies.

After normalizing the data, we drop any rows that contain NaN values, as they can cause issues during training. Then, we use the preprocessing.scale function to standardize each column to zero mean and unit variance. Alternatively, you can implement your own scaling logic.
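
A sketch of the normalization part of preprocess_df, under the assumptions above (it assumes a helper "future" column exists, as mentioned in the description; if your DataFrame only contains the "target" column, skip that drop; the sequence building continues in the loop shown further below):

from sklearn import preprocessing

def preprocess_df(df):
    # Drop the helper column used to build the target (assumed name: 'future')
    df = df.drop(columns=['future'])

    for col in df.columns:
        if col != 'target':
            # Convert raw prices/volumes into percent change from the previous row
            df[col] = df[col].pct_change()
            df.dropna(inplace=True)
            # Standardize the column (zero mean, unit variance)
            df[col] = preprocessing.scale(df[col].values)

    df.dropna(inplace=True)
    # ... sequence building follows in the loop below ...
    return df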

Now that we have preprocessed the data, let's move on to handling sequential data. We create an empty list called sequential_data and initialize a deque object called prev_days with a maximum length of 60. The deque object allows us to efficiently append new items and automatically remove old ones when it reaches the maximum length.

Next, we iterate over the values of the DataFrame, which now contains the normalized and scaled data. For each row, we append the feature values to the prev_days deque. Once the deque holds a full 60-row window, we start populating the sequential_data list, as shown below.

for i in df.values:
    # Append all feature columns for this row (everything except the target)
    prev_days.append([n for n in i[:-1]])
    # Once the deque holds a full window, store the sequence and its target;
    # the deque's maxlen automatically discards the oldest row on later appends
    if len(prev_days) == prev_days.maxlen:
        sequential_data.append([np.array(prev_days), i[-1]])


In this code snippet, we iterate over the rows of the preprocessed DataFrame df. For each row, we append the feature values (i[:-1], everything except the target) to prev_days. Because prev_days is a deque with a maximum length of 60, the oldest row is discarded automatically once the deque is full. As soon as the deque holds a complete 60-row window, we add the pair [np.array(prev_days), i[-1]] (the sequence and its target) to the sequential_data list.

Now that we have generated the sequences, we can proceed with balancing the data. Balancing is important to prevent any bias towards a specific class in our training data. In this case, our classes are the different price movements (up or down). To balance the data, we'll count the number of occurrences for each class and limit the number of sequences for the majority class to match the minority class.

import random

buys = []
sells = []

for seq, target in sequential_data:
    if target == 0:
        sells.append([seq, target])
    elif target == 1:
        buys.append([seq, target])

# Shuffle each class, then determine the minimum number of sequences
random.shuffle(buys)
random.shuffle(sells)
lower = min(len(buys), len(sells))

# Balance the data by keeping the same number of sequences from each class
buys = buys[:lower]
sells = sells[:lower]

# Concatenate buys and sells to create balanced_data
balanced_data = buys + sells

# Shuffle the balanced_data so buys and sells are interleaved
random.shuffle(balanced_data)


In the above code, we create two separate lists: buys and sells. We iterate over the sequential_data list and append each sequence, along with its target value, to the corresponding list. We shuffle both lists, determine the minimum number of sequences between them, and truncate both to that number so each class contributes equally. Then, we concatenate buys and sells to create balanced_data and shuffle it, so the two classes are interleaved rather than grouped together.

After balancing the data, we can split it into input features (X) and target labels (y) arrays.

X = []
y = []

for seq, target in balanced_data:
    X.append(seq)
    y.append(target)

# Convert X and y to numpy arrays
X = np.array(X)
y = np.array(y)


Now that we have X as the input features and y as the target labels, we can proceed with splitting the data into training and validation sets.

# Split the data into training and validation sets
from sklearn.model_selection import train_test_split

train_x, val_x, train_y, val_y = train_test_split(X, y, test_size=0.2, random_state=42)


In the above code, we use the train_test_split function from scikit-learn to split the data into training and validation sets. We assign 80% of the data to the training set (train_x and train_y) and 20% to the validation set (val_x and val_y). Note that a random split like this is a simplification; for a strictly chronological out-of-sample split, you would separate the last 5% of the data by timestamp before preprocessing, as discussed earlier.

# Normalize the data: remember the original 3D shape, flatten to 2D for the
# scaler, then restore the (samples, timesteps, features) shape afterwards
n_timesteps, n_features = train_x.shape[1], train_x.shape[2]

scaler = MinMaxScaler()
train_x = scaler.fit_transform(train_x.reshape(train_x.shape[0], -1))
val_x = scaler.transform(val_x.reshape(val_x.shape[0], -1))

# Reshape the data back to its original shape
train_x = train_x.reshape(-1, n_timesteps, n_features)
val_x = val_x.reshape(-1, n_timesteps, n_features)


In the code above, we use the MinMaxScaler from scikit-learn to normalize the input features (train_x and val_x). The fit_transform method is used on the training set to fit the scaler and normalize the data. Then, we use the transform method on the validation set so the same scaling, fitted on the training data, is applied. Because the scaler expects 2D input, the data is flattened before scaling and reshaped back to its original (samples, timesteps, features) shape afterwards.

With the data prepared, we can now build and train the LSTM model:

model = Sequential()
model.add(LSTM(units=128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(LSTM(units=128, return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(LSTM(units=128))
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(Dense(units=32, activation='relu'))
model.add(Dropout(0.2))

model.add(Dense(units=2, activation='softmax'))

# Define the optimizer and compile the model
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

# Define early stopping
early_stopping = tf.keras.callbacks.EarlyStopping(patience=3)

# Train the model
history = model.fit(train_x, train_y, validation_data=(val_x, val_y), epochs=20, callbacks=[early_stopping])


In the code above, we define a sequential model using Sequential(). We add three LSTM layers with 128 units each, followed by dropout layers with a rate of 0.2 to prevent overfitting. Batch normalization is applied after each LSTM layer. We then add a dense layer with 32 units and a ReLU activation function, followed by another dropout layer. Finally, we add a dense layer with 2 units (representing the two classes: buy and sell) and a softmax activation function.

We define the optimizer as Adam with a learning rate of 0.001 and compile the model using sparse categorical cross-entropy as the loss function and accuracy as the metric.

Early stopping is defined using the EarlyStopping callback to monitor the validation loss and stop training if it doesn't improve after 3 epochs.

The model is trained using the fit function, passing the training data (train_x and train_y), validation data (val_x and val_y), and the defined callbacks. The training is performed for 20 epochs.

You can adjust the model architecture, hyperparameters, and training configuration based on your specific requirements.

# Evaluate the model
# Normalize the test data with the scaler fitted on the training data,
# then restore the (samples, timesteps, features) shape
test_x = scaler.transform(test_x.reshape(test_x.shape[0], -1))
test_x = test_x.reshape(-1, n_timesteps, n_features)

loss, accuracy = model.evaluate(test_x, test_y)
print(f'Test Loss: {loss:.4f}')
print(f'Test Accuracy: {accuracy*100:.2f}%')


In the code above, we evaluate the trained model using the test data. First, we normalize the test input data using the transform method of the scaler and reshape it to match the desired shape.

Then, we use the evaluate method of the model to compute the loss and accuracy on the test data. The evaluate method takes the test input data (test_x) and the corresponding ground truth labels (test_y). The computed loss and accuracy are printed to the console.

Remember to import the necessary modules at the beginning of your script:

import random
from collections import deque

import numpy as np
import tensorflow as tf
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, BatchNormalization, Dense


Additionally, ensure that you have the necessary datasets (train_x, train_y, val_x, val_y, test_x, test_y) prepared before executing the code.

This code will allow you to train an LSTM model for your buy/sell classification task, normalize the data, and evaluate the model's performance on the test set. Feel free to make any adjustments or modifications as per your specific requirements.

Normalizing and creating sequences Crypto RNN - Deep Learning w/ Python, TensorFlow and Keras p.9
Normalizing and creating sequences Crypto RNN - Deep Learning w/ Python, TensorFlow and Keras p.9
  • 2018.09.16
  • www.youtube.com
Leading up to this tutorial, we've learned about recurrent neural networks, deployed one on a simpler dataset, and now we are working on doing it with a more...
Reason: