4.3 NumPy Array Math and Universal Functions (L04: Scientific Computing in Python)

After investing a considerable amount of time in creating arrays and indexing individual values in an array, let's move on to a more intriguing topic: NumPy array math and universal functions.

Universal functions, often abbreviated as ufuncs, are a powerful concept in NumPy. A ufunc operates on arrays element by element, which allows for more efficient and convenient work with NumPy arrays. It introduces a concept called vectorization.

Vectorization involves performing a mathematical or arithmetic operation on a sequence of objects, such as an array. Instead of executing the operation individually on each element of the array, vectorization allows us to perform the operation in parallel, taking advantage of the lack of dependencies between the elements.

For example, let's consider the task of adding a number to every element in an array. With a Python for loop, we would iterate over each element and call an addition function. However, with vectorization, we can perform the addition on the entire array simultaneously, without the need for a loop. This significantly improves efficiency.

In NumPy, vectorization is achieved using universal functions (ufuncs). There are more than 60 ufuncs implemented in NumPy, each serving a specific purpose. It's recommended to refer to the official documentation for a complete list of available ufuncs.

To illustrate the concept, let's focus on element-wise addition, a common operation. Suppose we have a two-dimensional array implemented as a list of lists in Python. If we want to add 1 to each element, we would typically use nested loops or list comprehensions. However, these approaches can be inefficient, especially for large arrays.

In NumPy, we can use the ufunc np.add to add the number 1 to the entire array in a vectorized manner. This eliminates the need for explicit loops and significantly improves performance.

It's worth mentioning that NumPy leverages operator overloading, which allows for intuitive usage of ufuncs. For instance, using the "+" operator between an array and a number automatically invokes the np.add ufunc.

Another useful ufunc is np.square, which squares each element in an array. Ufuncs can be unary (operating on a single value) or binary (taking two arguments). The official NumPy documentation provides more details on the available ufuncs.
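To make this concrete, here is a minimal sketch (the array values are just for illustration):

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

print(np.add(a, 1))    # element-wise addition: [[2 3 4], [5 6 7]]
print(a + 1)           # the + operator invokes np.add under the hood
print(np.square(a))    # unary ufunc: [[ 1  4  9], [16 25 36]]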

Moving on to a more interesting case, let's explore the use of ufuncs in conjunction with the reduce method. The reduce operation applies an operation along a specified axis, reducing multiple values into a single value. For example, we can compute column sums by using np.add with the reduce method.

In this scenario, we roll over the specified axis (axis 0 in this case) and combine the elements using the specified operation. The reduce operation is commonly associated with concepts like MapReduce and Hadoop, where computations are distributed across multiple nodes and then combined to produce the final result.

While this may seem overwhelming, understanding these concepts allows for more efficient and effective programming with NumPy. By leveraging ufuncs and vectorization, we can perform complex operations on arrays with ease and optimize our code for improved performance.

Remember to refer to the official NumPy documentation for a comprehensive list of available ufuncs, as well as examples and usage guidelines. Exploring the possibilities of ufuncs will expand your toolkit and help you tackle various computational tasks in future projects.

So, in NumPy, every binary ufunc has a reduce method, which allows us to perform reduction operations along a specified axis of an array. The reduction operation combines multiple values into a single value. By default, the reduction is applied along the first axis (axis 0) of the array.

Let's take an example to understand this concept better. Consider the following array:

array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

If we want to compute the column sums, we can use the reduce method. This operation will roll over the first axis (axis 0) and combine the values in each column. So, the result will be a 1-dimensional array containing the sums of each column.

To achieve this, we can use the np.add ufunc, which performs element-wise addition. Since reduce is a method of the ufunc itself, we call np.add.reduce and specify the axis along which the values should be added.

Here's what the code looks like:

import numpy as np

array = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

column_sums = np.add.reduce(array, axis=0)
print(column_sums)

The output will be:

[12 15 18]

In this example, the reduce method rolls over the rows of the array (axis 0) and adds the values in each column together. It combines the first column (1 + 4 + 7), the second column (2 + 5 + 8), and the third column (3 + 6 + 9) into a single array representing the column sums.

This approach is more efficient than manually iterating over the columns and adding the values one by one. The vectorized operation provided by NumPy allows us to perform the computation in parallel, taking advantage of optimized underlying algorithms.

Keep in mind that reduce can be used with other ufuncs besides np.add, depending on the operation you want to perform (np.multiply.reduce, for example, computes products along an axis). The concept of reduction is powerful and can be applied to many different scenarios.


4.4 NumPy Broadcasting (L04: Scientific Computing in Python)

NumPy offers a fascinating feature known as "broadcasting," which introduces implicit dimensions and enables us to perform operations that would typically be impossible within the confines of strict linear algebra. This concept of broadcasting allows for more flexibility and convenience when working with arrays.

By leveraging broadcasting, NumPy can automatically align arrays with different shapes, essentially expanding them to match and perform element-wise operations. This implicit dimension creation allows us to seamlessly execute operations on arrays of varying sizes, resulting in concise and efficient code.

In the context of linear algebra, where strict adherence to mathematical rules governs operations, broadcasting provides a powerful tool to simplify complex computations. It allows us to perform calculations on arrays with disparate shapes, eliminating the need for manual reshaping or looping through elements.

Thanks to broadcasting, we can effortlessly apply operations on arrays with implicit dimensions, achieving results that might otherwise require extensive manual manipulation. This capability expands the scope of what we can accomplish with NumPy, making it a versatile and indispensable library for scientific computing and data analysis.
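As a short sketch of what this looks like in practice (the arrays here are my own illustrative examples):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # shape (2, 3)
b = np.array([10, 20, 30])      # shape (3,)

# b is implicitly expanded ("broadcast") along a new first axis
# so that it matches A's shape, without copying the data:
print(A + b)
# [[11 22 33]
#  [14 25 36]]

# A scalar is broadcast to every element:
print(A * 2)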


4.5 NumPy Advanced Indexing -- Memory Views and Copies (L04: Scientific Computing in Python)

In this fifth video, we will delve into the topic of indexing once again. However, unlike the initial video where we covered basic indexing, we will now explore advanced indexing. This segment will introduce concepts such as memory views and creating memory copies, which are crucial practices to avoid unintentional mistakes, such as overwriting array values mistakenly. Understanding this is vital as it helps us prevent bugs and unexpected behavior in NumPy.

Now, let's begin. In the previous section, we discussed one aspect of NumPy arrays called "views." Views are created when we use regular indexing or basic slicing operations. A view does not copy any data; it is simply another way of looking at the same memory that backs the original array. This makes views efficient, but working with them can be risky since we might accidentally modify the original array without realizing it.

To illustrate this, let's consider a simple example. Suppose we have a two-dimensional array with two rows and three columns. For convenience, I will assign the first row to a separate variable called "first_row." Now, here's the crucial point: assigning the first row to a variable creates a view, not a new object. It means that this variable merely points to the original array's location in memory. Consequently, if we modify the values in this variable, we will also modify the corresponding values in the original array.

To demonstrate this, let's increment each element in the "first_row" variable by 99. Executing this operation will not only change the values in the variable but also overwrite the values in the first row of the original array. This behavior serves as a hint that we are working with a view rather than an independent object. Not being aware of this can be dangerous, as it is easy to unintentionally overwrite values in the original array while working with a view.

On the other hand, views can be incredibly useful for memory efficiency since they allow us to avoid unnecessary array copies. However, there are situations where we may want to create a copy of an array explicitly. For this purpose, we can use the copy method, which generates a new array with the same values as the original. In the example provided, I create a copy of a row of the array using copy. By doing this, any modifications made to the copy will not affect the original array.
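A minimal sketch of the view-versus-copy behavior described above (the variable names are my own):

import numpy as np

arr = np.array([[1, 2, 3],
                [4, 5, 6]])

first_row = arr[0]      # basic indexing creates a view, not a copy
first_row += 99         # modifying the view...
print(arr)              # ...also changes the original array:
# [[100 101 102]
#  [  4   5   6]]

second_row = arr[1].copy()   # an explicit copy is a new, independent array
second_row += 99
print(arr)                   # the original array is unchanged this time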

It is important to note that while slicing and integer-based indexing create memory views, there is another type of indexing called "fancy indexing" that produces copies of the array. Fancy indexing refers to using multiple integer indices to select specific elements from an array. This feature is called "fancy" because it is not supported in regular Python lists. However, NumPy allows us to perform this type of indexing, which can be quite powerful.

For instance, in a regular Python list, we cannot simultaneously retrieve the first and third elements. Yet, in NumPy, we can achieve this using fancy indexing. Similarly, we can use fancy indexing to select specific columns from a two-dimensional array. It is worth noting that fancy indexing always results in a copy of the array, not a view.

The distinction between views and copies is related to efficiency considerations in NumPy. A slice can be described as a simple offset and stride into the original block of memory, which is what makes a view possible. With fancy indexing, however, we cannot extract a contiguous chunk of memory; instead, we select arbitrary individual values, which leads to the creation of a new array. This behavior explains why fancy indexing produces copies rather than views.

Another interesting aspect of fancy indexing is that it enables us to rearrange the order of columns in an array. By specifying the desired column indices using fancy indexing, we can shuffle the columns as needed.
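Here is a brief sketch illustrating fancy indexing and column reordering (the arrays are illustrative):

import numpy as np

lst = [10, 20, 30]
arr = np.array(lst)

# lst[[0, 2]] would raise a TypeError for a regular Python list,
# but NumPy's fancy indexing selects the first and third elements:
print(arr[[0, 2]])           # [10 30]

mat = np.array([[1, 2, 3],
                [4, 5, 6]])
print(mat[:, [2, 0]])        # select (and reorder) columns: [[3 1], [6 4]]

sub = arr[[0, 2]]
sub[0] = 999
print(arr)                   # unchanged -- fancy indexing returned a copy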

Boolean masks in NumPy are an efficient and powerful way to filter arrays based on certain conditions. A Boolean mask is simply a NumPy array of Boolean values (True or False) that has the same shape as the original array. By applying the Boolean mask to the original array, we can select the elements that satisfy the given condition and discard the rest.

To create a Boolean mask, we first define a condition that returns a Boolean value for each element in the array. For example, let's say we have an array called arr:

import numpy as np
arr = np.array([1, 2, 3, 4, 5])

We can create a Boolean mask based on a condition, such as selecting only the elements that are greater than 3:

mask = arr > 3

The resulting Boolean mask, mask, will be [False, False, False, True, True]. Each element in the mask corresponds to the same position in the original array, indicating whether the condition is true or false for that element.

To apply the Boolean mask and retrieve the elements that satisfy the condition, we can simply use the mask as an index for the array:

filtered_arr = arr[mask]

The resulting filtered_arr will be [4, 5], which contains only the elements greater than 3 from the original array.

Boolean masks can be combined using logical operators such as & (and), | (or), and ~ (not) to create more complex conditions. For example:

mask = (arr > 2) & (arr < 5)
filtered_arr = arr[mask]

This condition selects elements that are greater than 2 and less than 5, resulting in filtered_arr being [3, 4].

Boolean masks are particularly useful when working with large datasets or performing data filtering and analysis. They allow for efficient and concise operations on arrays without the need for explicit loops or condition checks.

In addition to filtering arrays, Boolean masks can also be used for element assignment. By assigning new values to the selected elements through the Boolean mask, we can modify specific parts of the array based on a condition.

Overall, Boolean masks provide a flexible and efficient way to manipulate and filter NumPy arrays based on specified conditions, making them a valuable tool in data processing and analysis.


4.6 NumPy Random Number Generators (L04: Scientific Computing in Python)

In this video, we will provide a brief overview of random number generators in NumPy. While we won't cover all the different methods for generating random numbers in NumPy, our focus will be on understanding random number generators and their practical utility.

Let's begin with a simple example. We'll start by importing NumPy, which is the library we'll be using for random number generation. NumPy has a random module that contains various functions for drawing random numbers. Although the documentation we'll be referencing is a bit dated, it provides a helpful list of different functions and their descriptions.

One commonly used function is random.rand, which generates random samples from a uniform distribution. By specifying the shape of the desired array (e.g., 2x3), this function will produce a two-dimensional array filled with random numbers from a uniform distribution.

NumPy offers other functions as well, such as random.random, which generates random floats in the half-open interval [0, 1). You can also draw random samples from different distributions, like the standard normal distribution, using the random.randn function.
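As a quick sketch of these sampling functions (the printed numbers will, of course, differ from run to run):

import numpy as np

print(np.random.rand(2, 3))      # 2x3 array, uniform over [0, 1)
print(np.random.random((2, 3)))  # same idea, but takes a shape tuple
print(np.random.randn(2, 3))     # 2x3 array from the standard normal distribution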

Sometimes, we may want to ensure that our code produces the same random results every time it is executed. This is useful for reproducibility, especially when sharing code or comparing different methods. To achieve this, we can set a random seed at the beginning of our code or notebook. The seed is an arbitrary number that ensures the same sequence of random numbers is generated each time.

By setting a random seed, the generated random numbers will remain constant during multiple runs of the code. However, it's important to note that if we draw another random sample, the results will differ because it's still a random process.

Having consistent results can be particularly useful in machine learning applications, such as shuffling data or testing implementations. For example, when splitting a dataset, setting a random seed ensures that the split is the same every time. This allows for accurate comparison and evaluation of different methods.

To manage randomness more granularly, we can use a random state object in NumPy. The random state object has its own random number generator, enabling fine-grained control over where randomness is applied. By creating multiple random state objects, we can have different sources of randomness in our code. This is especially beneficial when we want certain parts of the code to produce consistent results while other parts generate varying random numbers.

While the old RandomState class is still widely used, the NumPy community now recommends the new Generator API (created via np.random.default_rng). This new generator employs a different method for generating random numbers, but for most simple applications, the choice between the two won't make a noticeable difference. What matters most is setting a random seed for reproducibility.
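A minimal sketch contrasting the approaches discussed above (the seed value 123 is arbitrary):

import numpy as np

np.random.seed(123)             # global seed: reproducible across runs
print(np.random.rand(3))        # the same numbers every time the cell is re-run
print(np.random.rand(3))        # a second draw differs -- it is still a random process

rng1 = np.random.RandomState(123)   # older RandomState object, still widely used
print(rng1.rand(3))

rng2 = np.random.default_rng(123)   # newer Generator API recommended by NumPy
print(rng2.random(3))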

It's important to remember that random number generators in code are not truly random but pseudo-random. They use algorithms to produce sequences of numbers that mimic randomness. In our context, the focus is on consistency and reproducibility rather than the specific algorithm used for random number generation.

In conclusion, when working with random number generators in NumPy, the choice of the generator itself is not critical. What is essential is setting a random seed to ensure consistent and reproducible results. This becomes particularly valuable when sharing code, submitting assignments, or comparing different methods.


4.7 Reshaping NumPy Arrays (L04: Scientific Computing in Python)

Finally, we are approaching the conclusion of the NumPy series. With only three videos remaining, we have reached an important topic: reshaping NumPy arrays. Reshaping arrays is crucial when we need to transform our data into the desired shape, such as converting a matrix into a vector or vice versa. I briefly mentioned this concept in the introductory lecture, where I discussed MNIST. To illustrate this process, let's consider a simplified example.

Imagine we have an array with dimensions 28 by 28, representing an image. Normally, each element in the array would correspond to a pixel value. However, for simplicity's sake, let's assume that each element is just a single digit. So we have a 28 by 28 array representing a digit image. However, if we want to use this array as a feature vector for a classifier, we need to reshape it into a single long vector with 784 elements (28 * 28). Each training example will be an image, and each image will have 784 features.

Reshaping an array can be done using the reshape function in NumPy. For instance, we can reshape a vector 1, 2, 3, 4, 5, 6 into a 2 by 3 matrix:

array([[1, 2, 3],
       [4, 5, 6]])

It's important to note that the dimensions specified during reshaping must match the total number of elements in the original array. If the dimensions are incorrect, an error will occur. For example, trying to reshape a 6-element vector into a 3 by 3 matrix would throw an error because there are not enough elements.
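For reference, here is a small sketch of both the working case and the error case:

import numpy as np

vec = np.array([1, 2, 3, 4, 5, 6])

print(vec.reshape(2, 3))    # works: 2 * 3 == 6 elements
# vec.reshape(3, 3)         # would raise a ValueError: 9 elements would be required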

When reshaping an array, a memory view is created rather than a new array. This memory view allows us to manipulate the reshaped array without duplicating the data. To verify this, we can use the np.may_share_memory function, although it may not always provide a 100% accurate result.

The use of -1 as a dimension in reshaping is a convenient feature in NumPy. It acts as a placeholder, allowing the method to determine the appropriate dimension based on the total number of elements. For example, if we have a vector with six elements and reshape it using -1, 2, the -1 will be replaced with 3 since there is only one way to arrange three rows with two columns to obtain six elements. This placeholder concept works with an arbitrary number of dimensions.

Additionally, we can use the reshape function to flatten an array. By specifying a single value as the dimension (e.g., reshape(6)), we can transform the array into a one-dimensional vector. In practice, using -1 is more convenient since it eliminates the need to remember the size. For example, reshape(-1) achieves the same result as reshape(6) for a six-element array.

There are multiple ways to flatten an array in NumPy. The reshape function with -1 creates a memory view, while the flatten function also flattens an array but creates a copy. Another function, ravel, is also used for flattening arrays. Determining the differences between these functions would be a good self-assessment quiz.
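As a starting point for that self-assessment, here is a sketch using np.shares_memory to check which results share data with the original array:

import numpy as np

arr = np.array([[1, 2, 3],
                [4, 5, 6]])

print(np.shares_memory(arr, arr.reshape(-1)))   # True  -- a view
print(np.shares_memory(arr, arr.ravel()))       # True  -- a view when possible
print(np.shares_memory(arr, arr.flatten()))     # False -- always a copy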

Finally, we can concatenate arrays in NumPy, combining them along specified axes. Concatenating along the first axis is similar to appending elements in Python lists. For example, if we have two one-dimensional arrays, concatenating them along that axis appends one after the other; for two-dimensional arrays, concatenating along axis 0 stacks one below the other.
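A brief sketch of concatenation along different axes (the arrays are illustrative):

import numpy as np

a = np.array([[1, 2, 3]])
b = np.array([[4, 5, 6]])

print(np.concatenate((a, b), axis=0))   # stack one below the other: [[1 2 3], [4 5 6]]
print(np.concatenate((a, b), axis=1))   # side by side: [[1 2 3 4 5 6]]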

Reshaping NumPy arrays is essential for manipulating data into the desired shape. Understanding the various methods, placeholders, and concatenation techniques enables us to work effectively with arrays and optimize our code. In the next video, I will discuss NumPy comparison operators and masks, which are powerful tools when combined with reshaping.


4.8 NumPy Comparison Operators and Masks (L04: Scientific Computing in Python)

In NumPy, comparison operators and selection masks offer a lot of flexibility and can be quite enjoyable to work with. In a previous video, we introduced masks and comparison operators, but now let's explore some additional tricks that you can use when working with them.

Let's start with a simple example. Suppose we have a NumPy array [1, 2, 3, 4] for simplicity. We can define a mask to select certain values from the array. This mask will be a Boolean array, meaning it will contain True or False values. We can create the mask by specifying a condition, such as selecting values that are greater than two. The resulting mask array will have the same shape as the original array, with True values indicating the positions where the condition is true, and False values indicating the positions where the condition is false.

In Python, there is a handy relationship between Boolean values and integers: True is equivalent to 1, and False is equivalent to 0. This relationship allows us to perform interesting operations. For example, we can use the if statement to check if a condition is true by simply writing if condition:. We can also use the not operator to check if a condition is false by writing if not condition:. These approaches provide more readable code compared to explicitly comparing the condition with True or False.

Another useful feature is the ability to count the number of elements in an array that match a certain condition. By applying the sum operator to a mask, we can count the number of True values in the mask. For example, if we have a mask that selects values greater than two, we can count the number of such values by calling sum(mask). Similarly, we can count the number of False values by subtracting the sum from the total number of elements in the array.

To count the number of values that do not satisfy a condition, we can utilize the NumPy invert function (or the ~ operator), which flips the Boolean values in the mask. By applying invert to a mask and then calling sum, we count the number of elements for which the original condition was False.
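A short sketch of counting with masks (the array and the condition are illustrative):

import numpy as np

arr = np.array([1, 2, 3, 4])
mask = arr > 2

print(mask.sum())               # 2 -- True counts as 1, False as 0
print(np.invert(mask).sum())    # 2 -- elements NOT matching the condition
print((~mask).sum())            # same thing, using the ~ operator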

Binarizing an array, i.e., converting it to a binary representation, is another common operation. We can achieve this by assigning a specific value to the positions where a condition is true and another value to the positions where the condition is false. However, typing out the entire operation can be tedious. Fortunately, NumPy provides the where function, which simplifies this process. The where function takes a condition, and for the positions where the condition is true, it assigns the first value, and for the positions where the condition is false, it assigns the second value. Using where, we can easily binarize an array with just one line of code.
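For example, binarizing an array with np.where might look like this (the threshold and replacement values are illustrative):

import numpy as np

arr = np.array([1, 2, 3, 4])
print(np.where(arr > 2, 1, 0))   # [0 0 1 1] -- 1 where the condition holds, 0 elsewhere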

In addition to comparison operators, NumPy offers element-wise logical operators such as & (and), | (or), ^ (xor), and ~ (not). These operators can be combined with masks to create more complex conditions. For example, we can select values that are greater than three or smaller than two by combining the two conditions with the | operator. By combining multiple conditions using logical operators, we can create intricate selection masks that suit our needs.
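Here is a small sketch of combining conditions; note the parentheses, which are required because & and | bind more tightly than the comparison operators:

import numpy as np

arr = np.array([1, 2, 3, 4])
mask = (arr > 3) | (arr < 2)
print(arr[mask])                 # [1 4]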

These Boolean masks, logical operators, and comparison operators in NumPy are incredibly useful when working with data sets and implementing decision tree rules. We will explore these concepts further in upcoming videos. In the next video, we will delve into basic linear algebra concepts in NumPy. Stay tuned!


4.9 NumPy Linear Algebra Basics (L04: Scientific Computing in Python)

In this video, I would like to delve into some fundamental concepts of linear algebra, specifically in the context of NumPy. Although we won't extensively utilize linear algebra in this course, it is crucial to grasp basic operations like vector dot products and matrix multiplication. As I mentioned earlier, employing linear algebra notation enables us to write code that is more efficient and concise.

Let's begin by considering a one-dimensional array as a row vector: a vector that consists of a single row with multiple elements. A column vector, on the other hand, can be created by reshaping the row vector to have one column and multiple rows. Notably, reshape accepts the new dimensions directly, so an extra pair of square brackets around them is unnecessary.

Instead of reshaping the vector explicitly, we can achieve the same result by adding a new axis using NumPy's newaxis object. By adding two new axes, we can even create a 3D tensor. Another approach is to use the None keyword, which serves the same purpose as newaxis (np.newaxis is, in fact, an alias for None). These three methods, namely reshaping, newaxis, and None, all achieve the goal of adding an additional axis when necessary.
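A quick sketch of these three equivalent ways of adding an axis:

import numpy as np

row = np.array([1, 2, 3])            # shape (3,)

col1 = row.reshape(-1, 1)            # shape (3, 1) via reshape
col2 = row[:, np.newaxis]            # shape (3, 1) via newaxis
col3 = row[:, None]                  # shape (3, 1) -- None is an alias for newaxis

print(col1.shape, col2.shape, col3.shape)
print(row[np.newaxis, :, np.newaxis].shape)   # (1, 3, 1) -- two new axes give a 3D tensor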

Moving on, we encounter basic linear algebra notation for matrix multiplication. In linear algebra, matrix multiplication is equivalent to computing multiple dot products. For instance, the dot product of the vectors [1, 2, 3] and [1, 2, 3] is 14, and the dot product of [4, 5, 6] and [1, 2, 3] is 32. In NumPy, we can perform matrix multiplication using the matmul function. Alternatively, the @ operator can be used for convenience. In strict linear algebra notation, matrix multiplication is defined between matrices, so to multiply a matrix by a vector we treat the vector as a matrix with a single column, for example a 3x1 matrix. NumPy is more lenient here and also accepts a one-dimensional array directly, offering more flexibility than strict linear algebra notation.
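The dot-product example above, expressed as a sketch in code:

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
x = np.array([1, 2, 3])

print(np.matmul(A, x))        # [14 32] -- one dot product per row of A
print(A @ x)                  # the @ operator is equivalent
print(A @ x.reshape(-1, 1))   # treating x as a 3x1 matrix yields a 2x1 result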

Moreover, NumPy provides the dot function for matrix multiplication, which is widely recommended due to its efficient implementation on most machines. This function allows us to write code more conveniently, especially when dealing with row vectors. It serves as a shortcut or operator overloading for matrix multiplication in NumPy. It's worth noting that the dot function can handle various combinations of matrices and vectors, performing either dot products or matrix multiplication based on the input shapes.

Regarding performance, both the matmul and dot functions have similar speed. The choice between them might depend on the specific machine. Nonetheless, the dot function is generally favored in practice. In addition, the transpose operation plays a role similar to the transpose operation in linear algebra, effectively flipping the matrix. Instead of using the transpose function explicitly, we can utilize the T attribute for brevity.
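And a short sketch of dot together with the .T shorthand:

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(np.dot(A, A.T))    # A times its transpose: a 2x2 matrix
print(A.T.shape)         # (3, 2) -- .T is shorthand for transpose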

While NumPy includes a matrix type for two-dimensional arrays, it is not commonly used within the NumPy community. Regular multidimensional arrays serve the purpose in most cases. The matrix type is limited to two dimensions and introduces unnecessary complexity. It is advisable to avoid using it unless specifically required.

Lastly, we briefly touch upon SciPy, an impressive library that encompasses a wide range of additional functionality beyond NumPy. This library contains numerous specialized algorithms for scientific computing, such as linear algebra operations, Fourier transforms, interpolation techniques, optimization algorithms, statistical functions, and more. While it is based on NumPy, SciPy serves as an extension, providing specialized tools for various scientific computations. In this course, we will explore specific algorithms within SciPy as the need arises. You need not memorize all the details; I will introduce and explain relevant algorithms as we encounter them.

With this, we conclude our discussion on NumPy and SciPy for scientific computing in Python. In the next video, we will continue our scientific computing journey by exploring matplotlib, a powerful plotting library.


4.10 Matplotlib (L04: Scientific Computing in Python)

Finally, we have reached the end of lecture four, which has been quite lengthy. However, I hope that the concepts discussed about NumPy have been valuable to you. In the future, we will be utilizing NumPy extensively in our homework assignments for implementing machine learning algorithms. Therefore, it is crucial for you to become proficient and familiar with NumPy at this point.

Moving on to the last topic of lecture four, we will explore matplotlib, which is a popular plotting library for Python. Although there are several plotting libraries available nowadays, matplotlib remains the most widely used one. Personally, it is also my favorite plotting library, and its name is inspired by MATLAB. The syntax of matplotlib is quite similar to MATLAB's, which some people appreciate while others do not. I, for instance, disliked using MATLAB during my time in grad school, but I find matplotlib to be a great tool.

Even if you are not a fan of MATLAB, I believe matplotlib is relatively easy to use. Moreover, it integrates smoothly with NumPy, which is an added advantage. So, let's get started with matplotlib. I should mention that personally, I don't memorize all the special ways to accomplish tasks in matplotlib because it is a low-level library. This means that it provides a high level of customization options, but not all of them are intuitive. Hence, I often find myself looking things up. When I need to do something specific, I visit the matplotlib gallery, which showcases various examples. For instance, if I want to create a stem plot, I simply search for it in the gallery, find the example, and adapt it to my data. This approach is usually sufficient for my needs. However, if you prefer more detailed tutorials, you can also explore the matplotlib.org website, which offers explanatory tutorials on different aspects of matplotlib.

To begin with, when working with matplotlib in Jupyter Lab or Jupyter Notebooks, you can use the inline backend to display plots within the notebook itself. This means that the plots will be shown directly in the notebook, avoiding the need for a separate window. While there are alternative ways to achieve this, I personally recommend using the inline approach as it is more reliable across different computers. To activate the inline mode, you can use the following magic command: %matplotlib inline. Additionally, adding a semicolon at the end of a plot statement suppresses the textual output that would otherwise be printed above the plot. However, it is advisable to use plt.show() to display the plots, as the semicolon trick may not work well on certain computers.

Now let's dive into creating some simple plots using matplotlib. For instance, we can start by plotting a sine curve. To do this, we can use the np.linspace function to generate 100 values ranging from zero to ten, and then plot these values against np.sin, which is the sine function. The simplest way to create a plot is by using the plt.plot function, where plt is the abbreviation for matplotlib.pyplot. We can adjust the axis ranges of the plot using the plt.xlim and plt.ylim functions to set the limits for the x-axis and y-axis, respectively. Furthermore, we can add labels to the x-axis and y-axis using the plt.xlabel and plt.ylabel functions. Finally, to display the plot, we can use the plt.show() function or add a semicolon at the end of the plot statements to suppress unwanted output.
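Putting those pieces together, the sine-curve example might look like this (the axis limits are illustrative):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)   # 100 values ranging from zero to ten
plt.plot(x, np.sin(x))
plt.xlim(0, 10)
plt.ylim(-1.5, 1.5)
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.show()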

In addition to a single plot, we can also create multiple plots within the same figure. For example, we can plot both a sine curve and a cosine curve in separate subplots. To achieve this, we can create a figure with two subplots using the plt.subplots function and then plot the respective sine and cosine curves in each subplot. The plt.subplots function returns a figure object and an array of axes objects, which we can use to customize each subplot individually.

Here's an example code snippet that demonstrates the creation of multiple subplots:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
y1 = np.sin(x)
y2 = np.cos(x)

fig, axs = plt.subplots(2, 1)  # Create a figure with 2 subplots arranged vertically

# Plot the sine curve in the first subplot
axs[0].plot(x, y1)
axs[0].set_title('Sine Curve')  # Set a title for the subplot
axs[0].set_xlabel('X')
axs[0].set_ylabel('Y')

# Plot the cosine curve in the second subplot
axs[1].plot(x, y2)
axs[1].set_title('Cosine Curve')
axs[1].set_xlabel('X')
axs[1].set_ylabel('Y')

plt.tight_layout()  # Adjust the spacing between subplots for better readability
plt.show()  # Display the figure

In this example, we use the plt.subplots function to create a figure with 2 subplots arranged vertically (2, 1). The function returns a figure object fig and an array of axes objects axs with dimensions matching the specified subplot layout. We can access each subplot by indexing the axs array.

Inside the subplot-specific code blocks, we use the plot function to plot the respective curves, and then customize each subplot's title, x-axis label, and y-axis label using the set_title, set_xlabel, and set_ylabel functions, respectively.

The tight_layout function is called to adjust the spacing between subplots, ensuring better readability. Finally, we use plt.show() to display the figure containing the subplots.

You can try running this code in your Jupyter Notebook or Jupyter Lab environment to see the resulting figure with the sine and cosine curves displayed in separate subplots.

This is just a basic example of creating subplots, and there are many more customization options available in matplotlib to make your plots more informative and visually appealing. You can explore the matplotlib documentation and gallery for further examples and detailed explanations.


5.1 Reading a Dataset from a Tabular Text File (L05: Machine Learning with Scikit-Learn)

Hello everyone! I hope you all had a great week and had the chance to work through all the NumPy material. This week, we will be focusing on data processing and machine learning with scikit-learn, so it's essential to have a good understanding of NumPy. I believe it's incredibly useful to practice coding and apply the concepts we learn in real-life examples, which is why we will be doing some coding upfront in this lecture. It will benefit us later in the class when we extensively use these tools. Speaking of which, there isn't much else to add for this lecture, except that I've uploaded the first big homework assignment, which will test you on the concepts we covered in earlier lectures, including supervised learning and code examples using NumPy. It's a great opportunity to get hands-on experience with the K-nearest neighbor algorithm and explore NumPy and scikit-learn further.

Now, while you dive into the videos, complete the homework, and take the self-assessment quiz, I want to remind you to have some fun and enjoy yourself. Fall, my favorite season, has just started here in Wisconsin, and I love the colder weather and the beautiful colors of the changing leaves. By the way, I'm really excited because I already went to a pumpkin patch last weekend and got some pumpkins that I can't wait to carve for Halloween. So, let's get started with the lecture so I can get back to my little pumpkins and prepare them for Halloween.

Alright, we have now reached part three of the computational foundations lectures. In this lecture, we will cover several topics, starting with reading in a data set from a tabular text file, such as a CSV file, which is the most common file format for traditional machine learning tasks. We will then discuss basic data handling techniques, including shaping the data for machine learning algorithms and training procedures.

After that, we will dive into machine learning with scikit-learn. But before we do, I want to briefly recap Python classes and object-oriented programming. In earlier exercises, I asked you to prepare yourselves for Python or better understand its concepts. It's important to have a good grasp of object-oriented programming because scikit-learn heavily relies on it. So, understanding object-oriented programming is necessary to comprehend how scikit-learn works.

Moving on, we will discuss preparing training data using the scikit-learn transformer API. We will also cover defining scikit-learn pipelines, which help us chain different operations, such as data set preparation, scaling, normalization, dimensionality reduction, and the classifier itself. By using pipelines, we can create efficient training workflows that connect various aspects of the machine learning process, making things more convenient. This is one of the significant strengths of scikit-learn.

For this lecture, I decided to use slides again. Although Jupyter Lab is a fantastic tool, I find it easier to explain certain concepts by annotating code examples with a pen or pencil. So, in these slides, I have captured screenshots from Jupyter Lab and Jupyter Notebook, which I will annotate during the lecture. However, I have also uploaded the entire code notebook to GitHub, where you can find additional explanations. Consider this document as optional course or lecture notes for your reference.

Let's quickly recap where we are in this course. We began with an introduction to machine learning, covered the basics, and explored how scikit-learn works. Then, we delved into Python, learning about NumPy and scientific computing. Now, we are entering the phase of data processing and machine learning with scikit-learn. In the next lecture, we will return to core machine learning concepts such as decision trees, ensemble methods, and model evaluation. Although this is the last part of the computational foundations lectures, it doesn't mean that it's the end of the course. After completing the computational foundations lectures, we will move on to more advanced topics in machine learning, including deep learning and neural networks.

Now, let's dive into the first topic of this lecture: reading in a data set from a tabular text file. When working with machine learning, it's common to have data stored in tabular formats such as CSV (Comma-Separated Values) files. These files contain rows and columns of data, with each row representing a sample or instance, and each column representing a feature or attribute.

To read in a CSV file in Python, we can use the Pandas library. Pandas provides powerful data manipulation and analysis tools, making it a popular choice for working with tabular data in Python. Let's take a look at an example:

import pandas as pd

# Read the CSV file into a DataFrame
data = pd.read_csv('data.csv')

# Display the first few rows of the DataFrame
print(data.head())

In this example, we first import the pandas library and alias it as pd for convenience. Then, we use the read_csv() function to read the CSV file data.csv into a DataFrame, which is a two-dimensional tabular data structure provided by Pandas. The DataFrame is stored in the variable data.

After reading the data, we can use the head() function to display the first few rows of the DataFrame. This allows us to quickly inspect the data and verify that it was read correctly.

Pandas provides a wide range of functions and methods to manipulate and analyze data. We can perform various operations such as filtering rows, selecting columns, aggregating data, and much more. If you're new to Pandas, I encourage you to explore its documentation and experiment with different operations on your own.

Now that we know how to read in data, let's move on to the next topic: basic data handling techniques. When working with data for machine learning, it's essential to preprocess and prepare the data appropriately. This includes tasks such as handling missing values, encoding categorical variables, scaling numerical features, and splitting the data into training and testing sets.

One common preprocessing step is handling missing values. Missing values are often represented as NaN (Not a Number) or NULL values in the data. These missing values can cause issues when training machine learning models, so we need to handle them appropriately. Pandas provides several functions to handle missing values, such as isna() to check for missing values, fillna() to fill missing values with a specified value, and dropna() to remove rows or columns with missing values.
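As a small sketch of these functions (the example data and the fill value are my own):

import pandas as pd
import numpy as np

df = pd.DataFrame({'age': [25, np.nan, 35], 'income': [50000, 60000, np.nan]})

print(df.isna())          # Boolean DataFrame marking missing entries
print(df.fillna(0))       # replace missing values with a specified value
print(df.dropna())        # drop rows containing missing values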

Encoding categorical variables is another important step. Machine learning models typically work with numerical data, so we need to convert categorical variables into a numerical representation. One common encoding technique is one-hot encoding, where we create binary columns for each category and indicate the presence or absence of a category with a 1 or 0, respectively.

import pandas as pd

# Create a DataFrame with categorical variables
data = pd.DataFrame({'color': ['red', 'blue', 'green', 'red', 'green']})

# Perform one-hot encoding
encoded_data = pd.get_dummies(data)

# Display the encoded data
print(encoded_data)

In this example, we create a DataFrame with a column 'color' containing categorical variables. We then use the get_dummies() function from Pandas to perform one-hot encoding. The resulting encoded data contains binary columns for each unique category in the original 'color' column.

Scaling numerical features is another common preprocessing step. Many machine learning algorithms are sensitive to the scale of the features. If the features have different scales, it can affect the performance of the model. To address this, we can scale the features to a standard range, such as 0 to 1 or -1 to 1. Scikit-learn provides the MinMaxScaler and StandardScaler classes in the sklearn.preprocessing module to perform feature scaling.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Create a DataFrame with numerical features
data = pd.DataFrame({'age': [25, 30, 35, 40], 'income': [50000, 60000, 70000, 80000]})

# Perform feature scaling using MinMaxScaler
scaler = MinMaxScaler()
scaled_data = scaler.fit_transform(data)

# Convert the scaled data back to a DataFrame
scaled_df = pd.DataFrame(scaled_data, columns=data.columns)

# Display the scaled data
print(scaled_df)

In this example, we create a DataFrame with two numerical features: 'age' and 'income'. We then use the MinMaxScaler class from the sklearn.preprocessing module to perform feature scaling. The fit_transform() method scales the data, and the resulting scaled data is stored in the scaled_data variable as a NumPy array. Finally, we convert the scaled data back to a DataFrame and display it.

Lastly, splitting the data into training and testing sets is crucial for evaluating the performance of machine learning models. We typically split the data into two sets: a training set used to train the model and a testing set used to evaluate its performance. Scikit-learn provides the train_test_split() function in the sklearn.model_selection module to split the data into training and testing sets.

import pandas as pd
from sklearn.model_selection import train_test_split

# Read the CSV file into a DataFrame
data = pd.read_csv('data.csv')

# Split the data into features and labels
X = data.drop('label', axis=1)
y = data['label']

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

In this example, we first read the CSV file 'data.csv' into a DataFrame. Then, we split the data into features X and labels y, where X contains all columns except the 'label' column, and y contains only the 'label' column.

Next, we use the train_test_split() function to split the data into training and testing sets. We pass the features X and labels y, specify the desired test size (e.g., 0.2 for a 20% test set), and set a random state for reproducibility.

After splitting the data, we can use the training set (X_train and y_train) to train our machine learning model and evaluate its performance on the testing set (X_test and y_test).

These are some basic data handling techniques in machine learning using the Pandas library in Python. Remember, data preprocessing and preparation are essential steps in the machine learning pipeline, and there are many more techniques and tools available depending on the specific requirements of your project.


5.2 Basic data handling (L05: Machine Learning with Scikit-Learn)

In the previous video, we discussed how to read a tabular text file as a dataset. Specifically, we focused on working with a CSV file and, more specifically, the Iris dataset. We imported the Iris dataset from a CSV file into a Pandas DataFrame.

In this video, we will delve into preparing the data in the appropriate format for machine learning using scikit-learn. We will explore basic data handling techniques using Pandas and NumPy to transform the data into a suitable format for machine learning. But before we proceed, let's briefly recap the concept of Python functions, as it will come in handy when we discuss transforming values in a Pandas DataFrame.

Here we have a simple Python function called "some_func." It takes a single input argument, "x," and converts it into a string. It then concatenates the converted value with the fixed string "hello world." If we provide an integer, such as 123, as the input, it will be converted to a string ("123") and concatenated with "hello world," resulting in the final string. This is a basic overview of how Python functions work, with a colon indicating the function's body and a return statement specifying the output. Although there can be multiple lines of code within the function, the return statement marks the end.
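Reconstructed from that description, the function might look like this (the exact fixed string is illustrative):

def some_func(x):
    # convert the input to a string and concatenate it with a fixed string
    return str(x) + ' hello world'

print(some_func(123))   # '123 hello world'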

Another concept worth mentioning is lambda functions. Lambda functions are a shorthand way of defining small functions without explicitly naming them. They are commonly used when there is a need to save lines of code and write functions quickly. In the context of data transformations in Pandas columns, lambda functions are often used. While lambda functions offer a more concise syntax, they essentially perform the same operations as regular functions. They are especially useful when combined with the apply method on a Pandas DataFrame column.

In the previous lecture, we read the Iris dataset into a Pandas DataFrame from the CSV file. The Iris dataset consists of 150 rows, but we are only displaying the first five rows for brevity. The dataset includes an ID column, which is not essential, followed by the features represented by the design matrix X. We also have the class labels, typically denoted as y. Traditionally, scikit-learn and other libraries did not handle string variables as class labels, so it was common practice to convert them to integers. For example, "Iris setosa" would be converted to the integer 0, "Iris versicolor" to 1, and "Iris virginica" to 2. This conversion was necessary because many algorithms were designed to work with integer class labels rather than string labels.

However, scikit-learn now supports string class labels in most functions, eliminating the need for explicit conversion. Internally, the conversion is handled automatically. Nevertheless, some tools may not handle string data correctly, so it is still recommended to convert class labels to integers. By doing so, you ensure compatibility with various tools and reduce the likelihood of encountering errors.

To illustrate the conversion process, we will use the lambda function in conjunction with the apply method. By applying a lambda function to the species column of the DataFrame, we can convert the string class labels into integer labels. However, it is worth mentioning that using a mapping dictionary is often a better approach. It provides better readability and allows for easier interpretation of the class label transformations. Additionally, if you need to retrieve the original class labels later, you can define a reverse dictionary and use it to map the integer labels back to their original string representations.
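Here is a sketch of both approaches; I'm assuming the class-label column is named 'species' (as in the code further below) and that the labels are spelled as in the common Iris CSV files:

import pandas as pd

df = pd.read_csv('iris.csv')

# lambda + apply, with a lookup dictionary:
mapping = {'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2}
df['species'] = df['species'].apply(lambda x: mapping[x])

# equivalently, and often more readable, with map:
# df['species'] = df['species'].map(mapping)

# a reverse dictionary recovers the original string labels:
reverse_mapping = {v: k for k, v in mapping.items()}
df['species'] = df['species'].map(reverse_mapping)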

To demonstrate the conversion, we reload the dataset to its original state. Then, instead of using apply, we utilize the map function to convert the string labels to integers using the mapping dictionary. We also showcase the use of the values attribute, which accesses the underlying NumPy array.

Working with NumPy arrays can be beneficial for several reasons. NumPy arrays are more memory-efficient compared to Pandas DataFrames, making them ideal for large datasets. Additionally, many machine learning algorithms in scikit-learn expect input data to be in the form of NumPy arrays.

To convert our Pandas DataFrame into NumPy arrays, we can simply access the values attribute of the DataFrame. Let's see an example:

import pandas as pd
import numpy as np

# Reload the Iris dataset
iris_df = pd.read_csv('iris.csv')

# Convert the features (X) into a NumPy array
X = iris_df.drop(['species'], axis=1).values

# Convert the class labels (y) into a NumPy array
y = iris_df['species'].values

In this example, we use the drop method of the DataFrame to remove the 'species' column and obtain the features as a DataFrame. Then, by accessing the values attribute, we convert the features into a NumPy array and assign it to the variable X.

Similarly, we access the 'species' column of the DataFrame using the indexing operator [] and convert it into a NumPy array, assigning it to the variable y.

Now, the X variable contains the feature matrix as a NumPy array, and the y variable contains the class labels as a NumPy array. We can use these arrays as inputs for various machine learning algorithms.

Let's say we want to split our dataset into training and testing sets for model evaluation. Scikit-learn provides a utility function called train_test_split that makes this task easy. Here's an example:

from sklearn.model_selection import train_test_split

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

In this example, we import the train_test_split function from the sklearn.model_selection module. We pass the X and y arrays as arguments to the function, along with the test_size parameter specifying the proportion of the dataset to allocate for testing (in this case, 20%). The random_state parameter ensures reproducibility of the split.

After calling train_test_split, we obtain four arrays: X_train and y_train contain the training data, while X_test and y_test contain the testing data.

Now you can use the X_train and y_train arrays to train your machine learning model and evaluate its performance using the X_test and y_test arrays.

In summary, by converting the Pandas DataFrame into NumPy arrays, we can leverage the advantages of NumPy, such as memory efficiency and compatibility with machine learning algorithms. Additionally, scikit-learn provides convenient functions like train_test_split for splitting the data into training and testing sets.
