
Lecture 4. Eigenvalues and Eigenvectors
This video explains the concept of eigenvalues and eigenvectors and how they describe the action of a linear transformation: an eigenvector is a direction the transformation only stretches or shrinks, by the factor given by its eigenvalue. It also goes on to show how eigenvectors can be used in solving systems of linear equations.
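As a small sketch of the idea (not part of the lecture itself), the eigenvalue equation Ax = λx can be checked numerically; the 2×2 matrix A and the use of NumPy below are illustrative choices, not material from the video.

```python
import numpy as np

# Illustrative 2x2 matrix (not taken from the lecture).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Each eigenpair satisfies A @ v = lam * v.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ v, lam * v))   # True for every eigenpair
```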
Lecture 5. Positive Definite and Semidefinite Matrices
In this video, the speaker summarizes the highlights from the previous lectures in linear algebra, including eigenvalues, determinants, and pivots, all of which provide tests for positive definite matrices. The speaker then explains the relationship between positive definite and indefinite matrices, their connection to eigenvalues and determinants, and how to compute the energy xᵀSx of a vector x for a given matrix S. The speaker also discusses deep learning, neural nets, machine learning, and minimizing an energy. They touch on the concept of a convex function and explain how it is used in deep learning. Finally, the speaker introduces exercises for positive definite and semidefinite matrices and briefly mentions the upcoming topic of singular value decomposition.
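For the tests mentioned above, here is a minimal sketch, assuming NumPy and a small symmetric matrix S chosen only for illustration: all eigenvalues positive, all leading determinants positive, and the energy xᵀSx positive for sampled nonzero vectors x.

```python
import numpy as np

# Small symmetric example matrix (chosen for this sketch, not from the lecture).
S = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Test 1: every eigenvalue is positive.
print(np.linalg.eigvalsh(S) > 0)

# Test 2: every leading determinant (upper-left k x k minor) is positive.
print([np.linalg.det(S[:k, :k]) > 0 for k in range(1, S.shape[0] + 1)])

# Test 3: the energy x^T S x is positive for a sample of nonzero vectors x.
rng = np.random.default_rng(0)
print([float(x @ S @ x) > 0 for x in rng.standard_normal((5, 2))])
```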
Lecture 6. Singular Value Decomposition (SVD)
This video explains the concept of Singular Value Decomposition (SVD), which is used to factor a matrix A into three matrices, A = UΣVᵀ, where the middle one is diagonal and contains the singular values. The SVD clarifies the relationship between A and the factors U, Σ, and V, which ultimately helps in solving equations. The video discusses the importance of orthogonal vectors, eigenvectors, and eigenvalues in the SVD, and emphasizes the orthogonality of the U and V matrices. The video also explains the graphical representation of the SVD process and the polar decomposition of a matrix. Finally, the video discusses the process of extracting the most important part of a big matrix of data using the SVD.
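A brief sketch of the factorization described above, using NumPy and an arbitrary 3×2 example matrix (both are assumptions for the illustration, not details from the video):

```python
import numpy as np

# Arbitrary example matrix.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# A = U @ diag(sigma) @ Vt, with orthonormal columns in U and V.
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

print(sigma)                                     # singular values, largest first
print(np.allclose(A, U @ np.diag(sigma) @ Vt))   # the product rebuilds A
print(np.allclose(U.T @ U, np.eye(2)))           # columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(2)))         # columns of V are orthonormal
```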
Lecture 7. Eckart-Young: The Closest Rank k Matrix to A
In this YouTube video, the lecturer explains the concept of principal component analysis (PCA), which is used for understanding a matrix of data and extracting meaningful information from it. The importance of the largest k singular values of a matrix, which carry the most crucial information, is highlighted, and the Eckart-Young theorem, which states that the first k pieces of the singular value decomposition give the best rank k approximation to A, is introduced. The speaker also discusses different types of norms for vectors and matrices, including the L2, L1, and infinity norms. The importance of the Frobenius norm in the Netflix competition and in MRI scans is highlighted, along with the concept of the closest rank k matrix to A. The speaker also discusses how multiplying by orthogonal matrices preserves these norms and how the Singular Value Decomposition (SVD) relates to PCA. Lastly, the importance of solving a linear system of equations involving the rectangular matrix A and its transpose is discussed, along with the use of the SVD in finding the best ratio of age to height for a given dataset.
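As a hedged illustration of the Eckart-Young idea (the random matrix and the helper `best_rank_k` below are invented for this sketch), the best rank-k approximation keeps only the first k singular values and singular vectors:

```python
import numpy as np

def best_rank_k(A, k):
    """Keep the largest k singular values: the closest rank-k matrix to A
    in the L2 and Frobenius norms (Eckart-Young)."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))      # random example data matrix

A1 = best_rank_k(A, 1)
print(np.linalg.matrix_rank(A1))     # 1
print(np.linalg.norm(A - A1, 'fro')) # error of the best rank-1 approximation
```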
Lecture 8: Norms of Vectors and Matrices
This lecture discusses the concept of norms of vectors and matrices, including the L1 and max norms, and their application in fields such as compressed sensing and signal processing. The lecture also covers the importance of the triangle inequality for norms, the shape of the unit balls of S-norms, and the connection between the L2 norm of vectors and matrices. Additionally, the lecture explores the Frobenius norm and the nuclear norm, whose role in optimizing neural nets is still a conjecture, and emphasizes the importance of teaching and learning alongside students.
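A short sketch of the norms mentioned above, computed with NumPy on example data chosen purely for illustration:

```python
import numpy as np

# Vector norms on an example vector.
v = np.array([3.0, -4.0])
print(np.linalg.norm(v, 1))        # L1 norm: |3| + |-4| = 7
print(np.linalg.norm(v, 2))        # L2 norm: 5
print(np.linalg.norm(v, np.inf))   # max norm: 4

# Matrix norms on an example matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.norm(A, 'fro'))    # Frobenius norm: square root of the sum of squares
print(np.linalg.norm(A, 2))        # L2 (spectral) norm: the largest singular value
print(np.linalg.norm(A, 'nuc'))    # nuclear norm: the sum of the singular values
```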
Lecture 9. Four Ways to Solve Least Squares Problems
In this video, the instructor discusses the concept of least squares and various ways to approach it. He emphasizes the importance of least squares, as it is an essential problem in linear algebra and serves as the glue that holds the entire course together. The video covers the pseudo-inverse of matrices, the SVD of invertible and non-invertible matrices, and different methods to solve least squares problems, including Gauss's approach (the normal equations) and the use of orthogonal columns. The video also discusses the idea of minimizing the squared L2 distance between Ax and the actual measurements b, and how this relates to linear regression and statistics. Additionally, the video provides insight into a project that uses the material learned in the course, focusing on areas like machine learning and deep learning.
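Below is a minimal sketch of three of these approaches on a tiny fitting problem; the data, the NumPy routines, and the straight-line model C + Dt are all assumptions made for the illustration:

```python
import numpy as np

# Example data: fit measurements b taken at times t with a line C + D*t.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])     # columns: [1, t]

# 1) Normal equations: solve A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# 2) Pseudo-inverse (built from the SVD of A).
x_pinv = np.linalg.pinv(A) @ b

# 3) Orthogonal columns via QR: solve R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(x_normal, x_pinv, x_qr)   # all three agree when A has independent columns
```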
Lecture 10: Survey of Difficulties with Ax = b
In this lecture on numerical linear algebra, the difficulties with solving linear equations of the form Ax = b are discussed. These difficulties arise when the matrix A is nearly singular, so that its inverse is unreasonably large, and when the matrix is so large that the system cannot be solved in a feasible time. The lecturer outlines several possibilities for handling the problem, ranging from the easy normal case to the extremely difficult case of underdetermined equations. The use of randomized linear algebra, iterative methods, and the SVD is discussed, along with the importance of finding solutions that work on test data, particularly with deep learning. Additionally, the lecturer emphasizes that the SVD is still the best tool for diagnosing any matrix issues.
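As a small illustration of the nearly singular case (the 2×2 matrix and the perturbation below are invented for this sketch), a large condition number means a tiny change in b can move the solution x a long way:

```python
import numpy as np

# Nearly singular example matrix: its two columns are almost parallel.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

print(np.linalg.cond(A))          # large condition number (about 4e4 here)

x = np.linalg.solve(A, b)
x_perturbed = np.linalg.solve(A, b + np.array([0.0, 0.0001]))
print(x, x_perturbed)             # a tiny change in b changes x by about 1 in each entry
```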
Lecture 11: Minimizing ‖x‖ Subject to Ax = b
In this lecture, the speaker covers a range of topics related to numerical linear algebra. They start by discussing the issues that can arise when solving Ax = b, then move on to the Gram-Schmidt process for finding an orthogonal basis for a space and the modified Gram-Schmidt method, a more stable way to carry out that orthogonalization. The speaker also introduces column exchange, or column pivoting, as part of a more professional Gram-Schmidt algorithm and discusses an improvement on the standard Gram-Schmidt process for orthonormalizing the columns of a matrix A. They also touch upon the idea of the Krylov space for solving Ax = b and the importance of having a good basis for that space. Finally, they mention that they have finished with the problem of minimizing ‖x‖ subject to Ax = b and are moving on to the issue of dealing with very large matrices.
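Here is a minimal sketch of the minimum-norm idea, assuming NumPy and a one-equation, two-unknown example chosen for illustration; the pseudo-inverse returns the solution of Ax = b with the smallest ‖x‖:

```python
import numpy as np

# Underdetermined example: one equation, two unknowns.
A = np.array([[1.0, 2.0]])
b = np.array([5.0])

# The pseudo-inverse picks the solution of Ax = b with the smallest norm ||x||.
x_min = np.linalg.pinv(A) @ b
print(x_min, A @ x_min)                    # [1. 2.] and [5.]

# Adding any null-space vector of A gives another solution, but a longer one.
x_other = x_min + np.array([2.0, -1.0])    # A @ [2, -1] = 0
print(np.linalg.norm(x_min), np.linalg.norm(x_other))
```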
Lecture 12. Computing Eigenvalues and Singular Values
In this video, the QR method for computing eigenvalues and singular values is introduced. The process starts with the desired matrix and factors it into QR, creating an upper triangular matrix R that connects the non-orthogonal basis with the orthogonal basis. The factors are then multiplied in reverse order and the step is repeated; the off-diagonal entries become small, at which point the diagonal entries approximate the eigenvalues. The speaker also discusses a shift strategy that speeds up the convergence of the iteration. The benefits of using MATLAB for symmetric matrices are also highlighted. The video also touches upon the concept of Krylov vectors for solving eigenvalue problems for large matrices.
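A bare-bones sketch of that iteration (unshifted, on a small symmetric matrix invented for the example, so it converges more slowly than a library routine that uses shifts):

```python
import numpy as np

# Small symmetric example matrix (not from the lecture).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

T = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(T)   # factor the current matrix as QR
    T = R @ Q                # multiply in reverse order; eigenvalues are unchanged
print(np.diag(T))            # diagonal entries approximate the eigenvalues
print(np.linalg.eigvalsh(A)) # reference values (ordering may differ)
```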
Lecture 13: Randomized Matrix Multiplication
This video lecture discusses the concept of randomized matrix multiplication, which involves sampling the columns of matrix A and the corresponding rows of matrix B with probabilities that add up to one. The mean of the randomized estimate equals the correct product, but there is still variance. The lecture goes on to discuss the concepts of mean and variance and how to pick the probabilities that minimize the variance. The process involves introducing a Lagrange multiplier, lambda, and setting derivatives to zero to find the best probabilities pⱼ. The focus then shifts to the question of how to weight the probabilities when some columns of the matrix are larger than others. The lecturer suggests two possibilities: weight the probabilities according to the squared column norms, or mix the columns of the matrix and use equal probabilities. Overall, the video provides a detailed explanation of randomized matrix multiplication and of how to choose the probabilities that give the smallest variance.
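The following sketch shows one common version of the sampling scheme, assuming NumPy, random example matrices, and probabilities proportional to the product of column and row norms (for B = Aᵀ this reduces to the norm-squared weighting discussed in the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
B = rng.standard_normal((30, 40))

# Probabilities proportional to ||column j of A|| * ||row j of B||.
p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
p /= p.sum()

s = 15                                    # number of sampled column/row pairs
idx = rng.choice(A.shape[1], size=s, p=p)

# Rescaling each sample by 1/(s * p_j) makes the estimate unbiased (correct mean).
approx = sum(np.outer(A[:, j], B[j, :]) / (s * p[j]) for j in idx)

exact = A @ B
print(np.linalg.norm(exact - approx, 'fro') / np.linalg.norm(exact, 'fro'))
```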