
Lecture 2: Image Formation, Perspective Projection, Time Derivative, Motion Field
In this lecture, perspective projection and its relationship to motion are discussed extensively. The lecturer demonstrates how differentiating the perspective projection equation relates the motion of brightness patterns in the image to motion in the real world. The lecture also covers the focus of expansion, continuous and discrete images, and the importance of having a reference point for texture when estimating an object's velocity in an image. Additionally, the lecture touches on total derivatives along curves and the issue of equation counting and constraints when trying to recover the optical flow vector field.
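For reference, the differentiation the lecturer performs starts from the standard perspective projection equations; a short sketch, with f the focal length and (X, Y, Z) a world point (conventional symbols, not necessarily the lecture's exact notation):

```latex
% Perspective projection of a world point (X, Y, Z) with focal length f:
\[
x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
\]
% Differentiating with respect to time relates image velocities (u, v)
% to world motion (the motion field):
\[
u = \frac{dx}{dt} = f\,\frac{\dot{X}Z - X\dot{Z}}{Z^{2}}, \qquad
v = \frac{dy}{dt} = f\,\frac{\dot{Y}Z - Y\dot{Z}}{Z^{2}}
\]
```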
The speaker covers topics such as the brightness gradient, the motion of an object, the 2D case, and isophotes. One challenge in computing an object's velocity is the aperture problem: the brightness constraint only determines the component of motion along the brightness gradient. It is resolved by weighting contributions from different image regions or by searching for a least-squares solution. The lecture then delves into the different cases of isophotes and emphasizes the importance of computing a meaningful answer as opposed to a noisy one when determining velocity, using the concept of noise gain, which measures how sensitive the result is to small changes in the image measurements.
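The region-weighting remedy for the aperture problem amounts to a least-squares fit over an image patch. A minimal sketch in Python, assuming the brightness derivatives Ex, Ey, Et have already been estimated (array names are illustrative):

```python
import numpy as np

def patch_velocity(Ex, Ey, Et):
    """Least-squares image velocity (u, v) over a patch.

    Minimizes the sum over the patch of (Ex*u + Ey*v + Et)^2, which
    resolves the aperture problem whenever the patch contains brightness
    gradients in more than one direction.
    """
    A = np.column_stack([Ex.ravel(), Ey.ravel()])
    b = -Et.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```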
Lecture 3: Time to Contact, Focus of Expansion, Direct Motion Vision Methods, Noise Gain
In this lecture, the concept of noise gain is emphasized as it relates to machine vision processes, noting that the noise gain, and hence the accuracy, can vary with direction. The lecturer discusses the importance of accurately measuring vectors and understanding noise gain to minimize errors in calculations. The talk covers time to contact, the focus of expansion, and motion fields, with a demonstration of how to compute radial gradients to estimate time to contact. The lecturer also demonstrates how to overcome limitations of frame-by-frame calculations using multi-scale averaging (superpixels), with a live demonstration using a web camera. Overall, the lecture provides useful insights into the complexities of machine vision processes and how to measure various quantities accurately.
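For the special case the radial-gradient demo addresses (translation along the optical axis toward a roughly frontal surface), the time-to-contact estimate reduces to a one-parameter least-squares fit. A sketch under those assumptions, with image coordinates x, y measured from the principal point (names are illustrative, not the lecture's code):

```python
import numpy as np

def time_to_contact(Ex, Ey, Et, x, y):
    """Direct time-to-contact estimate from brightness derivatives.

    Uses the radial gradient G = x*Ex + y*Ey and the constraint
    C*G + Et = 0 with C = 1/TTC, solved in the least-squares sense
    over the whole image.
    """
    G = x * Ex + y * Ey
    C = -np.sum(G * Et) / np.sum(G * G)
    return 1.0 / C  # time to contact, in units of frame intervals
```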
The lecture discusses various aspects of motion vision and their application in determining time to contact, the focus of expansion, and direct motion vision methods. The speaker demonstrates tools for visualizing intermediate results, but also acknowledges their limitations and errors. Additionally, the problem of dealing with arbitrary motions in image processing is tackled, and the importance of neighboring points moving at similar velocities is emphasized. The lecture also delves into the image patterns that affect the success of direct motion vision methods and introduces new variables to define time to contact and the FOE more conveniently. Finally, the process of solving three linear equations in three unknowns to understand how different variables affect motion vision is discussed, along with parallelizing the process to speed up computation.
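The three-equations-in-three-unknowns step can be sketched as follows, assuming the common parameterization u = A + Cx, v = B + Cy for translation toward a planar scene; the FOE is then (-A/C, -B/C) and the time to contact is 1/C (variable names are illustrative):

```python
import numpy as np

def foe_and_ttc(Ex, Ey, Et, x, y):
    """Least-squares (A, B, C) from A*Ex + B*Ey + C*G + Et = 0,
    where G = x*Ex + y*Ey, accumulated over the whole image."""
    G = x * Ex + y * Ey
    M = np.column_stack([Ex.ravel(), Ey.ravel(), G.ravel()])
    rhs = -Et.ravel()
    A, B, C = np.linalg.solve(M.T @ M, M.T @ rhs)  # normal equations
    foe = (-A / C, -B / C)   # focus of expansion in image coordinates
    return foe, 1.0 / C      # time to contact
```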
Lecture 4: Fixed Optical Flow, Optical Mouse, Constant Brightness Assumption, Closed Form Solution
In Lecture 4 of the course, the lecturer discusses fixed optical flow, the optical mouse, the constant brightness assumption, the closed-form solution, and time to contact. The constant brightness assumption leads to the brightness change constraint equation, which relates movement in the image to the brightness gradient and the rate of change of brightness. The lecturer also demonstrates how to model situations where the camera or the surface is tilted, and discusses the benefit of multi-scale averaging in handling large motions. Additionally, the lecture explores the use of time to contact in various autonomous situations and compares different control systems for planetary spacecraft landings. Finally, the lecture touches on the projection of a line and how it can be defined using perspective projection.
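For reference, the constant brightness assumption dE/dt = 0 expands via the chain rule into the brightness change constraint equation described above:

```latex
\[
\frac{dE}{dt} \;=\; u\,E_x + v\,E_y + E_t \;=\; 0,
\qquad u = \frac{dx}{dt},\quad v = \frac{dy}{dt}
\]
```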
The speaker discusses applications of image processing, including how vanishing points can be used to recover the transformation parameters for camera calibration, and how calibration objects of known shape can determine the position of a point in the camera-centric coordinate system. The lecture also covers the advantages and disadvantages of different shapes, such as spheres and cubes, as calibration objects, and how to find the unknown center of projection using a cube and three vectors. The lecture ends by highlighting the importance of taking radial distortion parameters into account for real robotics camera calibration.
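A vanishing point is the common intersection of the images of parallel scene lines. A minimal sketch of that construction using homogeneous coordinates (a standard computation, not code from the lecture):

```python
import numpy as np

def vanishing_point(p1, p2, q1, q2):
    """Intersection of two image lines, each given by two points.

    Lines and their intersection are computed with cross products in
    homogeneous coordinates; the result is the vanishing point when the
    two image lines are projections of parallel lines in the scene.
    """
    h = lambda p: np.array([p[0], p[1], 1.0])
    line1 = np.cross(h(p1), h(p2))
    line2 = np.cross(h(q1), h(q2))
    vp = np.cross(line1, line2)
    return vp[:2] / vp[2]  # back to inhomogeneous image coordinates
```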
Lecture 5: TCC and FOR MontiVision Demos, Vanishing Point, Use of VPs in Camera Calibration
The lecture covers various topics related to camera calibration, including the use of vanishing points in perspective projection, triangulation to find the center of projection and the principal point in image calibration, and the representation of rotation by orthonormal matrices. The lecturer also explains the mathematics of finding the focal length of a camera and how to use vanishing points to determine the orientation of a camera relative to a world coordinate system. Additionally, the TCC and FOR MontiVision demos are discussed, along with the importance of understanding the geometry behind equations when solving problems.
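One standard form of the focal-length computation: the vanishing points of two orthogonal world directions, expressed relative to the principal point, satisfy (v1 - pp) . (v2 - pp) + f^2 = 0. A sketch under that assumption (not necessarily the exact derivation used in the lecture):

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, pp):
    """Focal length from vanishing points v1, v2 of two orthogonal
    world directions and the principal point pp (all in pixels)."""
    v1, v2, pp = map(np.asarray, (v1, v2, pp))
    d = -np.dot(v1 - pp, v2 - pp)
    return np.sqrt(d)  # real only when d > 0
```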
The lecture covers various topics related to computer vision, including the influence of illumination on surface brightness, how matte surfaces can be measured using two different light-source positions, and solving for the albedo and the unit normal vector. The lecture also discusses the vanishing point in camera calibration and a simple method to recover surface orientation from brightness measured under three independent light-source directions. Lastly, the speaker touches on orthographic projection as an alternative to perspective projection and the conditions necessary for using it in surface reconstruction.
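The three-light-source method is photometric stereo in its simplest form: three brightness measurements give three linear equations for the scaled normal. A minimal sketch, assuming a Lambertian surface and known unit light-source directions:

```python
import numpy as np

def photometric_stereo(S, E):
    """Albedo and unit normal from three brightness measurements.

    S: 3x3 matrix whose rows are unit light-source directions.
    E: length-3 vector of measured brightness values.
    For a Lambertian surface, E = albedo * S @ n, so solving the
    linear system yields g = albedo * n.
    """
    g = np.linalg.solve(S, np.asarray(E, dtype=float))
    albedo = np.linalg.norm(g)
    return albedo, g / albedo  # albedo and unit surface normal
```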
Lecture 6: Photometric Stereo, Noise Gain, Error Amplification, Eigenvalues and Eigenvectors Review
Throughout the lecture, the speaker explains the concepts of noise gain, eigenvalues, and eigenvectors when solving systems of linear equations in photometric stereo. The lecture discusses the conditions under which matrices become singular, the relevance of eigenvalues in error analysis, and the importance of linear independence in avoiding singular matrices. The lecture concludes with a discussion of Lambert's Law and surface orientation, and highlights the need to represent surfaces using a unit normal vector or points on a unit sphere. Overall, the lecture provides insight into the mathematical principles underlying photometric stereo and highlights the challenges of accurately recovering the topography of the moon from Earth-based measurements.
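A small numerical illustration of the error-amplification point: the noise gain of solving S g = E is governed by the eigenvalues of S^T S, and nearly coplanar light directions drive the smallest eigenvalue toward zero. The light directions below are made up for the demonstration:

```python
import numpy as np

# Three unit light-source directions (rows); well spread out here.
S = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])

eigvals = np.linalg.eigvalsh(S.T @ S)       # eigenvalues of S^T S
print("eigenvalues of S^T S:", eigvals)
print("condition number:", np.sqrt(eigvals.max() / eigvals.min()))
# As the directions approach a common plane, the smallest eigenvalue
# tends to zero and the condition number (noise gain) blows up.
```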
In Lecture 6, the speaker discusses how to use the unit normal vector and the gradients of a surface to find surface orientation and to plot brightness as a function of surface orientation. They explain how the p-q parameterization maps possible surface orientations onto a plane of slopes, and show how brightness can be plotted at different orientations in that plane. The speaker also shows how to rewrite the dot product of the unit vector toward the light source and the unit normal vector in terms of the gradients, in order to find the curves in pq space where that quantity is constant. The lecture ends with an explanation of how the cone of directions making a fixed angle with the light-source direction cuts gradient space in conic sections of different shapes.
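In the notation of the lecture, with (p, q) the surface gradient, the dot product rewritten in terms of gradients gives the Lambertian reflectance map whose level curves in pq space are the conic sections mentioned above:

```latex
% Unit normal from the surface gradient (p, q) = (dz/dx, dz/dy):
\[
\hat{n} \;=\; \frac{(-p,\,-q,\,1)}{\sqrt{1+p^{2}+q^{2}}}
\]
% Lambertian reflectance map for a source in direction (p_s, q_s):
\[
R(p,q) \;=\; \frac{1 + p\,p_s + q\,q_s}
{\sqrt{1+p^{2}+q^{2}}\;\sqrt{1+p_s^{2}+q_s^{2}}}
\]
```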
Lecture 7: Gradient Space, Reflectance Map, Image Irradiance Equation, Gnomonic Projection
This lecture discusses gradient space, reflectance maps, and image irradiance equations. The lecturer explains how to use a reflectance map to determine surface orientation and brightness for graphics applications, and how to create a numerical mapping from surface orientation to brightness using three pictures taken under different lighting conditions. They also introduce the concept of irradiance and its relationship to intensity and radiance, as well as the importance of using a finite aperture when measuring brightness. Additionally, the lecture touches on the three rules of how light behaves after passing through a lens, the concept of foreshortening, and how the lens focuses rays to determine how much of the light from a patch on the surface is concentrated into the image.
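For the graphics application mentioned above, the reflectance map can be evaluated on a grid of gradients to shade a synthetic surface. A minimal sketch for the Lambertian case (an analytic stand-in for a numerically calibrated map):

```python
import numpy as np

def lambertian_reflectance_map(p, q, ps, qs):
    """Brightness R(p, q) for a Lambertian surface with gradient (p, q),
    lit from gradient-space direction (ps, qs)."""
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1 + p**2 + q**2) * np.sqrt(1 + ps**2 + qs**2)
    return np.clip(num / den, 0.0, None)  # clip self-shadowed points
```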
In this lecture, the speaker explains the equation for determining the total power delivered to a small area in an image, which takes into account solid angles and cosine theta. They relate this equation to the f-stop in cameras and how aperture size controls the amount of light received. The speaker also discusses image irradiance, which is proportional to the radiance of objects in the real world, and how brightness drops off as we go off-axis. They move on to discuss the bi-directional reflectance distribution function, which determines how bright a surface will appear depending on the incident and emitted direction. The lecturer explains that reflectance can be measured using a goniometer and that realistically modeling how an object reflects light is important. They also explain the concept of the Helmholtz reciprocity for the bi-directional reflectance distribution function. The lecture then moves on to discuss applying gradient space to surface material models and reminds students to keep updated on homework information.
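The relationships in this part of the lecture combine into the classic image irradiance result relating image brightness to scene radiance, aperture, and off-axis angle:

```latex
% Image irradiance E from scene radiance L, aperture diameter d,
% focal length f, and off-axis angle alpha:
\[
E \;=\; L\,\frac{\pi}{4}\left(\frac{d}{f}\right)^{2}\cos^{4}\alpha
\]
% d/f is the reciprocal of the f-stop; the cos^4 term is the
% off-axis brightness falloff mentioned above.
```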
Lecture 8: Shading, Special Cases, Lunar Surface, Scanning Electron Microscope, Green's Theorem
In this lecture, the professor covers several topics related to photometry and shading. He explains the relationship between irradiance, intensity, and radiance and how they are measured and related. The lecture also introduces the bi-directional reflectance distribution function (BRDF) to describe how a surface's brightness depends on its orientation and material under given illumination. The lecturer further discusses the properties of an ideal Lambertian surface and its implications for measuring incoming light, and how to avoid confusion when dealing with Helmholtz reciprocity. The lecture also covers the process of converting from a gradient to a unit vector and how it relates to the position of the light source. Finally, the lecture explains how measuring brightness can determine a surface's steepness or slope direction.
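For reference, the BRDF and the reciprocity property discussed here can be written compactly:

```latex
% BRDF: radiance emitted toward (theta_e, phi_e) per unit irradiance
% arriving from (theta_i, phi_i):
\[
f(\theta_i,\phi_i;\,\theta_e,\phi_e) \;=\;
\frac{\delta L(\theta_e,\phi_e)}{\delta E(\theta_i,\phi_i)}
\]
% Helmholtz reciprocity: swapping incident and emitted directions
% leaves the BRDF unchanged:
\[
f(\theta_i,\phi_i;\,\theta_e,\phi_e) \;=\; f(\theta_e,\phi_e;\,\theta_i,\phi_i)
\]
```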
The lecture covers various topics related to optics and computer vision. The professor discusses using shape-from-shading techniques to obtain a profile of an object's surface and thereby determine its shape. He then switches to discussing lenses, justifies the use of orthographic projection, which simplifies some of the problems associated with perspective projection, and talks about removing perspective effects in machine vision by building telecentric lenses. He also demonstrates various tricks to compensate for aberrations due to the variation of glass's refractive index with wavelength.
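The profile idea can be sketched as a one-dimensional integration: if the material's reflectance relation lets brightness be inverted to a slope along the scan direction, heights follow by summation. A hedged Python sketch; the inversion function is an assumption standing in for the actual material model:

```python
import numpy as np

def profile_from_shading(E, dx, slope_from_brightness):
    """Height profile z(x) along one scan line, for surfaces where
    brightness determines the slope dz/dx (as in the lunar-surface
    special case).

    slope_from_brightness: hypothetical callable inverting the
    reflectance relation for the material at hand.
    """
    slopes = slope_from_brightness(np.asarray(E, dtype=float))
    return np.concatenate([[0.0], np.cumsum(slopes) * dx])  # integrate
```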
Lecture 9: Shape from Shading, General Case - From First Order Nonlinear PDE to Five ODEs
This lecture covers shape from shading, a method for interpreting the shapes of objects from variations in image brightness. The lecturer explains the process of scanning electron microscopy, where a secondary electron collector measures the fraction of the incoming electron beam that makes it back out, allowing the surface slope to be estimated. The lecture also discusses the use of contour integrals, moments, and least squares to estimate surface derivatives and to find the best-fitting surface given measurement noise. The speaker derives five ordinary differential equations for the shape from shading problem and also explains the Laplacian operator, which is used in image processing operations.
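The five ODEs referred to above are the characteristic strip equations; in the standard formulation, with R(p, q) the reflectance map and E(x, y) the image brightness:

```latex
% Characteristic strip expansion: five coupled ODEs along the strip
% parameter, for surface point (x, y, z) and gradient (p, q):
\[
\dot{x} = R_p, \quad
\dot{y} = R_q, \quad
\dot{z} = p\,R_p + q\,R_q, \quad
\dot{p} = E_x, \quad
\dot{q} = E_y
\]
```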
In this lecture on "Shape from Shading," the speaker discusses various approaches to solve equations for the least square solution to shape from shading. The lecturer explains different techniques to satisfy the Laplacian condition, adjust pixel values, and reconstruct surfaces using image measurements and slope computations from different points. The lecture covers the topics of initial values, transform of rotating, and inverse transform through minus theta. The lecturer concludes with a discussion of the generalization of these equations for arbitrary reflectance maps and the importance of examining scanning electron microscope images to provide concrete examples of shading interpretation.
Lecture 10: Characteristic Strip Expansion, Shape from Shading, Iterative Solutions
In this lecture, the instructor covers shape from shading using brightness measurements and a model of image formation. This involves the image irradiance equation, which relates brightness to surface orientation, illumination, surface material, and geometry. The instructor explains the method of updating the p and q variables using two systems of equations that feed into each other, tracing out a whole strip using the brightness gradient. The lecture also discusses the challenges of solving first-order nonlinear PDEs, and different methods of stepping from one contour to another as the surface is explored. Finally, the instructor discusses the implementation of characteristic strip expansion and why a strictly sequential approach may not be the best method, recommending parallelization and controlling the step size.
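One step of the mutually feeding update can be sketched as a plain Euler step over the five characteristic equations; the reflectance-map partials R_p, R_q and the brightness gradient E_x, E_y are assumed to be evaluated at the current strip point, and step-size control and parallel strips are omitted:

```python
def step_strip(x, y, z, p, q, R_p, R_q, E_x, E_y, ds):
    """One Euler step of characteristic strip expansion.

    The (x, y, z) equations use the reflectance-map gradient, while
    the (p, q) equations use the image brightness gradient: the two
    systems feed each other, as described above.
    """
    x_new = x + R_p * ds
    y_new = y + R_q * ds
    z_new = z + (p * R_p + q * R_q) * ds
    p_new = p + E_x * ds
    q_new = q + E_y * ds
    return x_new, y_new, z_new, p_new, q_new
```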
In Lecture 10, the professor discusses various methods for solving shape-from-shading problems, including using stationary points on the surface and constructing a small cap shape around them to estimate the local shape. The lecturer also introduces the concept of the occluding boundary, which can provide starting conditions for solutions, and notes recent progress in computing solutions for the three-body problem using sophisticated numerical analysis methods. Additionally, the lecture touches on industrial machine vision methods and the related patents that will be discussed in the following lecture.
Lecture 11: Edge Detection, Subpixel Position, CORDIC, Line Detection (US patent 6408109)
This YouTube video titled "Lecture 11: Edge Detection, Subpixel Position, CORDIC, Line Detection (US 6,408,109)" covers several topics related to edge detection and subpixel location in machine vision systems. The speaker explains the importance of patents in the invention process and how they are used in patent wars. They also discuss various edge detection operators and their advantages and limitations. The video includes detailed explanations of the mathematical formulas used to convert Cartesian coordinates to polar coordinates and determine edge position. The video concludes by discussing the importance of writing broad and narrow claims for patents and the evolution of patent law over time.
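The Cartesian-to-polar conversion discussed in the video is the classic use case for CORDIC, which needs only shifts, adds, and a small table of arctangents. A hedged sketch of the vectoring mode (floating point here for clarity; a hardware version would use fixed point):

```python
import math

def cordic_polar(x, y, iters=16):
    """(x, y) -> (r, theta) by CORDIC vectoring: rotate the vector onto
    the positive x-axis while accumulating the rotation angle."""
    theta = 0.0
    if x < 0:                              # map into the right half-plane
        x, y, theta = -x, -y, math.pi      # (theta may exceed pi)
    for i in range(iters):
        d = 1.0 if y < 0 else -1.0         # rotate toward the x-axis
        x, y = x - d * y / 2**i, y + d * x / 2**i
        theta -= d * math.atan(2.0 ** -i)
    # Each pseudo-rotation scales the vector; divide out the total gain.
    gain = math.prod(math.sqrt(1 + 4.0 ** -i) for i in range(iters))
    return x / gain, theta                 # radius and angle
```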
In Lecture 11, the speaker focuses on different computational molecules for edge detection and derivative estimation, with an emphasis on efficiency. The Sobel and Roberts Cross operators are presented for estimating image gradients and the sum of the squares of their components, with variations in formula and technique discussed. To achieve subpixel accuracy, multiple operators are combined, and techniques such as fitting a parabola or using a triangle model are presented to determine the peak of the gradient-magnitude curve. Additionally, the lecture discusses alternatives to quantization and issues with gradient direction on a square grid. Overall, the lecture stresses the importance of attending to many details to achieve good edge-detection performance.
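The parabola-fitting step for subpixel edge position works on three samples of the gradient magnitude around a local maximum; the peak of the fitted parabola gives the offset. A minimal sketch:

```python
def subpixel_offset(g_minus, g0, g_plus):
    """Subpixel offset of an edge from three gradient-magnitude samples
    (left neighbor, local maximum, right neighbor).

    Fits a parabola through the three points and returns the position
    of its peak relative to the center pixel, in (-0.5, 0.5).
    """
    denom = g_minus - 2.0 * g0 + g_plus
    if denom == 0.0:
        return 0.0                 # flat neighborhood: no refinement
    return 0.5 * (g_minus - g_plus) / denom
```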