Machine Learning and Neural Networks - page 10

 

Geoffrey Hinton and Yann LeCun, 2018 ACM A.M. Turing Award Lecture "The Deep Learning Revolution"



Geoffrey Hinton and Yann LeCun won the 2018 ACM A.M. Turing Award and delivered a lecture on the deep learning revolution.
In the lecture, they discussed how deep learning has revolutionized computer science and how it can be used to benefit various aspects of life. They also talked about the challenges of deep learning and the future of the field.
They noted that while theoretical understanding of deep learning is important, it is still up to humans to make decisions in complex situations. They also discussed the potential for evolutionary computation and other forms of artificial intelligence in autonomous driving.

  • 00:00:00 Geoffrey Hinton and Yann LeCun deliver the 2018 ACM A.M. Turing Award Lecture, talking about the deep learning revolution. They discuss how deep learning has revolutionized computer science, and how attendees can benefit from attending related conferences.

  • 00:05:00 The three recipients of the 2018 ACM A.M. Turing Award are Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. Hinton will give a history lecture on the development of deep learning, while LeCun will discuss the continued progress of deep learning.

  • 00:10:00 In his 2018 ACM A.M. Turing Award Lecture, Geoffrey Hinton discusses the deep learning revolution, which grew out of the biologically inspired approach to artificial intelligence. The learning-based approach turned out to be more efficient and effective than the traditional symbolic paradigm, but it was also much harder to get working.

  • 00:15:00 In his 2018 ACM A.M. Turing Award Lecture, Geoffrey Hinton discussed how neural networks work, explaining that they are a simplified version of the actual neurons in the brain. He also explained how backpropagation is a far more efficient alternative to reinforcement-style trial-and-error weight adjustment, and how it can speed up the training of neural networks by a factor of 10 or more.

  • 00:20:00 Geoffrey Hinton and Yann LeCun delivered a lecture on the deep learning revolution at the ACM Turing Award ceremony. The two discuss backpropagation and stochastic gradient descent, and why for a long time these methods were not successful at large-scale learning.

  • 00:25:00 In this lecture, Geoffrey Hinton and Yann LeCun discuss the deep learning revolution, which involved the development of more efficient neural networks. With the increase in compute power available in recent years, neural networks have become increasingly powerful and are now ubiquitous in various fields of computer vision.

  • 00:30:00 Geoffrey Hinton and Yann LeCun delivered a talk on the history of deep learning and its current state, highlighting the successes and challenges of the field. They also talked about the future of computer vision, highlighting the importance of deep learning in achieving better results.

  • 00:35:00 In his 2018 ACM A.M. Turing Award Lecture, Geoffrey Hinton discusses the deep learning revolution and its importance for artificial intelligence. He notes that while deep learning is very effective at specific tasks, current networks are not the best way to do vision. Hinton suggests that one idea from the brain that has been copied into deep learning is the use of replicated feature detectors, the same weights applied at many positions. He then gives a demonstration, asking a participant to point to the corners of a cube rotated so that its top back left-hand corner is vertically above its front bottom right-hand corner. Hinton explains that while deep learning is effective at finding a set of weights that approximates a desired output, it is not good at preserving the symmetries of objects. He predicts that in the future neural networks will learn to recognize objects using weights that change on different time scales, analogous to how synapses change in the brain.

  • 00:40:00 In his 2018 ACM A.M. Turing Award Lecture, Geoffrey Hinton discusses the deep learning revolution, which he believes is due to the gradual introduction of new timescales into the learning process. He discusses how the memory of past learning is stored in the weights of a neural network, and how this memory can be accessed using fast weights. Hinton also talks about the impact of big data on deep learning, and how recent advances in computer hardware and software have made deep learning more accessible to researchers.

  • 00:45:00 The Geoffrey Hinton and Yann LeCun lecture covered the deep learning revolution and how hierarchical representations help networks learn.

  • 00:50:00 Geoffrey Hinton and Yann LeCun delivered the 2018 ACM A.M. Turing Award lecture discussing deep learning and its potential for revolutionizing various aspects of life. Their work on image segmentation and self-driving cars was among the most notable.

  • 00:55:00 Geoffrey Hinton and Yann LeCun gave a lecture on the deep learning revolution, discussing how humans and animals are able to learn efficiently so quickly. They also discussed how humans and animals learn concepts by observing and predicting the world.
Geoffrey Hinton and Yann LeCun, 2018 ACM A.M. Turing Award Lecture "The Deep Learning Revolution"
  • 2019.06.23
  • www.youtube.com
We are pleased to announce that Geoffrey Hinton and Yann LeCun will deliver the Turing Lecture at FCRC. Hinton's talk, entitled, "The Deep Learning Revoluti...
 

This Canadian Genius Created Modern AI




Geoff Hinton, an AI pioneer, has been working on getting computers to learn like humans for almost 40 years, and he revolutionized the field of Artificial Intelligence. Hinton was inspired by Frank Rosenblatt's perceptron, a neural network that mimics the brain, which was developed in the 1950s. Hinton's determination led to a breakthrough in the field of AI. In the mid-80s, Hinton and his collaborators created a multi-layered neural network, a deep neural network, which started to work in a lot of ways. However, they lacked necessary data and compute power until about 2006, when super-fast chips and massive amounts of data produced on the internet gave Hinton's algorithms a magical boost – computers could identify what was in an image, recognize speech, and translate languages. By 2012, Canada became an AI superpower, and neural nets and machine learning were featured on the front page of the New York Times.

  • 00:00:00 In this section, we learn about Geoff Hinton, who has been working on getting computers to learn like humans do for almost 40 years. This pursuit, which everyone else thought was hopeless, revolutionized the field of artificial intelligence, and companies like Google, Amazon, and Apple believe that it is the future of their businesses. Hinton's inspiration came from Frank Rosenblatt, who developed the perceptron, a neural network that mimics the brain, in the 1950s. Rosenblatt's neural network was limited and didn't work well, but Hinton believed that neural networks could work, since the brain is just a big neural network. His resolve to pursue the idea eventually led to a breakthrough in the field of artificial intelligence.

  • 00:05:00 In this section, the video discusses how in the mid-80s Hinton and his collaborators progressed on making more complicated neural nets that could solve problems that simple ones couldn't. They created a multi-layered neural network, a deep neural network, which started to work in a lot of ways. However, they hit a ceiling as they lacked necessary data and compute power. Through the 90s and into the 2000s, Hinton was one of only a handful of people still pursuing this technology, and he was treated like a pariah. Until about 2006, when the arrival of super-fast chips and massive amounts of data produced on the internet gave Hinton's algorithms a magical boost – computers could identify what was in an image, recognize speech, and translate languages. By 2012, neural nets and machine learning were popping up on the front page of the New York Times, and Canada became an AI superpower.
This Canadian Genius Created Modern AI
  • 2018.06.25
  • www.youtube.com
For nearly 40 years, Geoff Hinton has been trying to get computers to learn like people do, a quest almost everyone thought was crazy or at least hopeless - ...
 

Geoffrey Hinton: The Foundations of Deep Learning





Godfather of artificial intelligence Geoffrey Hinton gives an overview of the foundations of deep learning. In this talk, Hinton breaks down the advances of neural networks, as applied to speech and object recognition, image segmentation and reading or generating natural written language.

Geoffrey Hinton discusses the foundations of deep learning, particularly the backpropagation algorithm and its evolution. Hinton explains how deep learning impacted early handwriting recognition and eventually led to winning the 2012 ImageNet competition. He also emphasizes the superiority of deep learning using vectors of neural activity over the traditional symbolic AI that used the same symbols in input, output, and the middle. The improvements in machine translation systems, image recognition, and their combination for natural reasoning are discussed, along with the potential for deep learning in interpreting medical images. Hinton concludes by highlighting the need for neural networks with parameters comparable to the human brain for achieving true natural language processing.

  • 00:00:00 In this section, Hinton explains backpropagation, the fundamental learning algorithm of deep learning. He contrasts the traditional method of programming a computer step by step with the alternative of giving the computer a neural network plus a learning algorithm and letting it learn from examples. The network is built from artificial neurons with weighted input lines, and it learns by adapting the strengths of those weights. One simple way to adapt them works like evolution: tinker with the weights at random and keep the changes that make the network do better. He concludes by outlining how calculus lets backpropagation compute the same adjustments far more efficiently (a toy comparison of the two approaches appears after this list).

  • 00:05:00 In this section, Geoffrey Hinton describes the struggles deep learning faced in its early days, particularly with the backpropagation algorithm. Many researchers had given up on backpropagation because it did not seem to work well, but a few technical advances made in Toronto, Montreal, and New York, combined with large amounts of labeled data and compute power, dramatically improved it and made it scale. One of the first practical problems deep learning affected was handwriting recognition. A group of students then applied Hinton's methods to speech recognition, which at the time had only a few million training examples, far too few by conventional statistical thinking. Nevertheless, the networks learned to predict which phoneme was being said and to string together plausible utterances, and such speech recognition systems are now used widely in end-to-end systems.

  • 00:10:00 In this section, Geoffrey Hinton discusses how deep learning neural nets won the ImageNet competition in 2012. The system recognized objects in images with almost half the error rate of conventional computer vision systems, which had plateaued at about a 25% error rate. This success made a big impact, as people realized the potential of deep neural nets for image recognition. Hinton also explains how recurrent nets are used to deal with sequences such as speech: the hidden neurons connect to themselves, allowing the network to accumulate information over the sequence, and the whole system is trained with backpropagation (a structural sketch of such a recurrent encoder follows this list). These networks were later used for machine translation, by encoding a sentence in one language into a "thought" vector and then decoding it into a sentence in another language.

  • 00:15:00 In this section, Geoffrey Hinton discusses the problem with symbolic AI and how current deep learning avoids it. The traditional symbolic view assumed that the same kind of symbols used in the input and output were also used in the middle, whereas deep learning uses vectors of neural activity throughout. The encoder network turns the input text into a vector whose features are learned by the network; the decoder network then takes that "thought" vector and produces a sentence in the new language. Interestingly, such systems work better when not too much hand-built linguistic knowledge is injected. Google Translate, for example, works over a fixed inventory of 32,000 word fragments per language (a toy fragment-splitting example follows this list) and trains the network with backpropagation, starting from random weights and using large volumes of data to steadily improve them.

  • 00:20:00 In this section, Geoffrey Hinton describes some of the improvements made to machine translation systems, such as the addition of attention and the use of word fragments instead of whole words. He also discusses the combination of image recognition and language generation, and how this can lead to natural reasoning in machines. Despite the success of these systems, Hinton suggests that we will need neural networks with a number of parameters comparable to the human brain in order to achieve true natural language processing.

  • 00:25:00 In this section, Geoffrey Hinton discusses the potential for deep learning algorithms to exceed human performance in interpreting medical images. He notes that there is already a system for detecting skin cancers that is as good as a dermatologist, and with further training on more images, it could perform significantly better. Hinton also points out that a neural network trained on labels produced by doctors can sometimes outperform the doctors themselves, as the network can figure out what is going on when the doctors disagree. Finally, he tells a story about a student who won a competition for predicting whether a molecule will bind to something using a neural network with multiple layers of rectified linear units and far more parameters than training cases, and did so without knowing the name of the field of study.
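
The first bullet in this list contrasts evolution-style tinkering with weights against using calculus to compute the adjustments directly. The following toy sketch (not from the talk; the data, sizes, and step sizes are invented) compares the two on a single linear neuron:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up regression task: y = 2*x1 - 3*x2 + noise
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0]) + 0.1 * rng.normal(size=200)

def loss(w):
    return np.mean((X @ w - y) ** 2)

# 1) Evolution-style tinkering: perturb one weight, keep the change if the loss improves.
w_pert = np.zeros(2)
for step in range(2000):
    i = rng.integers(2)
    trial = w_pert.copy()
    trial[i] += rng.normal(scale=0.05)
    if loss(trial) < loss(w_pert):
        w_pert = trial

# 2) Gradient descent: calculus gives the adjustment for every weight at once.
w_grad = np.zeros(2)
for step in range(200):
    grad = 2 * X.T @ (X @ w_grad - y) / len(y)
    w_grad -= 0.1 * grad

print("perturbation:", w_pert, "loss", loss(w_pert))
print("gradient    :", w_grad, "loss", loss(w_grad))
```

Both reach a low loss on this tiny problem, but the random-perturbation loop needs one trial per tentative change, while gradient descent adjusts every weight at once; the gap grows enormously as networks get larger.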
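
The recurrent-network bullet above describes hidden neurons that feed back into themselves, so the network accumulates information over a sequence into a single vector (a "thought"). Below is a structural sketch with a made-up vocabulary and random, untrained weights; it only illustrates the shape of the computation, not a working translator:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {"the": 0, "cat": 1, "sat": 2, "<eos>": 3}   # toy vocabulary
d_in, d_hid = len(vocab), 8

# Random (untrained) parameters of a simple recurrent encoder.
W_in = rng.normal(scale=0.1, size=(d_hid, d_in))      # input -> hidden
W_rec = rng.normal(scale=0.1, size=(d_hid, d_hid))    # hidden -> hidden (self-connection)

def encode(tokens):
    """Run the recurrent net over a sentence; the final hidden state is the 'thought' vector."""
    h = np.zeros(d_hid)
    for tok in tokens:
        x = np.zeros(d_in)
        x[vocab[tok]] = 1.0                            # one-hot input for this word
        h = np.tanh(W_in @ x + W_rec @ h)              # hidden state accumulates context
    return h

thought = encode(["the", "cat", "sat", "<eos>"])
print("thought vector:", np.round(thought, 3))
# In a real system, a decoder network (trained jointly by backpropagation)
# would turn this vector into a sentence in the target language.
```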
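
The machine-translation bullet mentions that Google Translate operates over a fixed inventory of roughly 32,000 word fragments per language rather than whole words. The snippet below shows the basic idea with a tiny hypothetical fragment inventory and a greedy longest-match splitter; real systems learn their fragment vocabulary from data rather than using a hand-written list:

```python
# Hypothetical fragment inventory; production systems learn ~32,000 of these from data.
fragments = {"un", "break", "able", "translat", "ion", "s", "the", "a", "t"}

def split_into_fragments(word):
    """Greedily match the longest known fragment at each position; fall back to single characters."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):          # try the longest match first
            if word[i:j] in fragments:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])                  # unknown character: keep it as-is
            i += 1
    return pieces

print(split_into_fragments("unbreakable"))   # ['un', 'break', 'able']
print(split_into_fragments("translations"))  # ['translat', 'ion', 's']
```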
Geoffrey Hinton: The Foundations of Deep Learning
  • 2018.02.07
  • www.youtube.com
Godfather of artificial intelligence Geoffrey Hinton gives an overview of the foundations of deep learning. In this talk, Hinton breaks down the advances of ...
 

Heroes of Deep Learning: Andrew Ng interviews Geoffrey Hinton





Geoffrey Hinton, a leading figure in deep learning, discussed his journey and contributions to the field in an interview with Andrew Ng. He talks about the origins of word embeddings, developments in restricted Boltzmann machines, and his recent work on fast weights and capsules. Hinton notes the crucial role of unsupervised learning in deep learning advancements and advises learners to read widely, work on large-scale projects, and find advisors with similar interests. Hinton believes there is a significant change occurring in computing, where computers are taught by showing them examples rather than by programming them, and cautions that universities must catch up with industry in training researchers for this new approach.

  • 00:00:00 Geoffrey Hinton discusses how he got interested in AI and machine learning. In high school, a friend introduced him to the idea of the brain working like a hologram, which sparked his interest in how the brain stores memories. He studied physiology and physics at university, but switched to psychology when those subjects seemed inadequate for explaining how the brain works. After taking time off to work as a carpenter, he went to Edinburgh to study AI with Christopher Longuet-Higgins, who was skeptical of neural networks at the time. Hinton eventually earned a PhD in AI and moved to California, where thinking about how the brain works was seen as a positive thing. There he collaborated with David Rumelhart on the backpropagation algorithm, which they published in 1986, a paper that ultimately helped the community accept the algorithm.

  • 00:05:00 In this section, Hinton discusses the origins of word embeddings, which allowed backprop to learn representations for words, and recounts how Stuart Sutherland was impressed that, by training on triplets of words, the program could learn semantic features such as nationality and generation. Hinton notes that word embeddings combined two different views of knowledge, the psychologist's view of a concept as a bundle of features and the AI view of a concept as defined by its relations to other concepts, which helped the paper get accepted. In the early 90s, Bengio showed that a similar approach could be used to derive word embeddings from real data, which impressed many people. Later, Hinton talks about the developments around restricted Boltzmann machines (RBMs), which were a significant factor in the resurgence of deep neural networks.

  • 00:10:00 In this section, Geoffrey Hinton talks about his work on training restricted Boltzmann machines with one layer of hidden features and using those learned features as data to train another layer. He explains that this approach led to an efficient way of doing inference in sigmoid belief nets, which was a significant improvement over previous methods. He also discusses his work on variational methods and on the math behind rectified linear units (ReLUs) in neural networks. Finally, he notes that in 2014 he gave a talk at Google about using ReLUs and initializing with the identity matrix, so that each layer starts out by copying the pattern in the layer below, which led to significant improvements in training deep neural networks (a small illustration of identity initialization appears after this list).

  • 00:15:00 Geoffrey Hinton discusses his regrets about not pursuing the idea of initializing networks with the identity, which would allow for efficient training of deep neural networks. He shares his thoughts on the relationship between backpropagation and the brain, stating that if backpropagation is a good algorithm for learning, then evolution could have figured out how to implement it in the brain. He also proposes the idea of fast weights that hold a short-term memory, giving deep learning multiple time scales, an idea he first worked on in his graduate school days (a minimal sketch of fast weights follows this list).

  • 00:20:00 In this section of the interview, Geoffrey Hinton discusses his more recent work on fast weights and recursive calls, which uses fast weights to store the memory of the neurons' activity states during a recursive call. He also talks about his idea of capsules, in which a multi-dimensional entity is represented by a vector of neural activities: the neurons are grouped into small bundles (capsules), each capable of representing one instance of a feature together with its many properties, rather than the single scalar property of a traditional neuron. Capsules then route information by agreement, which allows networks to filter information better and to generalize from limited data (a stripped-down sketch of routing by agreement follows this list). Despite having papers on the idea rejected, Hinton remains optimistic and persistent in pursuing capsules.

  • 00:25:00 Geoffrey Hinton discusses how his thinking about AI and deep learning has evolved over several decades. He talks about how he was initially interested in back-propagation and discriminative learning and then shifted his focus to unsupervised learning in the early 90s. Hinton also talks about how supervised learning has worked incredibly well in the last decade, but he still believes that unsupervised learning will be crucial for further advancements. He mentions variational auto-encoders and generative adversarial networks as promising ideas for unsupervised learning. Hinton also provides advice to people who want to break into deep learning, recommending that they read as much as they can and try to work on a large-scale project to gain experience.

  • 00:30:00 In this section, the conversation revolves around advice for researchers and learners in the field of AI and deep learning. Hinton suggests that creative researchers should read a little bit of literature and look for something that everyone is doing wrong, and then figure out how to do it right. He also advises never to stop programming and to trust your intuitions. He encourages grad students to find an advisor who has similar beliefs and interests to their own to get the most useful advice. In terms of whether to join a PhD program or a top research group in a corporation, Hinton notes that there is currently a shortage of academics trained in deep learning, but that he thinks it will be temporary as departments catch up with the changing landscape of the field.

  • 00:35:00 Geoffrey Hinton explains that a significant change is occurring in how we use computers: instead of programming them, we now show them examples and they figure out what to do. This approach is genuinely different, and computer science departments must recognize and welcome it, because "showing" will come to matter as much as programming in its impact on computer science. Big companies are already training people in this new approach, and Hinton believes it will not be long before universities catch up. He goes on to describe the paradigm shift in AI, from the belief that the representations needed for intelligence are symbolic expressions in some cleaned-up logic to the current view that thoughts are just great big vectors of neural activity.
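
One bullet above mentions initializing layers of rectified linear units with the identity matrix so that each layer starts out copying the pattern in the layer below. This small numpy illustration (arbitrary sizes, made-up input) shows why that is a benign starting point for a deep stack, compared with random initialization:

```python
import numpy as np

rng = np.random.default_rng(0)
d, depth = 6, 20
x = rng.normal(size=d)
x_pos = np.maximum(x, 0)            # a non-negative input pattern

def run_stack(weights, x):
    h = x
    for W in weights:
        h = np.maximum(W @ h, 0)    # ReLU layer
    return h

identity_stack = [np.eye(d) for _ in range(depth)]
random_stack = [rng.normal(scale=0.3, size=(d, d)) for _ in range(depth)]

# With identity initialization each ReLU layer just copies its input,
# so the pattern survives 20 layers unchanged; random init scrambles or kills it.
print("input pattern:", np.round(x_pos, 3))
print("identity init:", np.round(run_stack(identity_stack, x_pos), 3))
print("random init  :", np.round(run_stack(random_stack, x_pos), 3))
```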
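
Two bullets above mention fast weights: an extra set of connection strengths that change quickly to hold a short-term memory of recent activity and then decay, on top of the slowly learned weights. The sketch below is one minimal interpretation, with invented sizes and decay constants, not Hinton's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16
slow_W = rng.normal(scale=0.1, size=(d, d))   # learned slowly, e.g. by gradient descent
fast_A = np.zeros((d, d))                     # fast weights: start empty, decay quickly
decay, eta = 0.95, 0.5

def step(h):
    """Update the fast weights Hebbian-style, then use slow + fast weights together."""
    global fast_A
    fast_A = decay * fast_A + eta * np.outer(h, h)      # store the recent activity pattern
    return np.tanh((slow_W + fast_A) @ h)

# Present a pattern a few times: the fast weights temporarily "remember" it,
# biasing the network toward recently seen activity without changing slow_W.
h = rng.normal(size=d)
for t in range(5):
    h = step(h)
print(np.round(h, 3))
```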
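
The capsules bullet describes bundles of neurons whose joint activity is a vector describing one entity, with information routed between capsule layers "by agreement". The following stripped-down sketch follows the general shape of dynamic routing, but the dimensions and prediction vectors are random placeholders rather than outputs of a trained network:

```python
import numpy as np

rng = np.random.default_rng(3)
n_lower, n_upper, dim = 6, 3, 4

# Each lower capsule i makes a prediction vector for each upper capsule j.
predictions = rng.normal(size=(n_lower, n_upper, dim))

def squash(v):
    """Shrink a vector's length into [0, 1) while keeping its direction."""
    norm2 = np.sum(v ** 2)
    return (norm2 / (1 + norm2)) * v / (np.sqrt(norm2) + 1e-9)

logits = np.zeros((n_lower, n_upper))       # routing logits, start uniform
for _ in range(3):                          # a few routing iterations
    coupling = np.exp(logits)
    coupling /= coupling.sum(axis=1, keepdims=True)           # softmax over upper capsules
    upper = np.stack([squash((coupling[:, j, None] * predictions[:, j]).sum(0))
                      for j in range(n_upper)])
    # Agreement = dot product between each prediction and the upper capsule it fed into;
    # predictions that agree get a stronger route on the next iteration.
    logits += np.einsum('ijd,jd->ij', predictions, upper)

print("routing weights:\n", np.round(coupling, 2))
print("upper capsule vectors:\n", np.round(upper, 2))
```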
 

Heroes of Deep Learning: Andrew Ng interviews Yann LeCun




In this interview between Andrew Ng and Yann LeCun, LeCun discusses his early interest in AI and the discovery of neural nets. He also describes his work on convolutional neural networks and the history behind CNNs. LeCun talks about how he persisted in the field, despite lack of interest in neural networks in the mid-90s, and eventually his work on CNNs took over the field of computer vision. He also discusses the defining moment in computer vision when the AlexNet team won the 2012 ImageNet competition, and advises those seeking a career in AI and machine learning to make themselves useful by contributing to open-source projects or implementing algorithms.

  • 00:00:00 In this section, Yann LeCun talks about his early interest in artificial intelligence and how he stumbled upon the concept of neural nets. He describes how he discovered research papers on the perceptron and automata networks, inspiring him to research how to train neural nets with multiple layers. This led him to meet people at an independent lab in France who were interested in the same thing, and ultimately to his work with Boltzmann machines.

  • 00:05:00 In this section, LeCun talks about how he met Terry Sejnowski, who was working on backpropagation at the time, and how the two had independently arrived at similar ideas before meeting. He also describes how he began working on convolutional nets at AT&T Bell Labs, first testing them on a small dataset he created by drawing characters with his mouse, which led to the USPS dataset with 5,000 training samples. He trained a convolutional net on this dataset and achieved better results than the other methods in use at the time (a minimal sketch of convolution and subsampling appears after this list).

  • 00:10:00 In this section of the interview, Yann LeCun discusses the history of convolutional neural networks (CNNs). He talks about the first version of the convolutional net they developed at Bell Labs, which did not have separate subsampling and pooling layers, and how they had to make significant improvements to the network to reduce computation time. LeCun also shares a story about giving a talk on CNNs where Geoff Hinton remarked that "if you do all the sensible things, it actually works". Despite the promising results, however, CNNs were not widely adopted outside of AT&T, due to the lack of the internet, standardized software, and suitable hardware platforms at the time.

  • 00:15:00 In this section, Yann LeCun discusses his work on character recognition and how it led him to start the DjVu project for digitally compressing and storing scanned documents so they could be shared over the internet. He also says he always believed that deep learning techniques would eventually become useful, especially with the growing power of computers, but that because of the lack of interest in the mid-90s there were about seven years when almost nobody was researching neural networks. Despite this setback, LeCun persisted, and his work on convolutional neural networks eventually took over the field of computer vision and has begun to encroach significantly on other fields.

  • 00:20:00 In this section of the video, Yann LeCun describes the defining moment for the computer vision community, when the AlexNet team won the 2012 ImageNet competition by a large margin, surprising most of the field. LeCun goes on to discuss his point of view on how corporate research should be done, explaining that he was given a lot of freedom to set up Facebook AI Research (FAIR) the way he thought was most appropriate, with an emphasis on open research and collaboration with universities. He mentions that the vast majority of his publications in the last four years were with his students at NYU.

  • 00:25:00 In this section, Yann LeCun advises those seeking a career in AI and machine learning to make themselves useful, for example by contributing to an open-source project or implementing an algorithm and making it available to others. He believes the tools and resources available now make it easy for people to get involved at some level, even high school students. By making interesting and useful contributions, individuals can get noticed and potentially land a job at a desired company or be accepted into a PhD program.
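
The convolutional-net bullets earlier in this list describe sliding the same small set of weights across an image and then subsampling the result. Below is a minimal numpy version of those two operations on a made-up image; it shows only the core idea, not LeCun's original network:

```python
import numpy as np

rng = np.random.default_rng(4)
image = rng.random((8, 8))                 # made-up 8x8 grayscale "digit"
kernel = np.array([[ 1.0,  0.0, -1.0],     # a simple vertical-edge filter
                   [ 1.0,  0.0, -1.0],
                   [ 1.0,  0.0, -1.0]])

def convolve2d(img, k):
    """Slide the same weights over every position (weight sharing)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r+kh, c:c+kw] * k)
    return out

def subsample(fmap, size=2):
    """Average pooling: summarize each size x size patch by its mean."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).mean(axis=(1, 3))

feature_map = np.maximum(convolve2d(image, kernel), 0)   # convolution + ReLU
print(subsample(feature_map).shape)                      # (3, 3): a smaller, coarser map
```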
Heroes of Deep Learning: Andrew Ng interviews Yann LeCun
  • 2018.04.07
  • www.youtube.com
As part of the course https://www.coursera.org/learn/convolutional-neural-networks
 

Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow





In an interview with Andrew Ng, Ian Goodfellow talks about his passion for deep learning and how he got interested in the field while studying at Stanford. Goodfellow discusses his invention of generative adversarial networks (GANs) and their potential in deep learning, while also emphasizing the need to make GANs more reliable.  He reflects on how his thinking about AI and deep learning has evolved over the years, from simply getting the technology to work for AI-related tasks to exploring the full potential of deep learning models. Goodfellow also shares advice for those wanting to get involved in AI, stating that writing good code and building security into machine learning algorithms from the beginning are crucial.

  • 00:00:00 In this section, Ian Goodfellow discusses how he became interested in the field of AI and deep learning, thanks to his undergraduate advisor at Stanford and Andrew Ng's internet AI class. He explains how he and a friend built one of the first GPU CUDA-based machines at Stanford, and how this led to his strong intuition that deep learning was the way to go in the future. Goodfellow goes on to talk about his invention of GANs and how he came up with the concept while studying generative models. Lastly, he reflects on a personal experience that reaffirmed his commitment to AI research.

  • 00:05:00 In this section, Ian Goodfellow discusses the potential of generative adversarial networks (GANs) and their future in deep learning. He explains that although GANs are currently being used for a variety of tasks, they can often be unreliable, and stabilizing them is a major focus of his research. Goodfellow believes that while GANs are important now, they will eventually be replaced by other forms of generative models if they are not made more reliable. He also discusses his experience co-authoring the first textbook on deep learning and emphasizes the importance of understanding the underlying math principles in mastering the field. Finally, Goodfellow reflects on how his thinking about AI and deep learning has evolved over the years, from simply getting the technology to work for AI-related tasks to exploring the full potential of deep learning models.

  • 00:10:00 In this section, Ian Goodfellow discusses the evolution of deep learning and the plethora of paths that exist in AI. He shares advice for those wanting to get involved in AI, stating that writing good code and putting it on GitHub can get attention, and working on a project alongside reading books could be helpful. He also talks about the importance of building security into machine learning algorithms from the beginning, instead of adding it in later. These measures would ensure that the algorithms are secure and would prevent security concerns arising later.
Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow
  • 2017.08.08
  • www.youtube.com
p vs np, probability, machine learning, ai, neural networks, data science, programming, statistics, math, mathematics
 

Heroes of Deep Learning: Andrew Ng interviews Andrej Karpathy





In an interview with Andrew Ng, Andrej Karpathy discusses his introduction to deep learning through a class with Geoff Hinton and how he became the human benchmark for the ImageNet image classification competition. He talks about the surprising results when software deep nets surpassed his performance and decided to teach others about it through the creation of an online course. Karpathy also discusses the future of AI and how the field will likely split into two trajectories: applied AI and AGI. He advises those who want to enter the field of deep learning to build a full understanding of the whole stack by implementing everything from scratch.

  • 00:00:00 In this section, Andrej Karpathy talks about how he first got interested in deep learning during his undergraduate studies at the University of Toronto while taking a class with Geoff Hinton. He also discusses how he became the human benchmark for the ImageNet image classification competition, building a JavaScript interface so he could compare the software's performance with human capability. He describes the challenge of classifying images into a thousand categories, and the surprising realization that a third of the ImageNet dataset was dogs, which meant spending an unusually long time training himself to distinguish dog breeds.

  • 00:05:00 In this section, Andrej Karpathy talks about being surprised when software deep nets surpassed his performance on certain tasks. He discusses how the technology was transformative and decided to teach others about it through the creation of an online course. The ability to understand the technology and the fact that it keeps changing on a daily basis is what made the students excited. Karpathy also talks about how the field of deep learning is rapidly evolving and how general the technology has become. He is surprised at how well it works not only for ImageNet but also for fine-tuning and transfer learning. He is also surprised by how unsupervised learning has still not delivered the promise that many researchers hoped it would.

  • 00:10:00 In this section, Karpathy discusses the future of AI, predicting that the field will split into two trajectories: applied AI, which uses neural networks for supervised (and possibly unsupervised) learning, and AGI, which focuses on building a single neural network that is a complete dynamical system. He feels that decomposing AI into separate parts and then assembling them is the wrong approach, and instead advocates treating a single neural network as a whole agent, with objectives whose optimization yields intelligent behavior. Asked for advice for people entering deep learning, Karpathy encourages building a full understanding of the whole stack by implementing everything from scratch, rather than only working with a framework like TensorFlow.
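
Karpathy's advice above is to implement the whole stack yourself rather than relying on a framework. As one possible starting exercise (entirely illustrative, using the XOR toy problem), here is a complete two-layer network with hand-written forward and backward passes:

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (hand-derived gradients of the squared error)
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp;            db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh;            db1 = dh.sum(0)
    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2).ravel())   # should approach [0, 1, 1, 0]
```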
Heroes of Deep Learning: Andrew Ng interviews Andrej Karpathy
  • 2017.08.08
  • www.youtube.com
p vs np, probability, machine learning, ai, neural networks, data science, programming, statistics, math, mathematics
 

Heroes of Deep Learning: Andrew Ng interviews Director of AI Research at Apple, Ruslan Salakhutdinov





Ruslan Salakhutdinov, the Director of AI Research at Apple, discusses the evolution of deep learning, the challenges in training generative models and unsupervised learning, and the exciting frontiers in deep learning research. He also encourages researchers to explore different methods and not be afraid to innovate.
Salakhutdinov emphasizes the importance of building dialogue-based systems and ones that can read text intelligently, and the ultimate goal of achieving more human-like learning abilities.

  • 00:00:00 In this section, Ruslan Salakhutdinov discusses how he got started in deep learning, beginning with a chance meeting with Geoff Hinton, who introduced him to restricted Boltzmann machines and unsupervised pre-training. With Hinton's help and guidance, Salakhutdinov co-authored one of the first papers on restricted Boltzmann machines, which helped jumpstart the resurgence of neural networks and deep learning (a compact sketch of training a single RBM appears after this list). However, as computing power improved, researchers realized there was no longer a need for pre-training with restricted Boltzmann machines, and deep models could be trained directly with traditional optimization techniques.

  • 00:05:00 In this section, Ruslan Salakhutdinov discusses the evolution of deep learning and why pre-training mattered in the early days when computers were slower. He also contrasts the difficulty of training generative models and doing unsupervised learning with the relative ease of supervised learning. While there has been progress in generative modeling, with techniques such as variational autoencoders and the energy-based models developed in his own lab, he believes more efficient and scalable ways of training unsupervised models still need to be found, and he highlights this as an important area for those interested in deep learning to explore.

  • 00:10:00 In this section, Ruslan Salakhutdinov, the Director of AI Research at Apple, discusses the challenges in making use of a large amount of unlabeled data in machine learning, as well as advice for those wanting to enter the field. He encourages researchers to try different methods and not be afraid to innovate, citing one example of how he and his team tackled the hard problem of optimizing highly non-convex systems in neural nets. Salakhutdinov also discusses the pros and cons of doing a PhD versus joining a company in the field of deep learning, emphasizing that both academia and industry offer exciting opportunities for research and development.

  • 00:15:00 In this section, Ruslan Salakhutdinov discusses the exciting frontiers in deep learning research, specifically in areas such as deep reinforcement learning, reasoning and natural language understanding, and being able to learn from fewer examples. He notes that there has been a lot of progress in training AI in virtual worlds, and the challenge now lies in scaling these systems, developing new algorithms, and getting AI agents to communicate with each other. Additionally, he highlights the importance of building dialogue-based systems and ones that can read text intelligently. Finally, he mentions the goal of achieving more human-like learning abilities.
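
The first bullet in this list mentions restricted Boltzmann machines and layer-by-layer pre-training. Below is a compact sketch of training a single binary RBM with one step of contrastive divergence (CD-1) on made-up data; it follows the standard textbook recipe rather than any specific code from Salakhutdinov's work:

```python
import numpy as np

rng = np.random.default_rng(6)
n_visible, n_hidden, lr = 12, 6, 0.1
data = (rng.random((100, n_visible)) > 0.5).astype(float)   # made-up binary data

W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):
    for v0 in data:
        # Positive phase: hidden probabilities given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase: one step of reconstruction (CD-1).
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # Update toward making the data more probable than the reconstruction.
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)

# The learned hidden probabilities can now serve as "data" for training the next layer,
# which is the greedy layer-by-layer pre-training referred to in the bullet.
features = sigmoid(data @ W + b_h)
print(features.shape)   # (100, 6)
```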
Heroes of Deep Learning: Andrew Ng interviews Director of AI Research at Apple, Ruslan Salakhutdinov
  • 2017.08.08
  • www.youtube.com
probability, machine learning, neural networks, data science, programming, statistics, math, mathematics, number theory, pi
 

Heroes of Deep Learning: Andrew Ng interviews Yoshua Bengio





Andrew Ng interviews Yoshua Bengio, and they discuss various topics related to deep learning. Bengio expresses how he got into deep learning and how his thinking about neural networks has evolved. He also discusses his contributions to developing word embeddings for sequences of words and deep learning with stacks of autoencoders. Additionally, Bengio emphasizes the importance of unsupervised learning and his interest in understanding the relationship between deep learning and the brain.
Bengio highlights the need for understanding the science of deep learning and proper research to tackle big challenges. Finally, they focus on the need for a strong foundational knowledge of mathematics for a career in deep learning and the importance of continued education.

  • 00:00:00 In this section, Yoshua Bengio discusses how he got into deep learning, starting with his love for science fiction and his graduate studies in 1985. He talks about his excitement when he discovered the world of thinking about human intelligence and how it might connect with artificial intelligence. Bengio also reflects on how his thinking about neural networks has evolved, from initial experiments to developing theories and solid justifications for why certain techniques, such as backprop and depth, work so well. Additionally, he talks about his surprise that the ReLU function works better than the traditional smooth nonlinearities he initially thought were necessary. Bengio emphasizes the importance of distributed information across the activation of many neurons and how it connects to the initial insights that got him excited about neural networks.

  • 00:05:00 In this section, Yoshua Bengio discusses his early work on using neural nets to tackle the curse of dimensionality and build efficient joint distributions over many random variables. He also mentions his work on word embeddings for sequences of words, which allow generalization across words with similar meanings. Bengio goes on to list several other important ideas from his research group, including deep learning with stacks of autoencoders and the use of attention in neural machine translation (a minimal attention step is sketched after this list). He also discusses his interest in the relationship between deep learning and the brain, and his work on learning procedures similar to backpropagation that the brain could plausibly implement.

  • 00:10:00 In this section, Yoshua Bengio talks about his inspiration from Geoff Hinton's thinking on how the brain works and the possible role of temporal codes. He argues that unsupervised learning is essential because it lets a system build mental models that explain the world without labeled data, and he describes combining unsupervised learning with reinforcement learning so that, by exploring and trying to control things, a system can disentangle the underlying concepts from one another. The difficulty with unsupervised learning research is that there are many ways to attack the problem and no good definition of an objective function for measuring whether a system is doing well. Finally, Bengio says the current state of deep learning is still far from where he would like it to be, and that he is ambitious about taking it to the next level.

  • 00:15:00 In this section, Yoshua Bengio talks about his excitement over research into the fundamental principles by which computers can observe and interact with the world to discover how it works. He believes such research will lead to a better understanding of how the world works and hopes it will help tackle big challenges such as transfer learning and related generalization problems. Bengio notes that experimentation on smaller problems allows quicker research cycles and better understanding, which can later be scaled up, and he emphasizes the importance of understanding what is going on inside deep learning and of sharing ideas about the science of the field.

  • 00:20:00 In this section, Yoshua Bengio, a renowned figure in deep learning, discussed the importance of understanding the phenomena of interest and conducting proper research, rather than solely striving to beat benchmarks or competitors. For individuals who wish to enter the field, he emphasized the need for practice, including reading, coding, and experimenting. Bengio stated that while a strong background in computer science and math is helpful, individuals without previous knowledge of machine learning can still learn and become proficient within a few months.

  • 00:25:00 In this excerpt, Andrew Ng and Yoshua Bengio discuss the importance of having a strong foundational knowledge of mathematics such as algebra, optimization, and calculus when pursuing a career in deep learning. Bengio emphasizes the need for continued education and ongoing learning to stay up to date in the field. Both express gratitude for the opportunity to share their insights and knowledge with others.
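
One of the bullets above credits Bengio's group with introducing attention in neural machine translation, where the decoder looks back over the encoder's states with learned weights at every step. The snippet below shows a minimal dot-product attention step with random vectors; the original formulation used a small learned scoring network, so this is an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(7)
seq_len, d = 5, 8
encoder_states = rng.normal(size=(seq_len, d))   # one vector per source word
decoder_state = rng.normal(size=d)               # current decoder state (the "query")

# Score each source position, normalize to weights, and take the weighted average.
scores = encoder_states @ decoder_state / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                          # softmax over source positions
context = weights @ encoder_states                # context vector fed into the decoder

print(np.round(weights, 2))   # how much attention each source word receives
print(context.shape)          # (8,)
```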
Heroes of Deep Learning: Andrew Ng interviews Yoshua Bengio
  • 2017.08.08
  • www.youtube.com
p vs np, probability, machine learning, ai, neural networks, data science, programming, statistics, math, mathematics
 

Heroes of Deep Learning: Andrew Ng interviews Pieter Abbeel




Pieter Abbeel discusses the challenges and potential of deep reinforcement learning in this interview with Andrew Ng. He notes the need for further work on exploration, credit assignment, and generating negative examples. Abbeel also highlights safety concerns and the importance of collecting safe learning data when teaching robots to live autonomously. He advises individuals to get hands-on practice with popular frameworks and points to the benefits of mentorship from experienced professionals. Additionally, he suggests using reinforcement learning to give machines objectives to achieve, and notes the importance of behavioral cloning and supervised learning before the reinforcement-learning component is added.

  • 00:00:00 In this section, Pieter Abbeel talks about how he initially became interested in engineering because of his interests in math and physics, and how that eventually led him to machine learning and deep reinforcement learning. He talks about the challenges that still exist in deep reinforcement learning, such as exploration and credit assignment, and how there is still a need for negative examples to be generated to improve these systems. He also notes that the successes of deep reinforcement learning have mainly been in short time horizons and that there is still a lot of work to be done in this field to enable systems to reason over longer time frames.

  • 00:05:00 In this section of the interview, Pieter Abbeel discusses the challenges of teaching a robot or software agent to live a life autonomously, pointing out that safety raises a set of issues, including how to collect learning data safely. He also shares his excitement about using exploration and reinforcement learning to discover learning mechanisms that could eventually replace human-designed ones. Finally, he offers advice to those pursuing a career in artificial intelligence, noting that the field offers vast job opportunities and suggesting online material such as Andrew Ng's and Berkeley's deep learning courses as a way to get started.

  • 00:10:00 In this section, Pieter Abbeel discusses how to start learning about deep learning and machine learning, emphasizing the importance of hands-on practice and experimentation with popular frameworks such as TensorFlow and PyTorch. He also discusses the pros and cons of pursuing a PhD versus getting a job at a big company, highlighting the advantages of receiving mentoring from experienced professionals. Abbeel then goes on to describe some of the successes of deep reinforcement learning, such as a robot learning to run or play classic Atari games from scratch, but notes that the next step is to figure out how to reuse this learned knowledge for future tasks. He also predicts that many businesses will rely on supervised learning with human assistance for the immediate future.

  • 00:15:00 In this section, Pieter Abbeel suggests using reinforcement learning to give machines objectives to achieve, rather than merely matching human actions. The machine is first trained with behavioral cloning (supervised learning on human demonstrations) and only then is the reinforcement-learning component added. This approach takes time but is effective at producing machines that achieve the set objectives, whereas reinforcement learning on its own can be hazardous and slow.
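
The bullet above describes first imitating human actions (behavioral cloning, i.e. supervised learning on state-action pairs) and only afterwards adding a reinforcement-learning objective. Here is a toy sketch of that two-stage recipe for a softmax policy over discrete actions, using an invented three-state, two-action environment:

```python
import numpy as np

rng = np.random.default_rng(8)
n_states, n_actions, lr = 3, 2, 0.1
theta = np.zeros((n_states, n_actions))          # policy parameters (logits per state)

def policy(s):
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

# Stage 1: behavioral cloning on expert demonstrations, i.e. (state, action) pairs.
demos = [(0, 1), (1, 0), (2, 1)] * 50
for s, a in demos:
    p = policy(s)
    grad = -p; grad[a] += 1.0                    # gradient of log p(a|s)
    theta[s] += lr * grad

# Stage 2: REINFORCE fine-tuning; reward +1 when the sampled action matches a hidden goal.
goal = {0: 1, 1: 0, 2: 0}                         # note: differs from the demos in state 2
for episode in range(2000):
    s = rng.integers(n_states)
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    reward = 1.0 if a == goal[s] else 0.0
    grad = -p; grad[a] += 1.0
    theta[s] += lr * reward * grad               # policy-gradient update

print(np.round([policy(s) for s in range(n_states)], 2))
```

After cloning, the policy mimics the demonstrations; the REINFORCE stage then shifts it toward whatever the reward actually favors (here the goal in state 2 deliberately differs from the demonstrations).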
Heroes of Deep Learning: Andrew Ng interviews Pieter Abbeel
  • 2017.08.08
  • www.youtube.com
p vs np, probability, machine learning, ai, neural networks, data science, programming, statistics, math, mathematics