Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday lives. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like? These are the questions that wil ...

LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes (35:29)
This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in a space-time continuum which characterizes our physical world. Such a set is called an “environmental event”. The machine learning algorithm uses information about the frequency of environmental events to support lea…

LM101-085: Ch7: How to Guarantee your Batch Learning Algorithm Converges (30:51)
This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. In particular, a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update …

LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems (33:13)
In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning” which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, the behavior …

LM101-083: Ch5: How to Use Calculus to Design Learning Machines (34:22)
This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework” which is now available! The book chapter shows how matrix calculus is very useful for the analysis and design of both linear and nonlinear learning machines with lots of examples. We discuss how to use the matrix chain rule …

LM101-082: Ch4: How to Analyze and Design Linear Machines (29:05)
The main focus of this particular episode covers the material in Chapter 4 of my new forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.” Many important and widely used machine learning algorithms may be interpreted as linear machines and this chapter shows how to use…

LM101-081: Ch3: How to Define Machine Learning (or at Least Try) (37:20)
This particular podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework” with expected publication date May 2020. In this episode we discuss Chapter 3 of my new book, which discusses how to formally define machine learning algorithms. Briefly, a learning machine is viewed as a dynamical system that …

LM101-080: Ch2: How to Represent Knowledge using Set Theory (31:43)
This particular podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework” with expected publication date May 2020. In this episode we discuss Chapter 2 of my new book, which discusses how to represent knowledge using set theory notation. Chapter 2 is titled “Set Theory for Concept Modeling”.…

LM101-079: Ch1: How to View Learning as Risk Minimization (26:07)
This particular podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”. In this episode we discuss Chapter 1 of my new book, which shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framew…
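The unifying view described in this chapter can be sketched in a few lines of code. This is an illustrative toy, not the book's formal definitions: it shows that a supervised regression problem and an unsupervised clustering problem both reduce to "average a loss function over the training data and pick the parameter that makes that average smallest."

```python
# Empirical risk: the average of a loss function over the training data.
def empirical_risk(loss, theta, data):
    return sum(loss(theta, example) for example in data) / len(data)

# Supervised learning: the loss compares a prediction to a target label.
def squared_error(theta, example):
    x, y = example
    return (y - theta * x) ** 2

# Unsupervised learning: the loss measures distance to a cluster center.
def distortion(theta, example):
    return (example - theta) ** 2

supervised_data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]   # roughly y = 2x
unsupervised_data = [1.0, 1.2, 0.8, 5.0]                 # mean is 2.0

# Both problems use one recipe: evaluate candidates, keep the lowest-risk one.
candidates = [i / 10 for i in range(0, 41)]
best_w = min(candidates, key=lambda t: empirical_risk(squared_error, t, supervised_data))
best_c = min(candidates, key=lambda t: empirical_risk(distortion, t, unsupervised_data))
print(best_w, best_c)  # prints: 2.0 2.0
```

The data sets and the grid of candidate parameters are hypothetical; the point is only that swapping the loss function changes the learning problem while the risk minimization recipe stays fixed.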

LM101-078: Ch0: How to Become a Machine Learning Expert (39:18)
This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, chec…

LM101-077: How to Choose the Best Model using BIC (24:15)
In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training da…

LM101-076: How to Choose the Best Model using AIC and GAIC (28:17)
In this episode, we explain the proper semantic interpretation of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for the purpose of picking the best model for a given set of training data. The precise semantic interpretation of these model selection criteria is provided, explicit assumptions are provi…
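The model-comparison idea behind these two episodes can be sketched with the standard textbook AIC and BIC formulas (the GAIC generalization discussed in the episode is not reproduced here). Given each model's maximized log-likelihood, parameter count k, and sample size n, the model with the smaller criterion value is preferred. The fitted log-likelihoods below are hypothetical numbers for illustration only.

```python
import math

def aic(log_likelihood, k):
    # Akaike Information Criterion: 2k - 2 ln L.
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian Information Criterion: k ln n - 2 ln L.
    return k * math.log(n) - 2 * log_likelihood

n = 100
model_a = {"ll": -120.0, "k": 3}  # simpler model, slightly worse fit
model_b = {"ll": -118.5, "k": 6}  # better fit, more parameters

for name, m in (("A", model_a), ("B", model_b)):
    print(name, round(aic(m["ll"], m["k"]), 1),
          round(bic(m["ll"], m["k"], n), 1))
# In this made-up example both criteria prefer model A: model B's
# improved fit does not justify its three extra parameters.
```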

LM101-075: Can computers think? A Mathematician's Response (remix) (36:26)
In this episode, we explore the question of what computers can do as well as what computers can’t do using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines. This episode is dedicated to the …

LM101-074: How to Represent Knowledge using Logical Rules (remix) (19:22)
In this episode we will learn how to use “rules” to represent knowledge. We discuss how this works in practice and we explain how these ideas are implemented in a special architecture called the production system. The challenges of representing knowledge using rules are also discussed. Specifically, these challenges include: issues of feature repre…

LM101-073: How to Build a Machine that Learns to Play Checkers (remix) (24:58)
This is a remix of the original second episode of Learning Machines 101 which describes in a little more detail how the computer program that Arthur Samuel developed in 1959 learned to play checkers by itself without human intervention using a mixture of classical artificial intelligence search methods and artificial neural network learning algorithms…

LM101-072: Welcome to the Big Artificial Intelligence Magic Show! (Remix of LM101-001 and LM101-002) (22:07)
This podcast is basically a remix of the first and second episodes of Learning Machines 101 and is intended to serve as the new introduction to the Learning Machines 101 podcast series. The search for common organizing principles which could support the foundations of machine learning and artificial intelligence is discussed and the concept of the …

LM101-071: How to Model Common Sense Knowledge using First-Order Logic and Markov Logic Nets (31:40)
In this podcast, we provide some insights into the complexity of common sense. First, we discuss the importance of building common sense into learning machines. Second, we discuss how first-order logic can be used to represent common sense knowledge. Third, we describe a large database of common sense knowledge where the knowledge is represented us…

LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding (32:04)
In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding. We discuss the concept of recognizing facial emotions in images including applications to problems such as: improving online communication quality, identifying su…

LM101-069: What Happened at the 2017 Neural Information Processing Systems Conference? (23:20)
This 69th episode of Learning Machines 101 provides a short overview of the 2017 Neural Information Processing Systems conference with a focus on the development of methods for teaching learning machines rather than simply training them on examples. In addition, a book review of the book “Deep Learning” is provided. #nips2017…

LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms (21:49)
This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by adding a perturbation based upon all of the training data. This process is repeated, making a perturbation of the parameter vector based upon all of the trai…

LM101-067: How to use Expectation Maximization to Learn Constraint Satisfaction Solutions (Rerun) (25:40)
In this episode we discuss how to learn to solve constraint satisfaction inference problems. The goal of the inference process is to infer the most probable values for unobservable variables. These constraints, however, can be learned from experience. Specifically, the important machine learning method for handling unobservable components of the da…

LM101-066: How to Solve Constraint Satisfaction Problems using MCMC Methods (Rerun) (34:00)
In this episode of Learning Machines 101 (www.learningmachines101.com) we discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of t…
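The MCMC approach to constraint satisfaction described in this episode can be sketched with a tiny Metropolis-style sampler. This is an illustrative toy with made-up constraints, not the episode's specific algorithm: four binary variables are linked by soft constraints, the energy counts violated constraints, and low-energy states are the most probable, so the sampler tends to settle into assignments that satisfy the constraints.

```python
import math
import random

def energy(x):
    # Soft constraints: x0 == x1, x1 != x2, x2 == x3 (each violation costs 1).
    return int(x[0] != x[1]) + int(x[1] == x[2]) + int(x[2] != x[3])

def metropolis(steps=5000, temperature=0.5, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(4)]
    best = x[:]
    for _ in range(steps):
        i = rng.randrange(4)          # propose flipping one variable
        proposal = x[:]
        proposal[i] = 1 - proposal[i]
        delta = energy(proposal) - energy(x)
        # Always accept downhill moves; accept uphill moves probabilistically.
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            x = proposal
        if energy(x) < energy(best):  # remember the best state visited
            best = x[:]
    return best

best = metropolis()
print(best, energy(best))  # a low-energy assignment, e.g. [0, 0, 1, 1]
```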

LM101-065: How to Design Gradient Descent Learning Machines (Rerun) (30:00)
In this episode rerun we introduce the concept of gradient descent which is the fundamental principle underlying learning in the majority of deep learning and neural network learning algorithms. Check out the website: www.learningmachines101.com to obtain a transcript of this episode!
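The gradient descent principle this episode introduces can be sketched in a few lines. This is a minimal illustrative example, not the episode's derivation: repeatedly step the parameter in the direction opposite the gradient of the objective. Here the objective is the made-up function f(w) = (w - 3)^2, whose gradient is 2(w - 3), so descent should approach w = 3.

```python
def gradient(w):
    # Gradient of the objective f(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

w = 0.0               # initial guess
learning_rate = 0.1   # step size
for step in range(100):
    w -= learning_rate * gradient(w)  # step opposite the gradient

print(round(w, 4))  # prints: 3.0
```

With this step size the error shrinks by a factor of 0.8 per iteration, so 100 iterations leave w numerically indistinguishable from the minimizer.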

LM101-064: Stochastic Model Search and Selection with Genetic Algorithms (Rerun) (28:04)
In this rerun of episode 24 we explore the concept of evolutionary learning machines. That is, learning machines that reproduce themselves in the hopes of evolving into more intelligent and smarter learning machines. This leads us to the topic of stochastic model search and evaluation. Check out the blog with additional technical references at: www…

LM101-063: How to Transform a Supervised Learning Machine into a Policy Gradient Reinforcement Learning Machine (22:04)
This 63rd episode of Learning Machines 101 discusses how to build reinforcement learning machines which become smarter with experience but do not use this acquired knowledge to modify their actions and behaviors. This episode explains how to build reinforcement learning machines whose behavior evolves as the learning machines become increasingly sm…

LM101-062: How to Transform a Supervised Learning Machine into a Value Function Reinforcement Learning Machine (31:05)
This 62nd episode of Learning Machines 101 (www.learningmachines101.com) discusses how to design reinforcement learning machines using your knowledge of how to build supervised learning machines! Specifically, we focus on Value Function Reinforcement Learning Machines which estimate the unobservable total penalty associated with an episode when onl…

LM101-061: What happened at the Reinforcement Learning Tutorial? (RERUN) (29:15)
This is the third of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Introduction to Reinforcement Le…

LM101-060: How to Monitor Machine Learning Algorithms using Anomaly Detection Machine Learning Algorithms (29:32)
This 60th episode of Learning Machines 101 discusses how one can use novelty detection or anomaly detection machine learning algorithms to monitor the performance of other machine learning algorithms deployed in real world environments. The episode is based upon a review of a talk by Chief Data Scientist Ira Cohen of Anodot presented at the 2016 Be…

LM101-059: How to Properly Introduce a Neural Network (29:56)
I discuss the concept of a “neural network” by providing some examples of recent successes in neural network machine learning algorithms and providing a historical perspective on the evolution of the neural network concept from its biological origins. For more details visit us at: www.learningmachines101.com…

LM101-058: How to Identify Hallucinating Learning Machines using Specification Analysis (19:38)
In this 58th episode of Learning Machines 101, I’ll be discussing an important new scientific breakthrough published just last week for the first time in the journal Econometrics in the special issue on model misspecification titled “Generalized Information Matrix Tests for Detecting Model Misspecification”. The article provides a unified theoretic…

LM101-057: How to Catch Spammers using Spectral Clustering (19:54)
In this 57th episode, we explain how to use unsupervised machine learning algorithms to catch internet criminals who try to steal your money electronically! Check it out at: www.learningmachines101.com

LM101-056: How to Build Generative Latent Probabilistic Topic Models for Search Engine and Recommender System Applications (27:59)
In this NEW episode we discuss Latent Semantic Indexing type machine learning algorithms which have a PROBABILISTIC interpretation. We explain why such a probabilistic interpretation is important and discuss how such algorithms can be used in the design of document retrieval systems, search engines, and recommender systems. Check us out at: www.lea…

LM101-055: How to Learn Statistical Regularities using MAP and Maximum Likelihood Estimation (Rerun) (35:06)
In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. In particular, the episode introduces fundamental machine learning concepts such as: probability models, model…

LM101-054: How to Build Search Engine and Recommender Systems using Latent Semantic Analysis (RERUN) (29:35)
Welcome to the 54th Episode of Learning Machines 101 titled "How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis" (rerun of Episode 40). The principles in this episode are also applicable to the problem of "Market Basket Analysis" and the design of Recommender Systems. Check it out at: www.…

LM101-053: How to Enhance Learning Machines with Swarm Intelligence (Particle Swarm Optimization) (26:50)
In this 53rd episode of Learning Machines 101, we introduce the concept of a Swarm Intelligence with respect to Particle Swarm Optimization Algorithms. The essential idea of “Swarm Intelligence” is that you have a group of individual entities which behave in a coordinated manner yet there is no master control center providing directions to all of t…

LM101-052: How to Use the Kernel Trick to Make Hidden Units Disappear (28:57)
Today, we discuss a simple yet powerful idea which became popular in the machine learning literature in the 1990s which is called “The Kernel Trick”. The basic idea of the “Kernel Trick” is that you specify similarity relationships among input patterns rather than a recoding transformation to solve a nonlinear problem with a linear learning machine.…
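The "specify similarity rather than recode" idea can be illustrated with the polynomial kernel, a standard textbook example (not code from the episode): the kernel value k(x, z) = (x · z)^2 equals a dot product in an explicit quadratic feature space, but the kernel never constructs that feature space.

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def poly_kernel(x, z):
    # Similarity between input patterns: the squared dot product.
    return dot(x, z) ** 2

def quadratic_features(x):
    # The recoding transformation the kernel makes unnecessary
    # (for 2-d input): [x1^2, x2^2, sqrt(2) * x1 * x2].
    return [x[0] ** 2, x[1] ** 2, (2 ** 0.5) * x[0] * x[1]]

x, z = [1.0, 2.0], [3.0, 0.5]
print(round(poly_kernel(x, z), 6))                                    # 16.0
print(round(dot(quadratic_features(x), quadratic_features(z)), 6))    # 16.0
```

Both prints agree: a linear machine operating on the recoded features and a kernel machine operating on the raw inputs compute the same inner products, which is why the hidden recoding layer can "disappear".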

LM101-051: How to Use Radial Basis Function Perceptron Software for Supervised Learning [Rerun] (29:04)
This particular podcast is a RERUN of Episode 20 and describes step by step how to download free software which can be used to make predictions using a feedforward artificial neural network whose hidden units are radial basis functions. This is essentially a nonlinear regression modeling problem. We show the performance of this nonlinear learning m…

LM101-050: How to Use Linear Machine Learning Software to Make Predictions (Linear Regression Software) [RERUN] (30:32)
In this episode we will explain how to download and use free machine learning software from the website: www.learningmachines101.com. This podcast is concerned with the very practical issues associated with downloading and installing machine learning software on your computer. If you follow these instructions, by the end of this episode you will have in…

LM101-049: How to Experiment with Lunar Lander Software (34:40)
In this episode we continue the discussion of learning when the actions of the learning machine can alter the characteristics of the learning machine’s statistical environment. We describe how to download free lunar lander software so you can experiment with an autopilot for a lunar lander module that learns from its experiences and describe the re…

LM101-048: How to Build a Lunar Lander Autopilot Learning Machine (Rerun) (31:27)
In this episode we consider the problem of learning when the actions of the learning machine can alter the characteristics of the learning machine’s statistical environment. We illustrate the solution to this problem by designing an autopilot for a lunar lander module that learns from its experiences. For more information, check out: www.learningma…

LM101-047: How to Build a Support Vector Machine to Classify Patterns (Rerun) (35:29)
We explain how to estimate the parameters of such machines to classify a pattern vector as a member of one of two categories as well as identify special pattern vectors called “support vectors” which are important for characterizing the Support Vector Machine decision boundary. The relationship of Support Vector Machine parameter estimation and log…

LM101-046: How to Optimize Student Learning using Recurrent Neural Networks (Educational Technology) (23:19)
In this episode, we briefly review Item Response Theory and Bayesian Network Theory methods for the assessment and optimization of student learning and then describe a poster presented on the first day of the Neural Information Processing Systems conference in December 2015 in Montreal which describes a Recurrent Neural Network approach for the ass…

LM101-045: How to Build a Deep Learning Machine for Answering Questions about Images (21:51)
In this episode we discuss just one of the 102 different posters which were presented on the first night of the 2015 Neural Information Processing Systems Conference. This presentation describes a system which can answer simple questions about images. Check out: www.learningmachines101.com for additional details!!…

LM101-044: What happened at the Deep Reinforcement Learning Tutorial at the 2015 Neural Information Processing Systems Conference? (31:38)
This is the third of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Introduction to Reinforcement Le…

LM101-043: How to Learn a Monte Carlo Markov Chain to Solve Constraint Satisfaction Problems (Rerun of Episode 22) (27:38)
Welcome to the 43rd Episode of Learning Machines 101! We are currently presenting a subsequence of episodes covering the events of the recent Neural Information Processing Systems Conference. However, this week we will digress with a rerun of Episode 22 which nicely complements our previous discussion of the Monte Carlo Markov Chain Algorithm Tutorial. …

LM101-042: What happened at the Monte Carlo Markov Chain (MCMC) Inference Methods Tutorial at the 2015 Neural Information Processing Systems Conference? (25:46)
This is the second of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode reviews and discusses topics associated with the Monte Carlo Markov Chain (MCMC)…

LM101-041: What happened at the 2015 Neural Information Processing Systems Deep Learning Tutorial? (29:38)
This is the first of a short subsequence of podcasts which provides a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the field of Machine Learning. This episode introduces the Neural Information Processing Systems Conference and reviews …

LM101-040: How to Build a Search Engine, Automatically Grade Essays, and Identify Synonyms using Latent Semantic Analysis (28:15)
In this episode we introduce a very powerful approach for computing semantic similarity between documents. Here, the terminology “document” could refer to a web-page, a word document, a paragraph of text, an essay, a sentence, or even just a single word. Two semantically similar documents, therefore, will discuss many of the same topics while two s…
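The document-similarity idea behind this episode can be sketched with cosine similarity on term-count vectors. This is a simplified illustration with made-up toy documents: full Latent Semantic Analysis would first apply a singular value decomposition to the term-document matrix to capture synonyms, a step omitted here.

```python
def term_counts(doc, vocabulary):
    # Represent a document as a vector of term counts.
    words = doc.lower().split()
    return [words.count(term) for term in vocabulary]

def cosine(a, b):
    # Cosine of the angle between two document vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

vocabulary = ["engine", "search", "essay", "grade", "synonym"]
doc1 = "search engine search index"
doc2 = "engine search ranking"
doc3 = "grade essay essay feedback"

v1, v2, v3 = (term_counts(d, vocabulary) for d in (doc1, doc2, doc3))
print(round(cosine(v1, v2), 3), round(cosine(v1, v3), 3))
# The two search-engine documents score high; the essay-grading
# document shares no vocabulary with them and scores 0.
```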

LM101-039: How to Solve Large Complex Constraint Satisfaction Problems (Monte Carlo Markov Chain and Markov Fields) [Rerun] (35:17)
In this episode we discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variable…

LM101-038: How to Model Knowledge Skill Growth Over Time using Bayesian Nets (23:55)
In this episode, we examine the problem of developing an advanced artificially intelligent technology which is capable of tracking knowledge growth in students in real-time, representing the knowledge state of a student as a skill profile, and automatically defining the concept of a skill without human intervention! The approach can be viewed as a sop…

LM101-037: How to Build a Smart Computerized Adaptive Testing Machine using Item Response Theory (34:56)
In this episode, we discuss the problem of how to build a smart computerized adaptive testing machine using Item Response Theory (IRT). Suppose that you are teaching a student a particular target set of knowledge. Examples of such situations obviously occur in nursery school, elementary school, junior high school, high school, and college. However,…