
LM101-068: How to Design Automatic Learning Rate Selection for Gradient Descent Type Machine Learning Algorithms

Duration: 21:49
 
Content provided by Richard M. Golden, M.S.E.E., B.S.E.E. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Richard M. Golden or his podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://staging.podcastplayer.com/legal.

This 68th episode of Learning Machines 101 discusses a broad class of unsupervised, supervised, and reinforcement machine learning algorithms that iteratively update their parameter vector by adding a perturbation computed from all of the training data. The process is repeated until a parameter vector with improved predictive performance is generated. The magnitude of the perturbation at each learning iteration is called the "stepsize" or "learning rate", and the direction of the perturbation vector is called the "search direction". Simple mathematical formulas, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, are presented that ensure convergence of the generated sequence of parameter vectors. These formulas can serve as the basis for designing smart automatic learning rate selection algorithms. For more information, please visit the official website: www.learningmachines101.com
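
The episode presents these ideas verbally; as a concrete illustration, here is a minimal Python sketch of automatic learning rate selection via backtracking line search, which enforces the sufficient decrease condition studied by Wolfe. The function and parameter names are illustrative assumptions, not taken from the episode.

```python
import numpy as np

def gradient_descent_auto_lr(f, grad, x0, c1=1e-4, alpha0=1.0,
                             shrink=0.5, max_iters=1000, tol=1e-8):
    """Gradient descent with automatic stepsize (learning rate) selection.

    The search direction is the negative gradient; the stepsize at each
    iteration is chosen by backtracking until Wolfe's sufficient decrease
    condition holds:

        f(x + a*p) <= f(x) + c1 * a * grad(x) . p

    Under standard smoothness assumptions, stepsizes chosen this way yield
    a sequence of parameter vectors whose gradients converge to zero (the
    Zoutendijk-style convergence argument referenced in the episode).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break                        # approximately stationary: stop
        p = -g                           # steepest-descent search direction
        fx, slope = f(x), g @ p          # slope < 0 along a descent direction
        a = alpha0
        # Shrink the stepsize until the sufficient decrease test passes.
        while a > 1e-12 and f(x + a * p) > fx + c1 * a * slope:
            a *= shrink
        x = x + a * p                    # apply the accepted perturbation
    return x

# Example: minimize f(x) = ||x||^2, whose unique minimizer is the origin.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
print(gradient_descent_auto_lr(f, grad, x0=np.array([3.0, -4.0])))
```

Note that the full Wolfe conditions add a second "curvature" test that keeps the stepsize from becoming too small; the backtracking loop above approximates that safeguard by restarting from alpha0 at every iteration.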
