I talk through generating an image of IRS tax return characters using a model trained on the IRS tax return dataset, MNIST. The DeepMind authors trained for 70 hours on 32 GPUs; I used unconditioned image generation to create an image in 6 hours on my MacBook Pro's CPU. I used the TensorFlow implementation of Conditional Image Generation with PixelCNN Decoders (https://arxiv.org/abs/1606.05328) by a student named Anant Gupta and learned that reasonable-looking digits can be generated with significantly fewer training steps, once the training loss approaches the value reached by the DeepMind authors. Each step is detailed at https://medium.com/@SamPutnam/this-is-the-1st-deep-learning-zero-to-one-newsletter-this-one-is-called-image-generation-935bcaf0f37c
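
The core idea in PixelCNN is a masked convolution: each pixel's distribution is predicted only from the pixels above it and to its left, and a full image is then sampled one pixel at a time in raster order. Below is a minimal, illustrative sketch of that idea in Python with TensorFlow/Keras on binarized 28x28 MNIST digits. It is not Anant Gupta's implementation; the layer names, network depth, and sigmoid output are my own assumptions, shown only to make the technique concrete.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

class MaskedConv2D(layers.Layer):
    """Conv2D whose kernel is zeroed so each output pixel depends only on
    pixels above it and to its left. Mask type 'A' (first layer) also hides
    the current pixel; type 'B' (later layers) allows it."""
    def __init__(self, mask_type, filters, kernel_size, **kwargs):
        super().__init__()
        self.mask_type = mask_type
        self.conv = layers.Conv2D(filters, kernel_size, padding="same", **kwargs)

    def build(self, input_shape):
        self.conv.build(input_shape)
        kh, kw, _, _ = tuple(self.conv.kernel.shape)
        mask = np.zeros(tuple(self.conv.kernel.shape), dtype=np.float32)
        mask[: kh // 2, :, :, :] = 1.0           # all rows above the center
        mask[kh // 2, : kw // 2, :, :] = 1.0     # left of center in the center row
        if self.mask_type == "B":
            mask[kh // 2, kw // 2, :, :] = 1.0   # the center pixel itself
        self.mask = tf.constant(mask)

    def call(self, inputs):
        # Re-zero the masked weights before every convolution.
        self.conv.kernel.assign(self.conv.kernel * self.mask)
        return self.conv(inputs)

# A tiny unconditional PixelCNN over binarized 28x28 MNIST digits.
inputs = layers.Input(shape=(28, 28, 1))
x = MaskedConv2D("A", 64, 7, activation="relu")(inputs)
for _ in range(5):
    x = MaskedConv2D("B", 64, 3, activation="relu")(x)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # P(pixel = 1)
pixelcnn = tf.keras.Model(inputs, outputs)
# Train with the images as both input and target, e.g.
# pixelcnn.fit(x_binarized, x_binarized, epochs=..., batch_size=...).
pixelcnn.compile(optimizer="adam", loss="binary_crossentropy")

def sample(model, n=4):
    """Generate n images one pixel at a time in raster-scan order."""
    imgs = np.zeros((n, 28, 28, 1), dtype=np.float32)
    for r in range(28):
        for c in range(28):
            probs = model.predict(imgs, verbose=0)[:, r, c, 0]
            imgs[:, r, c, 0] = np.random.binomial(1, probs)
    return imgs
```

Note that sampling is the slow part: because generation is autoregressive, the network runs once per pixel (784 forward passes for a 28x28 digit), which is part of why even a modestly sized model takes hours on a laptop CPU.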
