Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
- Intro to Bayes Course (first 2 lessons free)
- Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
- INLA is a fast, deterministic method for Bayesian inference.
- INLA is particularly useful for large datasets and complex models.
- The R-INLA package is widely used for implementing the INLA methodology.
- INLA has been applied in various fields, including epidemiology and air quality control.
- Computational challenges in INLA are minimal compared to MCMC methods.
- The Smart Gradient method enhances the efficiency of INLA.
- INLA can handle various likelihoods, not just Gaussian.
- SPDEs (stochastic partial differential equations) allow for more efficient computations in spatial modeling.
- The new INLA methodology scales better for large datasets, especially in medical imaging.
- Priors in Bayesian models can significantly impact the results and should be chosen carefully.
- Penalized complexity priors (PC priors) help prevent overfitting in models.
- Understanding the underlying mathematics of priors is crucial for effective modeling.
- The integration of GPUs in computational methods is a key future direction for INLA.
- The development of new sparse solvers is essential for handling larger models efficiently.
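To make the takeaways above concrete, here is a minimal, hedged R-INLA sketch of a latent Gaussian model fit. It is not code from the episode: the data frame `df` and the variables `y`, `x`, and `idx` are illustrative assumptions, but `inla()` and `f()` are the package's actual entry points.

```r
# Hedged sketch: a minimal latent Gaussian model in R-INLA.
# `df` is an assumed data frame with response `y`, covariate `x`,
# and a grouping index `idx` -- all placeholders for your own data.
library(INLA)

# Fixed effect for x plus an iid latent Gaussian random effect on idx
formula <- y ~ x + f(idx, model = "iid")

# Deterministic approximate Bayesian inference -- no MCMC sampling
result <- inla(formula, family = "gaussian", data = df)

# Posterior summaries for fixed effects and hyperparameters
summary(result)
```

Because the inference is deterministic, rerunning the fit gives the same posterior approximations, which is part of why INLA scales well to the large datasets discussed in the episode.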
Chapters:
06:06 Understanding INLA: A Comparison with MCMC
08:46 Applications of INLA in Real-World Scenarios
11:58 Latent Gaussian Models and Their Importance
15:12 Impactful Applications of INLA in Health and Environment
18:09 Computational Challenges and Solutions in INLA
21:06 Stochastic Partial Differential Equations in Spatial Modeling
23:55 Future Directions and Innovations in INLA
39:51 Exploring Stochastic Differential Equations
43:02 Advancements in INLA Methodology
50:40 Getting Started with INLA
56:25 Understanding Priors in Bayesian Models
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Adam Tilmar Jakobsen.
Links from the show:
- R-INLA webpage: https://www.r-inla.org/
- R-INLA discussion group: https://groups.google.com/g/r-inla-discussion-group
- Haavard’s page: https://cemse.kaust.edu.sa/profiles/haavard-rue
- Haavard on Google Scholar: https://scholar.google.co.uk/citations?user=VJOn_ZkAAAAJ&hl=en
- Janet’s page: https://cemse.kaust.edu.sa/profiles/janet-van-niekerk
- Janet on LinkedIn: https://www.linkedin.com/in/janet-van-niekerk-b1803b8a/
- Janet on Google Scholar: https://scholar.google.com/citations?user=rZOmGkAAAAAJ&hl=en
- Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations (original and classic formulation): https://users.wpi.edu/~balnan/INLAjrssB2009.pdf
- A new avenue for Bayesian inference with INLA (modern formulation of INLA and the current default in the R-INLA package): https://www.sciencedirect.com/science/article/pii/S0167947323000038
- Smart Gradient - An adaptive technique for improving gradient estimation: https://www.aimsciences.org/article/doi/10.3934/fods.2021037
SPDE-INLA book and other resources:
- https://rss.onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2011.00777.x
- https://becarioprecario.bitbucket.io/spde-gitbook/
- Inlabru: https://inlabru-org.github.io/inlabru/
Penalizing complexity priors:
- Penalizing Model Component Complexity: A Principled, Practical Approach to Constructing Priors: https://doi.org/10.1214/16-STS576
- AR processes: https://doi.org/10.1111/jtsa.12242
- Gaussian fields: https://doi.org/10.1080/01621459.2017.1415907
- Skew-normal model: https://doi.org/10.57805/revstat.v19i1.328
- Weibull model: https://doi.org/10.1016/j.spl.2021.109098
- Splines: https://arxiv.org/abs/1511.05748
- Many more available in the R-INLA library: inla.pc
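As a hedged sketch of how the PC priors above are used in practice, here is the standard R-INLA syntax for placing a `pc.prec` prior on the precision of a random effect. The variables `df`, `y`, and `idx` are illustrative assumptions; the `param = c(1, 0.01)` choice encodes the prior statement P(standard deviation > 1) = 0.01.

```r
# Hedged sketch: a penalized complexity (PC) prior on a precision in R-INLA.
# `df`, `y`, and `idx` are placeholders for your own data.
library(INLA)

# PC prior on the precision of the iid effect:
# param = c(u, alpha) encodes P(sd > u) = alpha, here P(sd > 1) = 0.01
pc.prior <- list(prec = list(prior = "pc.prec", param = c(1, 0.01)))

formula <- y ~ f(idx, model = "iid", hyper = pc.prior)
result <- inla(formula, family = "gaussian", data = df)
```

The base-model shrinkage built into PC priors is what gives the overfitting protection mentioned in the takeaways: the prior pulls the random effect toward zero variance unless the data support more complexity.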
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.