Despite the promise of big data, inferences are often limited not by the size of the data but rather by its systematic structure. Only by carefully modeling this structure can we take full advantage of the data -- big data must be complemented with big models and the algorithms that can fit them. Stan is a platform that facilitates this modeling, providing an expressive modeling language for specifying bespoke models and implementing state-of-the-art algorithms to draw the subsequent Bayesian inferences.
These modules present a modern perspective on Bayesian modeling, beginning with a principled Bayesian workflow and then progressing to in-depth reviews of popular modeling techniques. The course will emphasize interactive exercises run through RStan, the R interface to Stan, and PyStan, the Python interface to Stan.
Each module consists of a live lecture followed by exercises for attendees to complete individually before a final live review session. The lecture and review sessions will take place on Whereby, which does not require any registration from attendees.
Both sessions will be recorded, and the recordings will be made available to course attendees soon after each session. By participating you consent to having any video and audio you transmit recorded and shared with the rest of the course attendees. Not sharing your video and audio will not prevent you from viewing the sessions or asking questions through the interactive chat.
Between the lecture and the review session, attendees will be able to discuss the lecture and the exercises with each other on Discord. You should be able to access the course-specific Discord servers without any registration.
The morning and afternoon sessions repeat the same material to increase the opportunity for people to join. Please don't sign up for both!
Module 1: Principled Bayesian Workflow
In this module we review a principled Bayesian workflow that guides the development of statistical models suited to the particular details of a given application. The workflow integrates the development of prior models, computational calibration, inferential calibration, and model critique and model updating.
Lecture: Mon Jul 6 10:00 EDT - 12:00 EDT
Review Session: Thursday July 9 10:00 EDT - 11:30 EDT
Lecture: Mon Jul 6 15:00 EDT - 17:00 EDT
Review Session: Thursday July 9 15:00 EDT - 16:30 EDT
Module 2: Regression Models
This module presents linear and generalized linear regression techniques from a modeling perspective, using that context to motivate robust implementations. We will especially emphasize principled prior modeling strategies for linear, log, and logistic regression models.
Lecture: Mon Jul 13 10:00 EDT - 12:00 EDT
Review Session: Thursday July 16 10:00 EDT - 11:30 EDT
Lecture: Mon Jul 13 15:00 EDT - 17:00 EDT
Review Session: Thursday July 16 15:00 EDT - 16:30 EDT
Module 3: Hierarchical Models
Module 3 introduces exchangeability and hierarchical models with a strong focus on the inherent identifiability issues and their computational consequences, as well as strategies for moderating these issues.
Completion of Module 2 is highly recommended.
Lecture: Mon Jul 20 10:00 EDT - 12:00 EDT
Review Session: Thursday July 23 10:00 EDT - 11:30 EDT
Lecture: Mon Jul 20 15:00 EDT - 17:00 EDT
Review Session: Thursday July 23 15:00 EDT - 16:30 EDT
Module 4: Multilevel Models
This module introduces conditional exchangeability, marginal exchangeability, and multilevel modeling with a focus on efficient implementations.
Completion of Module 2 and Module 3 is highly recommended.
Lecture: Mon Jul 27 10:00 EDT - 12:00 EDT
Review Session: Thursday July 30 10:00 EDT - 11:30 EDT
Lecture: Mon Jul 27 15:00 EDT - 17:00 EDT
Review Session: Thursday July 30 15:00 EDT - 16:30 EDT
Module 5: Gaussian Process Models
The final module introduces Gaussian processes as a statistical modeling technique, motivating principled prior models that avoid pathological behavior.
Lecture: Mon Aug 3 10:00 EDT - 12:00 EDT
Review Session: Thursday Aug 6 10:00 EDT - 11:30 EDT
Lecture: Mon Aug 3 15:00 EDT - 17:00 EDT
Review Session: Thursday Aug 6 15:00 EDT - 16:30 EDT
The course is aimed at current Stan users and will assume familiarity with the basics of calculus, linear algebra, probability theory, probabilistic modeling and statistical inference, and Stan.
Attendees are strongly encouraged to review:
Probability Theory: https://betanalpha.github.io/assets/case_studies/probability_theory.html
Conditional Probability Theory: https://betanalpha.github.io/assets/case_studies/conditional_probability_theory.html
Probability Theory on Product Spaces: https://betanalpha.github.io/assets/case_studies/probability_on_product_spaces.html
Common Families of Probability Density Functions: https://betanalpha.github.io/assets/case_studies/probability_densities.html
Probabilistic Modeling and Inference: https://betanalpha.github.io/assets/case_studies/modeling_and_inference.html
Probabilistic Computation: https://betanalpha.github.io/assets/case_studies/probabilistic_computation.html
Markov chain Monte Carlo: https://betanalpha.github.io/assets/case_studies/markov_chain_monte_carlo.html
Introduction to Stan: https://betanalpha.github.io/assets/case_studies/stan_intro.html
To participate in the exercises, attendees must have a computer with RStan 2.19 and R 4.0 (https://cran.r-project.org/web/packages/rstan/index.html) or PyStan 2.19 and Python 3.7 (http://pystan.readthedocs.io/en/latest/) installed. Please verify that you can run the 8 schools model as discussed in https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started or https://pystan.readthedocs.io/en/latest/getting_started.html and report any installation issues at http://discourse.mc-stan.org as early as possible.
Cancellations will be considered only in the event of emergencies. Those not able to attend the modules due to unexpected scheduling conflicts will be able to follow along with the recordings and Discord discussion groups.
“We had a brilliant 3-day course at trivago with Michael Betancourt! The first day was filled with a very strong theoretical foundation for statistical modelling/decision making, followed by a crash course on MCMC and finished off with practical examples on how to diagnose healthy model fitting. In the 2nd and 3rd days we learned about many different types of hierarchical/multi-level models and spent most of the time practicing how to actually create and fit these models in Stan.
Michael is both a very engaging teacher, a very knowledgeable statistical modeller and, of course, a Stan master. This course has opened up new ways for us at trivago to gain better insights from our data through Stan models that fit our needs.”
-Data Scientists in the Automated Bidding Team, trivago
“The 1-day training course provided a great introduction to Bayesian models and their implementation in the Stan language. The practical focus really helped jumpstart our transition to Bayesian methods, and the slides, recorded lecture, and exercises also provide a great resource for new group members.”
-Stanley Lazic, Associate Director in Statistics and Machine Learning, AstraZeneca
“Stan is the cream of the crop platform for doing Bayesian analysis and is particularly appealing because of its open source nature. The programming language and algorithms are well designed and thought out. With that said, Stan has a very steep learning curve requiring lots of hours to get up to speed on your own. I have been to two training courses taught by Dr. Michael Betancourt and took an opportunity to have some consulting time. These sessions have proven invaluable to improve my use of Stan, increased my learning and usage rate, and informed me how to diagnose and detect issues that will inevitably arise.”
-Robert Johnson, Corporate R&D, Procter & Gamble
“The workshop at MIT led by Michael Betancourt was a fun and very useful introduction to Stan. Mike worked with us to customize the lectures to our interests, he presented the material in an engaging and accessible way, and the physicists who attended, many of whom had never used Stan before, left with the resources to begin developing our own analyses. Mike’s background in physics makes him an especially effective teacher for scientists. The coding exercises were thoughtfully developed to progress in complexity and were well-integrated into the course; having such useful exercises was critical for participants to successfully internalize the concepts presented in the lectures.”
-Elizabeth Worcester, Associate Physicist, Department of Physics, Brookhaven National Laboratory