The Maximum Likelihood Degree
A graduate course at KTH Stockholm
If you're interested in attending this course, please fill out this short form to get email updates: link
This is a graduate course in algebraic statistics focusing on the maximum likelihood degree (MLD). The MLD is an invariant of algebraic statistical models that measures the algebraic complexity of the maximum likelihood estimation problem on that model. Computing this number for various families of discrete or continuous models is interesting because it allows one to solve this estimation problem exhaustively, by finding all critical points of the likelihood function.
After reviewing the basics and defining the MLD for discrete and Gaussian algebraic models, the course features hands-on computing sessions for working with this invariant. The last part of the course is theoretical and aimed at understanding recent research on discrete statistical models with maximum likelihood degree one.
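To make the invariant concrete, here is a small hypothetical one-parameter model (my own toy example, not taken from the course literature) whose ML degree is two: the score equation clears denominators to a quadratic in the parameter.

```python
# Hypothetical toy model (not from the course literature): a curve in the
# probability simplex parametrized by
#   p(theta) = (theta/2, (theta + 1)/4, 3*(1 - theta)/4),  0 < theta < 1.
# Setting the derivative of the log-likelihood sum(u_i * log p_i) to zero
# and clearing denominators yields the quadratic score equation
#   -(u0 + u1 + u2)*theta^2 + (u1 - u2)*theta + u0 = 0,
# so the ML degree of this model is 2: for generic counts u there are two
# complex critical points, of which the MLE is the one lying in (0, 1).

def score_polynomial(u):
    """Coefficients (a, b, c) of the score equation a*t^2 + b*t + c = 0."""
    u0, u1, u2 = u
    return (-(u0 + u1 + u2), u1 - u2, u0)

def critical_points(u):
    """Both complex critical points of the log-likelihood."""
    a, b, c = score_polynomial(u)
    sq = complex(b * b - 4 * a * c) ** 0.5
    return [(-b + sq) / (2 * a), (-b - sq) / (2 * a)]

roots = critical_points((2, 3, 5))     # observed counts u = (2, 3, 5)
print(len(roots))                      # 2 = ML degree
mle = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < 1]
print(mle[0])                          # (sqrt(21) - 1)/10 ≈ 0.3583
```

The point of the sketch is the count, not the formula: the number of critical points for generic data, here two, is exactly what the MLD records.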
- Nov 2: Introduction – Discrete and Gaussian statistical models, maximum likelihood estimation, algebraic geometry basics. No assignments. [notes]
- Nov 9: Algebraic statistical models and their MLD.
  Literature: Sullivant, Algebraic Statistics, Section 7.1 without the examples.
  Exercise: State and prove a version of Theorem 7.1.2 for algebraic Gaussian models.
- Nov 16: Practical session – Computing the MLD. [code]
  Literature: The examples in Sullivant, Section 7.1.
  Exercise: Sullivant, Section 7.5, Exercise 7.2.
- Nov 23: Practical session – Using the MLD for likelihood estimation. [code]
  Literature: Sturmfels, Timme, and Zwiernik, Estimating linear covariance models with numerical nonlinear algebra, Section 5. [link]
  Exercise: Verify Example 3.5 in the paper using the Julia package LinearCovarianceModels.jl.
- Nov 30: Discrete MLD one – Problem setup.
  Literature: Huh, Varieties with maximum likelihood degree one, Sections 1.1, 1.2, 3.1, 3.2. [link]
  Exercise: When does a subvariety of \((\mathbb C^*)^2\) given as the restriction of a line in \(\mathbb C^2\) have ML degree one?
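The simplest instance of ML degree one is the independence model of two binary random variables: its MLE is a rational function of the data. The sketch below is my own illustration of this standard fact, not taken from the literature list.

```python
# The independence model for two binary random variables has maximum
# likelihood degree one: the MLE is the rational function of the data
#   p_hat[i][j] = (i-th row sum) * (j-th column sum) / (grand total)^2.
# This sketch checks the two defining properties of the fitted table.

def rational_mle(u):
    """Closed-form MLE for a 2x2 table of counts u."""
    total = sum(sum(row) for row in u)
    rows = [sum(row) for row in u]
    cols = [sum(col) for col in zip(*u)]
    return [[rows[i] * cols[j] / total**2 for j in range(2)] for i in range(2)]

u = [[3, 5], [2, 7]]          # observed 2x2 table of counts
p = rational_mle(u)
total = sum(sum(row) for row in u)

# Birch's theorem: the fitted margins equal the empirical margins ...
assert abs(p[0][0] + p[0][1] - (u[0][0] + u[0][1]) / total) < 1e-12
# ... and the fitted table lies on the independence surface (rank one).
assert abs(p[0][0] * p[1][1] - p[0][1] * p[1][0]) < 1e-12
print(p[0][0])  # 40/289 ≈ 0.1384
```

Having such a closed formula for the MLE is exactly what ML degree one means; Huh's paper characterizes which models admit one.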
- Dec 7: Discrete MLD one – Horn uniformizations.
  Literature: Huh, Sections 3.3, 3.4, 3.5.
  Exercise: Adapt the first part of Example 7 to the case of three binary random variables. Give the corresponding scaling vector \(\mathbf d\) and Horn matrix \(B\).
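As a warm-up for this exercise, here is a sketch of a Horn uniformization for the independence model of two binary random variables (whether this matches the conventions of Example 7 exactly is an assumption on my part; signs and scalings of Horn pairs vary between sources).

```python
from fractions import Fraction
from math import prod

# Horn pair for the 2x2 independence model (a sketch; the exercise asks
# for the three-binary-variable analogue). Coordinates are ordered
# u = (u11, u12, u21, u22), and the linear forms are
#   h1 = u11+u12,  h2 = u21+u22   (row sums),
#   h3 = u11+u21,  h4 = u12+u22   (column sums),
#   h5 = u11+u12+u21+u22          (grand total).
# The rational MLE is then the monomial map u -> d * h(u)^B.
d = [1, 1, 1, 1]          # scaling vector
B = [                     # Horn matrix: rows = linear forms, cols = model coords
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [-2, -2, -2, -2],
]

def horn_map(u):
    """Evaluate d * h(u)^B; returns the fitted probabilities exactly."""
    u11, u12, u21, u22 = u
    h = [u11 + u12, u21 + u22, u11 + u21, u12 + u22, u11 + u12 + u21 + u22]
    return [d[j] * prod(Fraction(h[k]) ** B[k][j] for k in range(5))
            for j in range(4)]

p = horn_map((3, 5, 2, 7))
print(p[0], sum(p))  # 40/289 1  (p11 = row1 * col1 / total^2; sums to 1)
```

Note that every column of \(B\) sums to zero, so each coordinate of the map is homogeneous of degree zero in \(u\), as a Horn uniformization requires.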
- Dec 14: Discrete MLD one – Discriminantal varieties.
  Literature: Huh, Sections 1.3, 3.6, 3.7.
  Exercise: Adapt the second part of Example 7 to the case of three binary random variables. Describe the corresponding \(A\)-discriminant \(\Delta_A\) and monomial map \(\mathbf d * \phi^B\).
Course code: FSF3961 (Selected Topics in Mathematical Statistics)
Teacher: Orlando Marigliano (orlandom at kth.se)
Examiner: Henrik Hult
Place: Room 3721, Lindstedtsvägen 25 [link]
Timespan: Fall term 2021-2022, Reading period 2, every Tuesday 14:15-16:00
Prerequisites: Intermediate algebra and basic probability theory
Participation and Examination
Participants are expected to have read the literature assigned to each session ahead of time. Additionally, please prepare for the practical sessions by installing Macaulay2, Julia, and the Julia package LinearCovarianceModels.jl on your computer, and by running a basic 1 + 1 = 2 computation in both Macaulay2 and Julia ahead of time.
Participants who wish to pass the course should submit written solutions to the homework exercises and present one of their solutions to the group. A solution to an exercise assigned to a session should be handed in before the following session. It can be presented either during its session or in the following session, if there is one.
If more than six participants wish to pass the course, the remaining ones can do so by giving an additional short presentation at the end of the course.