Statistical Learning Seminars

The effects of COVID-19 pervade research communities across the globe, causing canceled conferences, postponed research visits, and suspended projects. Like many others, we have sought new opportunities for collaboration in spite of the current state of affairs and have therefore organized this online seminar series in statistical learning.

Format

We use Zoom for all sessions. Upon joining the seminar, you will be placed in a waiting room; please wait for the host to let you into the meeting.

The seminars are approximately an hour long, with 20 to 40 minutes allocated to the presentation and the remainder reserved for discussion. Sessions are held regularly on Fridays at 15:30 CET. See Previous Talks for recordings, slides, and resources from earlier seminars.

https://lu-se.zoom.us/j/65067339175

Mailing List

To receive announcements for upcoming seminars, please join the group at https://groups.google.com/g/statlearnsem.

Calendar Event

Link to calendar event

Upcoming Talks

July 9, 15:30 CET

Zhiqi Bu (University of Pennsylvania)

Title
Characterizing the SLOPE Trade-off: A Variational Perspective and the Donoho–Tanner Limit
Abstract
Sorted ℓ1 regularization has been incorporated into many methods for solving high-dimensional statistical estimation problems, including the SLOPE estimator in linear regression. In this paper, we study how this relatively new regularization technique improves variable selection by characterizing the optimal SLOPE trade-off between the false discovery proportion (FDP) and true positive proportion (TPP) or, equivalently, between measures of type I error and power. Assuming a regime of linear sparsity and working under Gaussian random designs, we obtain an upper bound on the optimal trade-off for SLOPE, showing its capability of breaking the Donoho–Tanner power limit. To put this into perspective, the limit is the highest possible power that the Lasso, which is perhaps the most popular ℓ1-based method, can achieve even with arbitrarily strong effect sizes. Next, we derive a tight lower bound that delineates the fundamental limit of sorted ℓ1 regularization in optimally trading the FDP off for the TPP. Finally, we show that on any problem instance, SLOPE with a certain regularization sequence outperforms the Lasso, in the sense of having a smaller FDP, larger TPP, and smaller ℓ2 estimation risk simultaneously. Our proofs are based on a novel technique that reduces a variational calculus problem to a class of infinite-dimensional convex optimization problems and a very recent result from approximate message passing theory.
Paper
Characterizing the SLOPE Trade-off: A Variational Perspective and the Donoho–Tanner Limit
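To make the abstract's central object concrete: the sorted ℓ1 (SLOPE) penalty pairs the sorted magnitudes of the coefficients with a non-increasing sequence of regularization weights, so the largest coefficients receive the largest penalties; when all weights are equal, it reduces to the Lasso's ℓ1 penalty. The sketch below is an illustration of that penalty only (the function name and example values are our own, not from the paper), not of the paper's trade-off analysis.

```python
import numpy as np

def sorted_l1_penalty(beta, lam):
    """Sorted l1 (SLOPE) penalty: sum_i lam_(i) * |beta|_(i),
    pairing coefficient magnitudes and weights in decreasing order.
    With all lam_i equal, this is lam * ||beta||_1, i.e. the Lasso penalty.
    """
    abs_sorted = np.sort(np.abs(beta))[::-1]   # |beta| in decreasing order
    lam_sorted = np.sort(lam)[::-1]            # weights in decreasing order
    return float(np.dot(lam_sorted, abs_sorted))

beta = np.array([0.5, -2.0, 1.0])
lam_slope = np.array([1.0, 0.5, 0.25])  # decreasing weights: largest |beta| penalized most
lam_lasso = np.full(3, 0.5)             # equal weights recover the Lasso penalty

print(sorted_l1_penalty(beta, lam_slope))  # 1.0*2.0 + 0.5*1.0 + 0.25*0.5 = 2.625
print(sorted_l1_penalty(beta, lam_lasso))  # 0.5*(2.0 + 1.0 + 0.5) = 1.75
```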

Organization

This seminar series is a joint effort organized by the Department of Mathematics at Wrocław University, the Department of Mathematics at the University of Burgundy, and the Department of Statistics at Lund University.

Lund University
University of Burgundy
Wrocław University