FoCM Events
FoCM 2021 Online Seminar Series
Starting in January 2021, this seminar series consisted of two lectures each month by plenary speakers of the conference Foundations of Computational Mathematics 2020 (FoCM 2020), which had been planned to take place in Vancouver but was cancelled due to the COVID-19 pandemic. Recordings of the talks are available on the FoCM seminar series YouTube channel.
January 29, 2021 [ Recording ]
When faced with a data analysis, learning, or statistical inference problem, the amount and quality of the available data fundamentally determines whether such tasks can be performed to a given level of accuracy. Indeed, many theoretical disciplines study the limits of such tasks by investigating whether a dataset effectively contains the information of interest. With the growing size of datasets, however, it is crucial not only that the underlying statistical task is possible, but also that it is doable by means of efficient algorithms. In this talk we will discuss methods aiming to establish limits on when statistical tasks are achievable with computationally efficient methods, or when there is a fundamental "statistical-to-computational gap" in which an inference task is statistically possible but inherently computationally hard. This is intimately related to understanding the geometry of random functions, with connections to statistical physics, the study of spin glasses, and random geometry, and, in an important example, to algebraic invariant theory.
Massive data collection holds the promise of a better understanding of complex phenomena and, ultimately, of better decisions. An exciting opportunity in this regard stems from the growing availability of perturbation / intervention data (drugs, knockouts, overexpression, etc.) in biology. In order to obtain mechanistic insights from such data, a major challenge is the development of a causal framework that integrates observational and interventional data to predict the effect of yet unseen interventions or transport the effect of interventions observed in one context to another. I will present a framework for causal structure discovery based on such data by discussing the underlying rich algebraic, combinatorial and geometric structure and also highlighting the role of overparameterized autoencoders. We end by demonstrating how these ideas can be applied to drug repurposing in the current COVID-19 crisis.
February 26, 2021 [ Recording ]
In a randomized controlled trial we randomly assign the treatment, such as a drug or a placebo, that each experimental subject receives. Randomization helps us accurately estimate the difference in treatment effects with high probability. We also want the groups of subjects receiving each treatment to be similar: ideally the two groups would be similar in every statistic we can measure beforehand. Making the groups similar can shrink confidence intervals when the statistics we measure beforehand are correlated with the outcomes we observe.
We show how to use the Gram-Schmidt Walk algorithm of Bansal, Dadush, Garg, and Lovett to randomly generate assignments of subjects to treatments with near-optimal balance. These assignments increase the precision of estimators to the extent that outcomes are linearly correlated with covariates. Moreover, the estimators can never be much worse than those produced by a uniform random assignment in the absence of such correlations.
We will explain randomized controlled trials, the Gram-Schmidt walk algorithm, and the major ideas behind our analyses. This is joint work with Chris Harshaw, Fredrik Sävje, and Peng Zhang.
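As a toy numerical illustration of the design problem (this is not the Gram-Schmidt Walk itself, and the sample sizes and covariates below are made-up choices), the covariate imbalance of a ±1 assignment vector z can be summarized by the norm of Xᵀz; the Gram-Schmidt Walk design chooses a random z that keeps this quantity small in every direction while staying close to uniformly random.

    import numpy as np

    # Toy illustration (NOT the Gram-Schmidt Walk itself): z[i] = +1 / -1 encodes the
    # treatment/control assignment of subject i, and ||X^T z|| summarizes the covariate
    # imbalance between the two groups. The Gram-Schmidt Walk design draws z so that
    # this imbalance stays small in every direction while z remains nearly uniform.
    rng = np.random.default_rng(0)
    n, d = 200, 5                                    # hypothetical subjects and covariates
    X = rng.normal(size=(n, d))                      # pre-treatment covariates, one row per subject

    imbalances = []
    for _ in range(1000):                            # uniform random assignment as a baseline
        z = rng.choice([-1, 1], size=n)
        imbalances.append(np.linalg.norm(X.T @ z))
    print(np.mean(imbalances))                       # typically of order sqrt(n * d)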
As we move towards a world that includes quantum computers operating at scale, we are forced to consider the question of which hard problems in mathematics our next generation of cryptographic systems will be based on. Supersingular Isogeny Graphs were proposed for use in cryptography in 2006 by Charles, Goren, and Lauter. Supersingular Isogeny Graphs are examples of Ramanujan graphs, which are optimal expander graphs. These graphs have the property that relatively short walks on the graph approximate the uniform distribution, and for this reason, walks on expander graphs are often used as a good source of randomness in computer science. But the reason these graphs are important for cryptography is that finding paths in these graphs, i.e. routing, is hard: there are no known subexponential algorithms to solve this problem, either classically or on a quantum computer. For this reason, cryptosystems based on the hardness of problems on Supersingular Isogeny Graphs are currently under consideration for standardization in the NIST Post-Quantum Cryptography (PQC) Competition. This talk will introduce these graphs, the cryptographic applications, and the various algorithmic approaches which have been tried to attack these systems.
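As a toy illustration of the mixing property on a random regular graph (not an actual supersingular isogeny graph; the degree 3 below is only motivated by the fact that 2-isogeny graphs are 3-regular, and the graph size is arbitrary), the distribution of a short random walk rapidly approaches uniform:

    import numpy as np
    import networkx as nx

    # Toy illustration of the expander/mixing property on a random 3-regular graph
    # (NOT an actual supersingular isogeny graph). The distribution of a lazy random
    # walk started at a single vertex approaches the uniform distribution quickly.
    n, d = 1024, 3
    G = nx.random_regular_graph(d, n, seed=1)
    A = nx.to_numpy_array(G)
    P = (np.eye(n) + A / d) / 2.0                    # lazy simple random walk

    p = np.zeros(n)
    p[0] = 1.0                                       # walk starts at a fixed vertex
    for t in range(1, 301):
        p = p @ P
        if t % 100 == 0:
            print(t, np.abs(p - 1.0 / n).sum() / 2)  # total variation distance to uniform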
March 26, 2021 [ Recording ]
Random matrices were introduced by John Wishart for the statistical analysis of large samples. In this talk, I will discuss recent developments in the theory of large deviations in random matrix theory and their applications in statistical learning.
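A minimal sketch of the object in question (the dimensions below are arbitrary): a Wishart, i.e. sample covariance, matrix built from Gaussian data, whose eigenvalues concentrate on the Marchenko-Pastur bulk; large deviation theory quantifies how unlikely excursions outside the bulk are.

    import numpy as np

    # Sketch: a Wishart (sample covariance) matrix from n Gaussian samples in dimension d.
    # For large n, d with d/n fixed, its eigenvalues concentrate on the Marchenko-Pastur bulk.
    rng = np.random.default_rng(0)
    n, d = 4000, 1000
    X = rng.normal(size=(n, d))
    W = X.T @ X / n                                  # Wishart / sample covariance matrix
    eigs = np.linalg.eigvalsh(W)
    gamma = d / n
    print(eigs.min(), (1 - np.sqrt(gamma))**2)       # left bulk edge vs. Marchenko-Pastur prediction
    print(eigs.max(), (1 + np.sqrt(gamma))**2)       # right bulk edge vs. prediction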
We describe a general framework for analyzing the large N behavior of optimal N-point configurations generated by short-range interactions. Thereby we provide a unified treatment of the asymptotics of minimal energy points for hypersingular Riesz kernels and of optimal quantizers for the p-norm. Our approach also yields new results for point configurations having prescribed densities generated by k-nearest-neighbor truncated Riesz energy interactions, and provides an analysis of a popular meshing algorithm due to Persson and Strang. Joint work with D.P. Hardin and O. Vlasiuk.
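A minimal numerical sketch (not the framework of the talk) of hypersingular Riesz energy minimization on the sphere; the number of points N, the exponent s, and the step size are arbitrary illustrative choices.

    import numpy as np

    # Decrease the Riesz s-energy  E = sum_{i != j} 1 / |x_i - x_j|^s  of N points on the
    # unit sphere S^2 by projected gradient descent; s > 2 = dim(S^2) is the hypersingular
    # (short-range) regime.
    rng = np.random.default_rng(0)
    N, s, lr = 100, 3.0, 1e-2

    def riesz_energy(X):
        diff = X[:, None, :] - X[None, :, :]
        r2 = (diff**2).sum(-1) + np.eye(N)           # identity avoids 0^(-s/2) on the diagonal
        return (r2**(-s / 2)).sum() - N

    X = rng.normal(size=(N, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)    # random starting points on the sphere
    print("initial energy:", riesz_energy(X))

    for _ in range(2000):
        diff = X[:, None, :] - X[None, :, :]
        r2 = (diff**2).sum(-1) + np.eye(N)
        grad = (-s * diff * r2[..., None]**(-(s + 2) / 2)).sum(axis=1)        # gradient of E (up to a constant factor)
        grad /= np.maximum(1.0, np.linalg.norm(grad, axis=1, keepdims=True))  # clip step for robustness
        X -= lr * grad
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # project back onto the sphere

    print("final energy:  ", riesz_energy(X))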
April 30, 2021 [ Recording ]
Inverse problems are about the reconstruction of an unknown physical quantity from indirect measurements. Most inverse problems of interest are ill-posed and require appropriate mathematical treatment for recovering meaningful solutions. Regularization is one of the main mechanisms to turn inverse problems into well-posed ones by adding prior information about the unknown quantity to the problem, often in the form of assumed regularity of solutions. Classically, such regularization approaches are handcrafted. Examples include Tikhonov regularization, the total variation, and sparsity-promoting regularizers such as the L1 norm of the wavelet coefficients of the solution. While such handcrafted approaches deliver mathematically and computationally robust solutions to inverse problems, providing a universal approach to their solution, they are also limited by our ability to model solution properties and to realise these regularization approaches computationally.
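For concreteness, here is a minimal sketch of the classical handcrafted approach mentioned above, Tikhonov regularization, on a discretized ill-conditioned forward operator; the blur operator, noise level, and regularization parameters are illustrative choices only.

    import numpy as np

    # Tikhonov regularization for a discrete ill-posed problem y = A x + noise:
    #   minimize  ||A x - y||^2 + alpha ||x||^2,
    # whose solution is x = (A^T A + alpha I)^{-1} A^T y.
    rng = np.random.default_rng(0)
    n = 100
    i = np.arange(n)
    A = np.exp(-(i[:, None] - i[None, :])**2 / (2 * 4.0**2))   # Gaussian blur, severely ill-conditioned
    A /= A.sum(axis=1, keepdims=True)
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    y = A @ x_true + 1e-3 * rng.normal(size=n)

    for alpha in [1e-10, 1e-6, 1e-3, 1e-1]:
        x_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
        print(alpha, np.linalg.norm(x_rec - x_true))            # too little and too much
                                                                # regularization both hurt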
Recently, a new paradigm has been introduced to the regularization of inverse problems, which derives regularization approaches in a data-driven way. Here, regularization is not mathematically modelled in the classical sense, but is instead represented by highly over-parametrised models, typically deep neural networks, that are adapted to the inverse problems at hand through appropriately selected (and usually plentiful) training data.
In this talk, I will review some machine learning based regularization techniques, present some work on unsupervised and deeply learned convex regularisers and their application to image reconstruction from tomographic and blurred measurements, and finish by discussing some open mathematical problems.
The speaker will give an overview of computational issues related to representing a given graph by a drawing on a surface whose genus is not too large. The task of deciding whether a graph can be drawn in the plane (the surface of genus 0) is well understood and solvable by fast linear-time algorithms. The talk will survey extensions to surfaces of larger (but bounded) genus and will discuss recent attempts to approximate the minimum possible genus when the graph has many edges.
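A small illustration of the genus-0 case using the planarity test available in networkx; the complete graphs K4 and K5 below are standard examples, and deciding the minimum genus of a general graph is NP-hard, which is one reason approximation is of interest.

    import networkx as nx

    # Planarity (genus-0 embeddability) can be tested quickly, and networkx also returns
    # a combinatorial embedding when the graph is planar.
    for name, G in [("K4", nx.complete_graph(4)), ("K5", nx.complete_graph(5))]:
        is_planar, embedding = nx.check_planarity(G)
        print(name, "planar:", is_planar)            # K4 is planar; K5 is not (its genus is 1)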
May 28, 2021 [ Recording ]
The development of new classification and regression algorithms based on deep neural networks, coined Deep Learning, has had a dramatic impact in the areas of artificial intelligence, machine learning, and data analysis. More recently, these methods have been applied successfully to the numerical solution of partial differential equations (PDEs). However, a rigorous analysis of their potential and limitations is still largely open. In this talk we will survey recent results contributing to such an analysis. In particular, I will present recent empirical and theoretical results supporting the capability of Deep Learning based methods to break the curse of dimensionality for several high-dimensional PDEs, including nonlinear Black–Scholes equations used in computational finance, Hamilton–Jacobi–Bellman equations used in optimal control, and stationary Schrödinger equations used in quantum chemistry. Despite these encouraging results, it is still largely unclear for which problem classes a Deep Learning based ansatz can be beneficial. To this end I will, in a second part, present recent work establishing fundamental limitations on the computational efficiency of Deep Learning based numerical algorithms that, in particular, confirm a previously empirically observed "theory-to-practice gap".
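As a concrete, deliberately low-dimensional example of a deep-learning ansatz for a PDE, here is a minimal residual-minimization ("physics-informed") sketch in PyTorch for a 1-D Poisson problem; this is only a toy model of the general idea, not the specific high-dimensional methods analyzed in the talk, and the network size, sampling, and learning rate are arbitrary.

    import math
    import torch

    # Train a small network u so that  u''(x) = -pi^2 sin(pi x)  on (0, 1) with
    # u(0) = u(1) = 0; the exact solution is u(x) = sin(pi x).
    torch.manual_seed(0)
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(3000):
        x = torch.rand(128, 1, requires_grad=True)           # interior collocation points
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        residual = d2u + math.pi**2 * torch.sin(math.pi * x)  # PDE residual at the samples
        boundary = net(torch.tensor([[0.0], [1.0]]))          # boundary condition penalty
        loss = (residual**2).mean() + (boundary**2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    x_test = torch.linspace(0, 1, 5).reshape(-1, 1)
    print(torch.cat([net(x_test), torch.sin(math.pi * x_test)], dim=1))  # learned vs exact values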
This is joint work with Geoffrey Chinot, Felix Kuchelmeister and Matthias Löffler.
It is observed in empirical studies that algorithms achieving zero training error can perform well on test sets. We aim at contributing to a theoretical understanding of this phenomenon. We consider the high-dimensional situation where there are many ways to achieve zero training error. We examine the minimum l1-norm interpolator, or the max-margin classifier in classification, in the special setting of a high-dimensional i.i.d. Gaussian design. We study its rate of convergence under l1-sparsity assumptions. Related is the noisy one-bit compressed sensing problem, where we apply the algorithm of Plan and Vershynin [2013] and (re-)establish rates under l0- and l1-sparsity conditions.
Reference:
Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66(8):1275–1297, 2013.
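A minimal sketch of the minimum l1-norm interpolator in a noiseless overparametrized linear model, computed as a linear program with SciPy; the dimensions and sparsity level below are illustrative choices only.

    import numpy as np
    from scipy.optimize import linprog

    # With d > n there are many coefficient vectors interpolating the data; solve
    #   minimize ||b||_1   subject to   X b = y
    # as a linear program by writing b = b_plus - b_minus with b_plus, b_minus >= 0.
    rng = np.random.default_rng(0)
    n, d, k = 50, 200, 5
    X = rng.normal(size=(n, d))                      # i.i.d. Gaussian design
    b_true = np.zeros(d)
    b_true[:k] = 1.0                                 # k-sparse ground truth
    y = X @ b_true

    c = np.ones(2 * d)                               # objective: sum of b_plus and b_minus entries
    A_eq = np.hstack([X, -X])                        # encodes X (b_plus - b_minus) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    b_hat = res.x[:d] - res.x[d:]
    print(np.linalg.norm(b_hat - b_true))            # near-exact recovery in this noiseless setting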
June 25, 2021 [ Recording ]
Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and superresolution all lie in this framework. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that can outperform more traditional approaches by large margins. In this talk, I will describe the central prevailing themes and tradeoffs of this emerging area. The core ideas underlying these methods draw upon ideas from statistics, optimization, signal processing, and machine learning. We will explore a new class of approaches based on implicit “infinite-depth” networks that yield reconstruction frameworks with provable convergence guarantees and significant empirical performance gains. Finally, I will describe mechanisms for “model adaptation”, that is, given a model trained to solve one inverse problem with a known forward model, we propose novel procedures that adapt the network to a perturbed forward model. This is joint work with Davis Gilton and Greg Ongie.
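To illustrate the basic template that such learned-reconstruction methods build on, here is a sketch of an iterative scheme alternating a data-consistency gradient step with a "denoising" step; the soft-thresholding denoiser below is a hand-crafted stand-in for a trained network, so this is an ISTA-like toy rather than the implicit-depth approach of the talk, and all problem sizes are arbitrary.

    import numpy as np

    # Alternate a gradient step on the data-fit term with a denoising/regularization step.
    rng = np.random.default_rng(0)
    n, m = 128, 64
    A = rng.normal(size=(m, n)) / np.sqrt(n)         # compressed-sensing style forward operator
    x_true = np.zeros(n)
    x_true[rng.choice(n, 10, replace=False)] = 1.0   # sparse ground-truth signal
    y = A @ x_true + 0.01 * rng.normal(size=m)

    def denoise(v, tau=0.02):                        # hand-crafted stand-in for a learned denoiser
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    eta = 1.0 / np.linalg.norm(A, 2)**2              # step size from the operator norm
    x = np.zeros(n)
    for _ in range(300):
        x = denoise(x - eta * A.T @ (A @ x - y))     # gradient step, then denoise

    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative reconstruction error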
Statistical inverse problems lead to complex optimisation and/or Monte Carlo sampling problems. Gradient descent and Langevin samplers provide examples of widely used algorithms. In my talk, I will discuss recent results on sampling algorithms, which can be viewed as interacting particle systems, and their mean-field limits. I will highlight the geometric structure of these mean-field equations within the so-called Otto calculus, that is, a gradient-flow structure in the space of probability measures. Affine invariance is an important outcome of recent work on the subject, a property shared by Newton’s method but not by gradient descent or ordinary Langevin samplers. The emerging affine-invariant gradient flow structures allow us to discuss coupling-based Bayesian inference methods, such as the ensemble Kalman filter, as well as invariance-of-measure-based inference methods, such as preconditioned Langevin dynamics, within a common mathematical framework. Applications include nonlinear and logistic regression.
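A minimal sketch of an ordinary (unadjusted, non-preconditioned) Langevin sampler for a Gaussian target, where the invariant measure can be checked against the known covariance; the step size and run length are arbitrary choices.

    import numpy as np

    # Unadjusted Langevin sampler for a target density proportional to exp(-U(x)):
    #   x_{k+1} = x_k - h * grad U(x_k) + sqrt(2 h) * xi_k,   xi_k ~ N(0, I).
    # Here U is the negative log-density of a 2-D Gaussian with covariance C.
    rng = np.random.default_rng(0)
    C = np.array([[2.0, 0.8], [0.8, 1.0]])           # target covariance
    P = np.linalg.inv(C)                             # precision matrix, so grad U(x) = P x

    h, n_steps = 0.01, 200_000
    x = np.zeros(2)
    samples = np.empty((n_steps, 2))
    for k in range(n_steps):
        x = x - h * (P @ x) + np.sqrt(2 * h) * rng.normal(size=2)
        samples[k] = x

    print(np.cov(samples[n_steps // 10:].T))         # close to C, up to O(h) discretization bias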
July 30, 2021 [ Recording ]
In computer vision, chirality refers to the constraint that for a scene to be visible in a camera, it must lie in front of the camera. Setting aside this important physical constraint, there is now a mature algebraic theory of image formation in pinhole cameras. In this talk I will discuss new structural results that arise from respecting chirality. Using real and semialgebraic geometry, one can explicitly describe the set of all true images in an arrangement of cameras. The converse question is when given tuples of points are indeed the images of world points lying in front of the cameras. I will give a complete answer in the case of two cameras, uncovering an unexpected connection to the classical theory of cubic surfaces with 27 real lines.
Joint work with Sameer Agarwal, Andrew Pryhuber and Rainer Sinn.
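A small numerical illustration of the chirality constraint for a single pinhole camera, with identity intrinsics and pose chosen purely for simplicity: both world points below yield image coordinates algebraically, but only the one with positive depth lies in front of the camera and is physically visible.

    import numpy as np

    # Pinhole projection and the chirality (positive depth) check for one camera.
    K = np.eye(3)                                    # camera intrinsics
    R = np.eye(3)                                    # rotation: camera aligned with world axes
    t = np.zeros(3)                                  # translation: camera at the origin

    def project(X):
        Xc = R @ X + t                               # point in camera coordinates
        x = K @ Xc
        return x[:2] / x[2], Xc[2]                   # image coordinates and depth

    for X in [np.array([0.2, 0.1, 2.0]), np.array([0.2, 0.1, -2.0])]:
        image_point, depth = project(X)
        print(image_point, "in front of camera:", depth > 0)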
I will talk about recently discovered interpolation formulas that involve the values of a function and its Fourier transform at certain discrete sets of real numbers. These formulas have been used in the proof of universal optimality of the E8 and Leech lattices, and, somewhat surprisingly, their construction relies on results from the theory of automorphic forms. I will also talk about recent results on Fourier uniqueness and non-uniqueness pairs and some related open questions.
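For orientation, the classical model case behind such formulas is the Shannon sampling theorem, which recovers a band-limited function from its values at the integers alone; the formulas in the talk are far subtler, using values of both the function and its Fourier transform at special discrete sets, and the test function below is an arbitrary choice.

    import numpy as np

    # Shannon sampling: if the Fourier transform of f is supported in [-1/2, 1/2], then
    #   f(x) = sum_n f(n) sinc(x - n).
    def f(x):
        return np.sinc((x - 0.3) / 2.0) ** 2         # band-limited to [-1/2, 1/2]

    n = np.arange(-500, 501)                         # truncate the interpolation series
    x_test = np.array([0.1, 0.77, 2.4])
    recon = (f(n)[None, :] * np.sinc(x_test[:, None] - n[None, :])).sum(axis=1)
    print(np.abs(recon - f(x_test)).max())           # small truncation error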
Past FoCM Events
Other important events that have been organised with FoCM's involvement are longer periods of study and research, typically focussed on a number of themes within the remit of the society.
Foundations of Computational Mathematics
Fields Institute
Fields Institute, Toronto, Autumn 2009
Click here for more information
Main themes:
Computational Algebraic Geometry and Symbolic Computation
Computational Number Theory
Computational Geometry, Topology, and Dynamics
Complexity and Computability in Real Computation
Optimization Theory
Follow-up Conference:
From Dynamics to Complexity
A conference celebrating the work of Mike Shub
Fields Institute, Toronto, May 2012
Foundations of Computational Mathematics
Hong Kong
Hong Kong, Autumn 1999
Main details
Workshop: Complexity of multivariate problems
Workshop: Complexity of equation solving and algebra
Workshop: Minimal energy problems
Workshop: Wavelets and multiresolution
Foundations of Computational Mathematics
MSRI
MSRI, Berkeley, August-December 1998
Organising committee
Participants
Introductory workshop
Workshop: Solving systems of equations
Workshop: Complexity