S. Rodríguez Santana, B. Zaldívar, D. Hernández Lobato
Implicit Processes (IPs) provide a flexible framework for describing a wide variety of models, ranging from Bayesian neural networks to neural samplers and normalizing flows, among others. IPs also allow for approximate inference in function space, a formulation that sidesteps intrinsic degeneracy problems of standard parameter-space approximate Bayesian inference, namely the high number of parameters and their strong dependencies in large models. For this reason, previous works in the literature have attempted to employ IPs both to set up the prior and to approximate the resulting posterior. However, this has proven challenging: existing approaches cannot both (i) tune the prior IP parameters according to the data and (ii) produce flexible predictive distributions (which are mostly restricted to be Gaussian). We will review previous contributions and showcase new ideas from our recently proposed method, which accomplishes both goals simultaneously for the first time.
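As a minimal illustrative sketch (not the authors' method), a Bayesian neural network prior already defines an implicit process: sampling the weights from their prior induces a distribution over functions whose density has no closed form. The network architecture and prior scales below are arbitrary choices for illustration.

```python
import numpy as np

def sample_ip_function(rng, hidden=50):
    """Draw one function sample from a BNN prior, i.e., from an implicit process.

    The weight prior induces a distribution over functions f(x) with no
    closed-form density (hence 'implicit').
    """
    w1 = rng.normal(0.0, 1.0, size=(1, hidden))
    b1 = rng.normal(0.0, 1.0, size=hidden)
    w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), size=(hidden, 1))
    b2 = rng.normal(0.0, 1.0, size=1)

    def f(x):
        # x: array of shape (n,); returns function values of shape (n,)
        h = np.tanh(x[:, None] @ w1 + b1)
        return (h @ w2 + b2).ravel()

    return f

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
# Each row is one function sample evaluated on the grid x
samples = np.stack([sample_ip_function(rng)(x) for _ in range(10)])
print(samples.shape)  # (10, 100)
```

Approximate function-space inference then operates directly on such function samples rather than on the weight posterior.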
Keywords: Function-space inference, approximate Bayesian inference, implicit stochastic processes
Scheduled
Bayesian methods
June 7, 2022 4:50 PM
Audiovisual room