## Page Not Found

Page not found. Your pixels are in another canvas.

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
This is a page not in the main menu.

**Published:**

Bayesian machine learning is all about learning a good representation of very complicated datasets, leveraging cleverly structured models and effective parameter estimation techniques to create a high-dimensional probability distribution approximating the observed data. A key advantage of posing computer vision research under the umbrella of Bayesian inference is that some tasks become really straightforward with the right choice of model.

**Published:**

**Summary: I show how to calculate a multivariate effective sample size after Vats et al. (2019).** In applied statistics, Markov chain Monte Carlo (MCMC) is now widely used to fit statistical models. Suppose we have a statistical model $p_\theta$ of some dataset $\mathcal{D}$ with parameter $\theta$. The basic idea behind MCMC is to estimate $\theta$ by generating $N$ random variates $\theta_1,\theta_2,\dots$ from the posterior distribution $p(\theta\vert\mathcal{D})$ which are (hopefully) distributed around $\theta$ in a predictable way. The Bayesian central limit theorem states that, under the right conditions, the $\theta_i$ are normally distributed about the true parameter value $\theta$ with some variance $\sigma^2$. We might then want to use the mean of these samples, $\hat{\theta}=\frac{1}{N}\sum_{i=1}^N \theta_i$, as an estimator of $\theta$, since this *posterior mean* is an optimal estimator with respect to Bayes risk under squared-error loss.
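As a rough illustration of the idea, here is a minimal NumPy sketch of a multivariate effective sample size in the spirit of Vats et al. (2019), using a batch-means estimate of the asymptotic covariance; the function name, the square-root batch-size heuristic, and the simple trimming of leftover samples are my own illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def multivariate_ess(chain, batch_size=None):
    """Multivariate effective sample size via batch means.

    chain: (n, p) array of MCMC draws, one row per sample.
    Returns mESS = n * (det(Lambda) / det(Sigma))^(1/p), where
    Lambda is the sample covariance of the chain and Sigma is a
    batch-means estimate of the asymptotic covariance.
    """
    n, p = chain.shape
    if batch_size is None:
        batch_size = int(np.floor(np.sqrt(n)))  # common heuristic
    n_batches = n // batch_size
    trimmed = chain[: n_batches * batch_size]

    # Sample covariance Lambda of the raw draws.
    lam = np.cov(trimmed, rowvar=False)

    # Batch-means estimate Sigma of the asymptotic covariance.
    batch_means = trimmed.reshape(n_batches, batch_size, p).mean(axis=1)
    diff = batch_means - trimmed.mean(axis=0)
    sigma = batch_size * diff.T @ diff / (n_batches - 1)

    # Work with log-determinants for numerical stability.
    _, logdet_lam = np.linalg.slogdet(lam)
    _, logdet_sigma = np.linalg.slogdet(sigma)
    return n * np.exp((logdet_lam - logdet_sigma) / p)
```

For independent draws the batch-means estimate of $\Sigma$ roughly matches $\Lambda$, so the effective sample size comes out near $n$; positively autocorrelated chains inflate $\Sigma$ and shrink the result.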

**Published:**

Images, text and audio are examples of *structured* multivariate data: there is a total or partial ordering over the entries of each data point, and the data may exhibit *long-range* structure extending over many pixels, words or seconds of speech. As a consequence, these kinds of data are difficult to model with models that capture only short-range structure, such as HMMs, or only pairwise dependencies, such as the covariance matrix of a multivariate normal distribution. What if we'd like to build Bayesian models with more sophisticated structure?

**Published:**

In Bayesian machine learning, deep generative models are providing exciting ways to extend our understanding of optimization and flexible parametric forms to more conventional statistical problems, while simultaneously lending insight from probabilistic modeling to AI / ML. This is an exciting time to be studying the topic, as it blends results from probability theory, statistical physics, deep learning and information theory in sometimes surprising ways. This post is a short summary of some of the major work on the subject and serves as an annotated bibliography of its most important developments. It also uses a common notation to help smooth over differences in detail between papers.

Published in *Journal 1*, 2009

This paper is about the number 1. The number 2 is left for future work.

Recommended citation: Your Name, You. (2009). "Paper Title Number 1." *Journal 1*. 1(1). __http://academicpages.github.io/files/paper1.pdf__

Published in *Journal 1*, 2010

This paper is about the number 2. The number 3 is left for future work.

Recommended citation: Your Name, You. (2010). "Paper Title Number 2." *Journal 1*. 1(2). __http://academicpages.github.io/files/paper2.pdf__

Published in *Journal 1*, 2015

This paper is about the number 3. The number 4 is left for future work.

Recommended citation: Your Name, You. (2015). "Paper Title Number 3." *Journal 1*. 1(3). __http://academicpages.github.io/files/paper3.pdf__

**Published:**

This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!

**Published:**

This is a description of your conference proceedings talk; note the different value in the `type` field. You can put anything in this field.

Undergraduate course, *University 1, Department*, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Workshop, *University 1, Department*, 2015

This is a description of a teaching experience. You can use markdown like any other post.