Confusion in modelling finite mixture model

$begingroup$ From the book *Machine Learning: A Probabilistic Perspective*, I'm reading about finite/infinite mixture models. In particular, Section 25.2.1 states:

> The usual representation (of a finite mixture model) is as follows:
> $$p(x_i \mid z_i = k, \boldsymbol\theta) = p(x_i \mid \boldsymbol\theta_k)$$
> $$p(z_i = k \mid \boldsymbol\pi) = \pi_k$$
> $$p(\boldsymbol\pi \mid \alpha) = \mathrm{Dir}(\boldsymbol\pi \mid (\alpha/K)\boldsymbol{1}_K)$$
> The form of $p(\boldsymbol\theta_k \mid \lambda)$ is chosen to be conjugate to $p(x_i \mid \boldsymbol\theta_k)$. We can write $p(x_i \mid \boldsymbol\theta_k)$ as $\boldsymbol{x}_i \sim F(\boldsymbol\theta_{z_i})$, where $F$ is the observation distribution. Similarly, we can write $\boldsymbol\theta_k \sim H(\lambda)$, where $H$ is the prior.

This modelling is quite confusing to me. What is the difference between $\boldsymbol\theta_k$ and $\boldsymbol\theta_{z_i}$? What is meant by "observation distribution"? Can we apply
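To make the generative story concrete, here is a minimal sketch of sampling from this model. The book leaves $F$ and $H$ generic; as an assumption for illustration, I take $F$ to be a unit-variance Gaussian and $H$ a standard normal prior on the component means:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3          # number of mixture components
N = 10         # number of observations
alpha = 1.0    # Dirichlet concentration parameter

# pi | alpha ~ Dir((alpha/K) * 1_K): mixing weights
pi = rng.dirichlet(np.full(K, alpha / K))

# theta_k ~ H(lambda): assumed here to be a standard normal prior on means
theta = rng.normal(loc=0.0, scale=1.0, size=K)

# z_i | pi ~ Cat(pi): component assignment for each observation
z = rng.choice(K, size=N, p=pi)

# x_i | z_i ~ F(theta_{z_i}): observation distribution, assumed N(theta_{z_i}, 1).
# theta[z] is theta_{z_i}: the parameter of whichever component x_i was assigned to,
# which is what the theta_k vs theta_{z_i} notation distinguishes.
x = rng.normal(loc=theta[z], scale=1.0)
```

Note the last line: `theta[z]` picks out, for each $i$, the parameter $\boldsymbol\theta_{z_i}$ indexed by that observation's assignment $z_i$, whereas `theta[k]` is the fixed parameter of component $k$.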