Talk:Variational autoencoder
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
Hi, the article is really interesting and well detailed, and I believe it will be a really helpful starting point for those who are willing to study this topic. I just fixed some minor things, like adding a missing comma or replacing a term with a synonym. It would be nice if you could add a paragraph with some applications of this neural network :) --Lavalec (talk) 14:00, 18 June 2021 (UTC)
Hi, I confirm that the article is interesting and detailed. I'm not an expert in this field, but I understood the basics. --Beatrice Lotesoriere (talk) 14:32, 18 June 2021 (UTC)Beatrice Lotesoriere
Very well written article. I just made some minor language changes in a few sections. The only thing I would probably add is some citations in the Formulation section. --Wario93 (talk) 15:40, 18 June 2021 (UTC)
Good article, but I had to get rid of a bunch of unnecessary fluff in the Architecture section which obscured the point (diff : http://en.wiki.x.io/w/index.php?title=Variational_autoencoder&type=revision&diff=1040705234&oldid=1039806485 ). 26 August 2021
I disagree, the article really needs attention, it is very hard to understand the "Formulation" part now. I propose the following changes for the first paragraphs, but subsequent ones need revision as well:
From a formal perspective, given an input dataset x drawn from an unknown probability distribution and a multivariate latent encoding vector z, the objective is to model the data as a parametric distribution p_theta(x), where theta is the set of network parameters to be learned.
For the parametric model we assume that each x is associated with (arises from) a latent encoding vector z, and we write p_theta(x, z) to denote their joint density.
We can then write
p_theta(x) = ∫ p_theta(x, z) dz,
where p_theta(x) is the evidence of the model for the data, the marginalization is performed over the unobserved latent variables z, and p_theta(x, z) is the joint distribution of the input data and its latent representation under the network parameters theta.
193.219.95.139 (talk) 10:18, 2 October 2021 (UTC)
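To make the proposal above easier to follow, here is the next step it is pointing at, written out in LaTeX (standard VAE setup, not a quote from the article; the prior is written as an unparametrized p(z), which is debated in a section further down):

\log p_\theta(x) = \log \int p_\theta(x, z)\, dz = \log \int p_\theta(x \mid z)\, p(z)\, dz

For a neural-network likelihood this integral has no closed form, which is why the variational posterior and the free energy / ELBO discussed in the later sections are introduced.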
Observations and suggestions for improvements
The following observations and suggestions for improvements were collected, following expert review of the article within the Science, Technology, Society and Wikipedia course at the Politecnico di Milano, in June 2021.
"Minor corrections:
- single layer perceptron => single-layer perceptron
- higher level representations => higher-level representations
- applied with => applied to
- composed by => composed of
- Information retrieval benefits => convoluted sentence
- modelling the relation between => modelling the relationship between
- predicting popularity => predicting the popularity"
Ettmajor (talk) 10:06, 11 July 2021 (UTC)
Does the prior depend on theta or not?
In a vanilla Gaussian VAE, the prior follows a standard Gaussian with zero mean and unit variance, i.e., there is no parametrization (theta or otherwise) concerning the prior of the latent representations. On the other hand, the article as well as [Kingma&Welling2014] both parametrize the prior as p_theta(z), just like the likelihood p_theta(x|z). Clearly, the latter makes sense, since it is the very goal to learn theta through the probabilistic decoder as a generative model for the likelihood p_theta(x|z). So is there a deeper meaning or sense in parametrizing the prior as p_theta(z) as well, with the very same parameters theta as the likelihood, or is it in fact a typo/mistake? — Preceding unsigned comment added by 46.223.162.38 (talk) 22:11, 11 October 2021 (UTC)
The prior is not dependent on the parameters theta, but rather on a different set of parameters. — Preceding unsigned comment added by 134.106.109.104 (talk) 12:22, 14 September 2022 (UTC)
- I also found this incredibly confusing, as the prior on z is usually fixed and doesn't depend on any parameter. EitanPorat (talk) 00:16, 19 March 2023 (UTC)
- I see the confusion. p(z) is a probability distribution, but sometimes the same notation is written with a parameter subscript to indicate that it is actually a parameterized function! The article should be cleaned up: the encoder should be called q_phi everywhere and the decoder should be called p_theta. The reason is that the KL divergence term of the free energy involves only the encoder, so its gradients update only the encoder parameters phi, while the reconstruction term also sends gradients back to the encoder through the decoder parameters theta. 46.199.5.20 (talk) 19:47, 26 December 2024 (UTC)
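For reference, a cleaned-up version of that notation would look like this (standard convention with the vanilla fixed prior; the symbols are illustrative, not quoted from the article):

q_\phi(z \mid x): encoder / variational posterior (parameters \phi)
p_\theta(x \mid z): decoder / likelihood (parameters \theta)
p(z) = \mathcal{N}(0, I): prior, fixed and parameter-free in the vanilla case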
The image shows just a normal autoencoder, not a variational autoencoder
There is an image with a caption saying it is a variational autoencoder, but it is showing just a plain autoencoder.
In a different section, there is something described as a "trick", which seems to be the central point that distinguishes autoencoders from variational autoencoders.
I'm not sure whether that image should just be removed, or whether it makes sense in the section anyway. Volker Siegel (talk) 14:18, 24 January 2022 (UTC)
- Just to make this point clear: the reparameterization trick is for the gradients! The trick moves the source of randomness to a separate node in the DAG that has no parameters, so that gradients can propagate through the rest of the DAG, which is now a deterministic function. 82.102.110.228 (talk) 18:57, 27 December 2024 (UTC)
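A minimal runnable PyTorch sketch of that point (mu and log_sigma here are stand-in tensors for the encoder outputs, not code from the article):

import torch

# stand-ins for the encoder outputs on one input; in a real VAE these come from a network
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)

eps = torch.randn(2)             # parameter-free noise node: all randomness lives here
z = mu + log_sigma.exp() * eps   # z is a deterministic function of mu and log_sigma
z.sum().backward()               # gradients flow back to mu and log_sigma despite the sampling
print(mu.grad, log_sigma.grad)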
This is a highly technical topic
In the past users have removed much of the technicality involved in the topic. Wikipedia does not have a limit to the depth of technicality; however, Simple Wikipedia does. If you find yourself wanting to remove technical depth from the article, please edit the Simple Wikipedia article. 2A01:C23:7C81:1A00:2B9B:EB91:3CC5:3222 (talk) 10:31, 19 November 2022 (UTC)
Overview section is poorly written
The architecture section is filled with unclear phrases and undefined terms. For example, "noise distribution", "q-distributions or variational posteriors", "p-distributions", "amortized approach", "which is usually intractable" (what is intractable?), "free energy expression". None of these are defined. It is unclear if this section of the article is useful to anyone who is not already familiar with how variational autoencoders work. Joshuame13 (talk) 15:14, 31 January 2023 (UTC)
- I've fixed most of those. The free energy really needs its own section. It is a lower bound that is obtained by using Jensen's inequality on the log likelihood. However, I don't think that Jensen's inequality is within the scope of this article. 46.199.5.20 (talk) 19:50, 26 December 2024 (UTC)
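For anyone drafting such a section, the one-line version is the standard Jensen derivation, written here in the q_\phi / p_\theta notation used above:

\log p_\theta(x) = \log \mathbb{E}_{q_\phi(z \mid x)}\!\left[\frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right] \ge \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x, z) - \log q_\phi(z \mid x)\right] = \mathrm{ELBO}(x) = -F(x)

where the inequality is Jensen's applied to the concave logarithm and F(x) is the variational free energy.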
The ELBO section needs more derivation
[edit]"The form given is not very convenient for maximization, but the following, equivalent form, is:"
There should be more steps to explain how the equivalent form is obtained from the "given" one. Also, the dot placeholder notation is used inconsistently between the two forms. PromethiumL (talk) 18:08, 12 February 2023 (UTC)
- I agree p_theta(z) doesn't make sense. EitanPorat (talk) 00:17, 19 March 2023 (UTC)
- Agreed. It should be p_phi(z) or even better q_phi(z). 46.199.5.20 (talk) 20:22, 26 December 2024 (UTC)
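Since this thread asks for the missing steps, the rearrangement in question is the following (assuming a fixed prior p(z), per the discussion higher up):

\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x, z) - \log q_\phi(z \mid x)\right] = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)

using the factorization p_\theta(x, z) = p_\theta(x \mid z)\, p(z) and the definition of the KL divergence.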
Rating this article C-class
This article has great potential. Excellent technical content. But I just rated it "C" because it seems to have gained both content and noise over the past six months. I've tried for a couple of hours to improve the clarity of the central idea of a VAE, but I'm not satisfied with my efforts. In particular, it is still unclear to me whether both the encoder and decoder are technically random, whether any randomness should be added in the decoder, or what (beyond Z) is modeled with a multimodal Gaussian in the basic VAE. I see no reason why this article should not be accessible both to casual readers and to the technically proficient, but we are far from there yet.
In particular, the introductory figure shows x being mapped to a gaussian figure and back to x'. It would be good to explicitly state how the encoder and decoder in this figure relate to the various distributions used throughout the article, but I'm not confident on how to do so. Yoderj (talk) 19:25, 15 March 2024 (UTC)
I will try to give simple answers to your questions. "Encoder" is a bad name and confuses people. In actuality, it is a Gaussian distribution: it has a mean and a variance, each given by a neural network. These networks are initially random and are trained using gradients from the loss function. The decoder is also a Gaussian distribution, whose mean and variance are given by another neural network. Is the decoder technically random? It depends. During training you want to estimate an expectation, and to do that you have to draw samples, so the result is random; in that sense the decoder is random during training. On the other hand, if you are just doing inference, the decoder can be non-random: since the output is Gaussian, you can take a single sample from the encoder and output the decoder mean, in the spirit of maximum a posteriori. You may ask: if it is a Gaussian decoder, don't you also have to add the variance? That is a very fair question, but there are many applications where you can ignore it and take just the mean as the single output.
I hope that this clears things up. We have four variables: the mean and variance of the encoder, and the mean and variance of the decoder. These variables can be multidimensional for the multivariate Gaussian, but they are still four variables. Here are some equations to help you understand:
z = mu(x) + sigma(x)*epsilon   # reparameterization trick; epsilon is a fresh draw from N(0, I)
x' = MU(z) + SIGMA(z)*epsilon  # decoder sampling; an independent N(0, I) draw (often dropped at inference)
And here is the legend:
x: input
z: sample from the latent distribution, aka a sample from the encoder: the output of mu(x) plus the output of sigma(x) scaled by the random noise
mu, sigma: encoder neural networks
MU, SIGMA: decoder neural networks
x': output
At the end of the day, people have to juggle the interaction of two probability distributions. I doubt that it can be simplified enough for the general populace at this time.
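To make the four quantities above concrete, here is a minimal runnable NumPy sketch of the forward pass described in those equations. The single-layer, untrained "networks" and the dimensions are stand-ins just to make it executable; they are not the article's architecture.

import numpy as np

rng = np.random.default_rng(0)
d_x, d_z = 4, 2                                      # illustrative dimensions

# stand-in "networks": untrained linear maps (a real VAE uses deeper, trained networks)
W_mu, W_logsig = rng.normal(size=(d_z, d_x)), rng.normal(size=(d_z, d_x))
W_MU, W_LOGSIG = rng.normal(size=(d_x, d_z)), rng.normal(size=(d_x, d_z))

def mu(x):    return W_mu @ x                        # encoder mean
def sigma(x): return np.exp(W_logsig @ x)            # encoder standard deviation (kept positive)
def MU(z):    return W_MU @ z                        # decoder mean
def SIGMA(z): return np.exp(W_LOGSIG @ z)            # decoder standard deviation

x = rng.normal(size=d_x)                             # input
z = mu(x) + sigma(x) * rng.standard_normal(d_z)      # reparameterized sample from the encoder
x_prime = MU(z)                                      # deterministic decode: just the mean, variance ignored
x_prime_sampled = MU(z) + SIGMA(z) * rng.standard_normal(d_x)   # decode with decoder noise, as during training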