Comparing discriminative and generative models
Learning the conditional distribution, p(y | x), is easier because you do not have to make assumptions about the marginal distributions of x or y.
We will use the following diagram to illustrate the differences between discriminative and generative models. We can see two plots with 13 points of a two-dimensional dataset; let's call the blue class labels y = 0 and the yellow class labels y = 1:

When training a discriminative model, p(y | x), we want to estimate the hidden parameters of the model that describe the conditional probability distribution, which provides a decision boundary with an optimal split between the classes at hand. When training a generative model, p(x, y), we want to estimate the parameters that describe the joint probability distribution of x and y.
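To make the contrast concrete, here is a minimal sketch in Keras, assuming made-up toy data rather than the dataset from the diagram: a single sigmoid unit that estimates p(y | x) directly and says nothing about how x itself is distributed.

```python
# A minimal sketch, assuming made-up toy data (not the dataset from the
# diagram): a discriminative model that estimates p(y | x) directly.
import numpy as np
from tensorflow import keras

# Two-dimensional toy dataset: a blue cluster (y = 0) and a yellow one (y = 1).
rng = np.random.default_rng(0)
x = np.vstack([rng.normal([-2.0, 0.0], 1.0, (50, 2)),   # blue,   y = 0
               rng.normal([2.0, 0.0], 1.0, (50, 2))])   # yellow, y = 1
x = x.astype("float32")
y = np.concatenate([np.zeros(50), np.ones(50)]).astype("float32")

# A single sigmoid unit: its weights define a linear decision boundary.
model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=100, verbose=0)

# p(y = 1 | x) for a new point; nothing here models the distribution of x.
print(model.predict(np.array([[0.5, 0.0]], dtype="float32"), verbose=0))
```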
In addition to predicting the conditional probability, learning the joint probability distribution allows us to sample the learned model to generate new data from p(x, y), where x is conditioned on y and y is conditioned on x. In the preceding diagram, for example, you could model the joint probability by learning the hidden parameters of a mixture distribution, such as a Gaussian mixture with one component per class.
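As a rough sketch of that idea, the snippet below fits one Gaussian per class to made-up toy data (an assumption, not the book's dataset). Together with the class priors, the per-class means and covariances define the joint distribution p(x, y) = p(y) N(x | mu_y, Sigma_y), which can then be sampled to generate new points.

```python
# A minimal sketch, assuming made-up toy data: a generative model with one
# Gaussian component per class, so that p(x, y) = p(y) * N(x | mu_y, Sigma_y).
import numpy as np

rng = np.random.default_rng(0)
x = np.vstack([rng.normal([-2.0, 0.0], 1.0, (50, 2)),   # blue,   y = 0
               rng.normal([2.0, 0.0], 1.0, (50, 2))])   # yellow, y = 1
y = np.concatenate([np.zeros(50), np.ones(50)])

# Estimate the hidden parameters of each component: prior, mean, covariance.
params = {}
for label in (0, 1):
    x_c = x[y == label]
    params[label] = {
        "prior": len(x_c) / len(x),         # p(y)
        "mean": x_c.mean(axis=0),           # mu_y
        "cov": np.cov(x_c, rowvar=False),   # Sigma_y
    }

# Generate a new point: draw a class from p(y), then x from N(mu_y, Sigma_y).
new_y = rng.choice([0, 1], p=[params[0]["prior"], params[1]["prior"]])
new_x = rng.multivariate_normal(params[new_y]["mean"], params[new_y]["cov"])
print(new_y, new_x)
```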
Another way to visualize the difference between generative and discriminative models is to look at a graphical depiction of the distribution that is being modeled. In the following diagram, we can see that the depiction of the discriminative model shows a decision boundary that can be used to define the class label, given some fixed data. In this case, predicting p(y | x) can be seen as finding a decision boundary such that the distance of a datapoint to the boundary is proportional to the probability of that datapoint belonging to a class.
In a binary classification task on a single variable, let's call it x, the simplest form of such a model is to find the boundary at which the most samples are labeled correctly. In the following figure, the value of x that maximizes the number of correct labels is around 50.
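The sketch below illustrates that idea with made-up one-dimensional data, chosen so that the best split falls near 50 as in the figure described above: it simply tries every candidate boundary and keeps the one that labels the most samples correctly.

```python
# A minimal sketch, assuming made-up one-dimensional data arranged so that
# the best split falls near 50: try every candidate boundary and keep the
# one that labels the most samples correctly.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(40, 5, 100),    # class y = 0
                    rng.normal(60, 5, 100)])   # class y = 1
y = np.concatenate([np.zeros(100), np.ones(100)])

thresholds = np.linspace(x.min(), x.max(), 500)
accuracy = [np.mean((x > t) == y) for t in thresholds]
best = thresholds[int(np.argmax(accuracy))]
print(f"best boundary is around {best:.1f}")
```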
The following depiction of the generative model shows the exact distribution of x in the presence and absence of y. Naturally, given that we know the exact distribution of x and y, we can sample it to generate new data:

Since generative models handle the hard task of modeling all the dependencies and patterns in the input and output data, their applications are countless. The deep learning field has produced state-of-the-art generative models for applications such as image generation, speech synthesis, and model-based control.
A fascinating aspect of generative models is that they are potentially capable of learning large and complex data distributions with a relatively small number of parameters. Unlike discriminative models, generative models can learn meaningful features from large unlabeled datasets, with little to no human supervision.
Most recent work in generative models has been focused on GANs and likelihood-based methods, including autoregressive models, Variational Autoencoders (VAEs), and flow-based models. In the following paragraphs, we will describe likelihood-based models and variations thereof. Later, we will describe the GAN framework in detail.