
Discriminator and generator loss

Given the setup that we have described, D and G play an iterative two-player minimax game with the value function V(D, G):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

Literally speaking, the discriminator minimizes its loss when D(x) is equal to 1 and D(G(z)) is equal to 0, that is, when its probability of real is 1 for real data and 0 for fake data. Hence, the discriminator maximizes D(x) while pushing D(G(z)) toward 0, which amounts to maximizing the value function V(D, G).

The generator, on the other hand, minimizes its loss when D(G(z)) is equal to 1, that is, when the discriminator assigns a probability of real of 1 to the fake data. Hence, the generator maximizes D(G(z)), which amounts to minimizing the value function V(D, G).
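To make this concrete, here is a minimal sketch of the two losses, assuming PyTorch and a discriminator that outputs probabilities in (0, 1); the function names and dummy tensors are illustrative, not taken from the original paper:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    # The discriminator maximizes log D(x) + log(1 - D(G(z))), which is
    # the same as minimizing this pair of binary cross-entropy terms.
    real_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    fake_loss = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    return real_loss + fake_loss

def generator_loss(d_fake):
    # The generator pushes D(G(z)) toward 1. This is the non-saturating
    # form, -log D(G(z)), commonly used in practice instead of minimizing
    # log(1 - D(G(z))) directly, which has vanishing gradients early on.
    return F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))

# Dummy discriminator outputs for two real and two generated samples:
d_real = torch.tensor([0.9, 0.8])  # D(x) close to 1: D is confident
d_fake = torch.tensor([0.2, 0.3])  # D(G(z)) close to 0: D is not fooled

print(discriminator_loss(d_real, d_fake))  # low: D separates real from fake
print(generator_loss(d_fake))              # high: G is not fooling D yet
```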

The objective function described in the preceding equation is equivalent to minimizing the Jensen-Shannon (JS) divergence between the data distribution and the generator's distribution, as described in the paper Generative Adversarial Nets by Ian Goodfellow et al. The JS divergence is a symmetric and smoothed version of the Kullback-Leibler (KL) divergence, described in the paper On information and sufficiency by Kullback and Leibler. Note that in the following equation, D stands for divergence, not discriminator:

$$D_{JS}(P \| Q) = \frac{1}{2} D_{KL}(P \| M) + \frac{1}{2} D_{KL}(Q \| M), \quad \text{where } M = \frac{1}{2}(P + Q)$$

The KL divergence $D_{KL}(P \| Q)$ on a continuous support is defined as follows:

$$D_{KL}(P \| Q) = \int_{-\infty}^{\infty} p(x) \log \frac{p(x)}{q(x)} \, dx$$
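As a quick illustration, the following sketch evaluates both divergences for two small discrete distributions (sums replacing the integral); the helper names kl_divergence and js_divergence are our own, not from a library:

```python
import numpy as np

def kl_divergence(p, q):
    # Discrete D_KL(P || Q) = sum over x of p(x) * log(p(x) / q(x));
    # terms with p(x) = 0 contribute nothing by convention.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def js_divergence(p, q):
    # D_JS(P || Q) = 0.5 * D_KL(P || M) + 0.5 * D_KL(Q || M),
    # where M = 0.5 * (P + Q) is the mixture distribution.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

p = [0.4, 0.6, 0.0]
q = [0.3, 0.3, 0.4]
print(kl_divergence(p, q))  # ~0.531; note KL(q, p) would be infinite
print(js_divergence(p, q))  # ~0.168; symmetric and bounded by log(2)
```

With natural logarithms, as in the integral definition above, the JS divergence is always bounded above by log(2) ≈ 0.693, whereas the KL divergence is unbounded.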

A closer look at the KL divergence described earlier reveals a problem: wherever the support of P is not contained in the support of Q, that is, wherever p(x) > 0 but q(x) = 0, the integrand, and hence the loss, explodes to infinity. The short demonstration below makes this concrete.
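Here is a self-contained sketch of the extreme case: two distributions with disjoint supports give an infinite KL divergence, while the JS divergence stays finite because both of its KL terms are taken against the mixture M:

```python
import numpy as np

# P puts all of its mass where Q has none (disjoint supports).
p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])
m = 0.5 * (p + q)  # the JS mixture distribution

# KL(P || Q): the term p(0) * log(p(0) / q(0)) divides by zero, so the
# divergence is infinite; np.errstate silences the expected warning.
with np.errstate(divide="ignore"):
    kl_pq = np.sum(p[p > 0] * np.log(p[p > 0] / q[p > 0]))

# JS(P || Q): both KL terms are taken against M, which covers the
# union of the supports, so they stay finite.
js_pq = 0.5 * np.sum(p[p > 0] * np.log(p[p > 0] / m[p > 0])) \
      + 0.5 * np.sum(q[q > 0] * np.log(q[q > 0] / m[q > 0]))

print(kl_pq)  # inf
print(js_pq)  # ~0.693, i.e. log(2), the JS divergence's maximum
```

In the next section, we will address this topic within the context of the strengths and weaknesses of GANs.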
