
Training through adversarial play in GANs

In a GAN, the two networks are trained through adversarial play: each competes against the other. As an example, let's assume that we want the GAN to create forgeries of artworks:

  1. The first network, the generator, has never seen the real artwork but is trying to create an artwork that looks like the real thing.
  2. The second network, the discriminator, tries to identify whether an artwork is real or fake.
  3. The generator, in turn, tries to fool the discriminator into thinking that its fakes are the real deal by creating more realistic artwork over multiple iterations.
  4. The discriminator tries to outwit the generator by continuing to refine its own criteria for determining a fake.
  5. In each iteration, the networks guide each other: the improvements each one makes provide feedback that the other must respond to. This feedback loop is the training of the GAN.
  6. Ultimately, this contest trains the generator to the point at which the discriminator can no longer determine which artwork is real and which is fake.

In this game, both networks are trained simultaneously. When we reach a stage at which the discriminator is unable to distinguish between real and fake artworks, the networks attain a state known as Nash equilibrium. This will be discussed later on in this chapter.
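To make the adversarial loop concrete, the following is a minimal sketch of one training step in PyTorch. The architectures, layer sizes, and hyperparameters here are illustrative assumptions rather than a prescribed implementation; the structure of the loop is what matters: the discriminator is updated on real and fake batches first, and the generator is then updated to fool the fixed discriminator.

```python
# A minimal sketch of adversarial training (assumed PyTorch setup;
# the network shapes and hyperparameters below are illustrative).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assumed sizes, e.g. flattened 28x28 images

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim), nn.Tanh())
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Train the discriminator: learn to tell real from fake.
    z = torch.randn(batch_size, latent_dim)
    fake_batch = G(z).detach()  # detach so this step does not update G
    loss_D = (bce(D(real_batch), real_labels) +
              bce(D(fake_batch), fake_labels))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2. Train the generator: try to fool the (now fixed) discriminator.
    z = torch.randn(batch_size, latent_dim)
    loss_G = bce(D(G(z)), real_labels)  # G wants D to answer "real"
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

Note that the fake batch is detached while the discriminator is being trained, so the discriminator's loss never updates the generator's weights. The two networks are optimized in alternating steps within the same loop, which is what training them "simultaneously" means in practice.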
