Please use this identifier to cite or link to this item: http://dspace.iitrpr.ac.in:8080/xmlui/handle/123456789/3227
Full metadata record
dc.contributor.author: Sidheekh, S.
dc.contributor.author: Aimen, A.
dc.contributor.author: Madan, V.
dc.contributor.author: Krishnan, N. C.
dc.date.accessioned: 2021-11-22T09:47:48Z
dc.date.available: 2021-11-22T09:47:48Z
dc.date.issued: 2021-11-22
dc.identifier.uri: http://localhost:8080/xmlui/handle/123456789/3227
dc.description.abstract (en_US): Generative adversarial networks (GANs) are among the most popular deep learning models for learning complex data distributions. However, training a GAN is known to be a challenging task. This is often attributed to the lack of correlation between the training progress and the trajectory of the generator and discriminator losses, and the need for subjective evaluation of the GAN. A recently proposed measure inspired by game theory, the duality gap, aims to bridge this gap. However, as we demonstrate, the duality gap's capability remains constrained due to limitations posed by its estimation process. This paper presents a theoretical understanding of this limitation and proposes a more dependable estimation process for the duality gap. At the crux of our approach is the idea that local perturbations can help agents in a zero-sum game escape non-Nash saddle points efficiently. Through exhaustive experimentation across GAN models and datasets, we establish the efficacy of our approach in capturing the GAN training progress with minimal increase in computational complexity. Further, we show that our estimate, with its ability to identify model convergence/divergence, is a potential performance measure that can be used to tune the hyperparameters of a GAN. (An illustrative sketch of the duality gap appears after this record.)
dc.language.iso (en_US): en_US
dc.title (en_US): On duality gap as a measure for monitoring GAN training
dc.type (en_US): Article
Appears in Collections: Year-2021
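The abstract above rests on the duality gap of a two-player zero-sum game: for a value function f(u, v) that the generator u minimises and the discriminator v maximises, DG(u, v) = max_{v'} f(u, v') - min_{u'} f(u', v), which is non-negative and zero exactly at a Nash equilibrium. Below is a minimal sketch of that idea on a hand-crafted quadratic game rather than an actual GAN; the game f, the gradient-step optimiser, the step size, and the perturbation scale are illustrative assumptions, not the paper's exact estimation procedure.

import numpy as np

# Toy two-player zero-sum game (illustrative, not from the paper):
# the "generator" u minimises f, the "discriminator" v maximises it.
# The unique Nash equilibrium is (u, v) = (0, 0), where the duality gap is 0.
def f(u, v):
    return u * v + 0.1 * u**2 - 0.1 * v**2

def grad_u(u, v):   # df/du
    return v + 0.2 * u

def grad_v(u, v):   # df/dv
    return u - 0.2 * v

def duality_gap(u, v, steps=200, lr=0.2, perturb=0.0, rng=None):
    """Estimate DG(u, v) = max_{v'} f(u, v') - min_{u'} f(u', v).

    Worst-case opponents are found by plain gradient ascent/descent started
    from (optionally perturbed) copies of the current parameters; the
    perturbation is the ingredient the abstract argues helps the estimate
    escape non-Nash saddle points.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    v_worst = v + perturb * rng.standard_normal()
    u_worst = u + perturb * rng.standard_normal()
    for _ in range(steps):
        v_worst += lr * grad_v(u, v_worst)   # ascend in v with u frozen
        u_worst -= lr * grad_u(u_worst, v)   # descend in u with v frozen
    return f(u, v_worst) - f(u_worst, v)

if __name__ == "__main__":
    print(duality_gap(0.0, 0.0))                # at the Nash equilibrium: ~0
    print(duality_gap(0.5, -0.3))               # away from it: positive (~0.88)
    print(duality_gap(0.5, -0.3, perturb=0.1))  # perturbed start, similar value

For a real GAN the same quantity would be estimated by training copies of the discriminator and generator for a limited number of steps, which is why the cost of the estimation process, and the "minimal increase in computational complexity" claim in the abstract, matter in practice.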

Files in This Item:
File: Full Text.pdf (4.52 MB, Adobe PDF)

