
Investigation of generative capacity related to DCGANs across varied discriminator architectures and parameter counts: A comparative study

Fan Yi * 1
1 Fudan University

* Author to whom correspondence should be addressed.

Applied and Computational Engineering, Vol. 52, 1-7
Published 27 March 2024. © 2024 The Author(s). Published by EWA Publishing
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Citation: Fan Yi. Investigation of generative capacity related to DCGANs across varied discriminator architectures and parameter counts: A comparative study. ACE (2024) Vol. 52: 1-7. DOI: 10.54254/2755-2721/52/20241111.

Abstract

Generating lifelike images with generative models remains a significant challenge, and Generative Adversarial Networks (GANs), particularly Deep Convolutional GANs (DCGANs), are commonly employed for image synthesis. This study alters the structure and parameter count of the DCGAN discriminator and investigates how these changes affect the characteristics of the generated images. The models are assessed with the Fréchet Inception Distance (FID) score, a metric that gauges the quality of generated image samples. Specifically, some convolutional layers are replaced with fully-connected layers, and the resulting outputs are compared to discern the impact of these structural changes. Furthermore, dropout is applied to study the influence of the parameter count, comparing the FID scores of models with dropout probabilities of 0, 0.2, 0.4, 0.6, and 0.8. Experimental results show that the DCGAN with fully-connected layers has a stronger generative ability than the original, and that the generated images are most realistic when the dropout probability is 0.6. Finally, the paper discusses possible reasons for these differences and proposes an improved generative model based on DCGAN.
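For concreteness, the sketch below illustrates the kind of discriminator variant the abstract describes: a DCGAN-style discriminator whose final convolutional stage is replaced by a fully-connected head, with dropout applied at a configurable probability. This is a minimal PyTorch sketch under assumed settings (64x64 RGB inputs, the particular layer widths, and the placement of the dropout layer are illustrative choices), not the authors' exact architecture.

```python
# Minimal sketch (assumptions noted): a DCGAN-style discriminator where the
# last convolutional stage is replaced by a fully-connected layer, with
# dropout probability p as the studied variable (p in {0, 0.2, 0.4, 0.6, 0.8}).
# Layer widths and the 64x64 input resolution are illustrative, not the
# paper's exact configuration.
import torch
import torch.nn as nn

class FCDiscriminator(nn.Module):
    def __init__(self, num_channels: int = 3, base_width: int = 64, dropout_p: float = 0.6):
        super().__init__()
        # Standard DCGAN-style strided convolutions (64x64 -> 8x8 feature maps).
        self.features = nn.Sequential(
            nn.Conv2d(num_channels, base_width, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_width, base_width * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_width * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_width * 2, base_width * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base_width * 4),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Fully-connected head replacing the remaining convolutional layers,
        # with dropout controlling how many parameters are active per step.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=dropout_p),
            nn.Linear(base_width * 4 * 8 * 8, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    # Smoke test: a batch of four 64x64 RGB images -> four real/fake scores.
    d = FCDiscriminator(dropout_p=0.6)
    scores = d(torch.randn(4, 3, 64, 64))
    print(scores.shape)  # torch.Size([4, 1])
```

In a setup like this, each dropout probability (0, 0.2, 0.4, 0.6, 0.8) would correspond to a separately trained GAN whose generated samples are then scored with FID against the real data, which is how the comparison in the abstract can be read.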

Keywords

Generative Adversarial Network, Deep Convolutional Generative Adversarial Network, Discriminator, Dropout


Data Availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Authors who publish in this series agree to the following terms:

1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.

2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.

3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open Access Instruction).

Volume Title: Proceedings of the 4th International Conference on Signal Processing and Machine Learning
ISBN (Print): 978-1-83558-349-4
ISBN (Online): 978-1-83558-350-0
Published Date: 27 March 2024
Series: Applied and Computational Engineering
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
DOI: 10.54254/2755-2721/52/20241111
Copyright: © 2024 The Author(s)
Open Access: Distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
