Generative adversarial network based synthetic data training model for lightweight convolutional neural networks.
Ishfaq Hussain Rather, Sushil Kumar
Published in: Multimedia Tools and Applications (2023)
Inadequate training data is a significant challenge for deep learning techniques, particularly in applications where data is difficult to obtain and publicly available datasets are scarce owing to ethical and privacy concerns. Approaches such as data augmentation and transfer learning are commonly employed and mitigate this limitation to some extent. However, beyond a certain point the quality of augmented data stalls, and transfer learning suffers from negative transfer. This paper proposes a novel generative adversarial network-based synthetic data training (GAN-ST) model that generates synthetic data for training a lightweight convolutional neural network (CNN). An enhanced generator is proposed that quickly saturates and covers the colour space of the training distribution. The GAN-ST model is built on the Deep Convolutional Generative Adversarial Network (DCGAN) and Conditional Generative Adversarial Network (CGAN) architectures, each equipped with the enhanced generator. The study evaluates the accuracy of a CNN classifier on the MNIST and CIFAR-10 datasets when trained on original data and on synthetic data. On MNIST, the classifier trained on GAN-ST-generated synthetic data achieves an impressive 99.38% accuracy, only 0.05% lower than training on the original data. On CIFAR-10, the classifier achieves a remarkable 90.23% accuracy. Compared with training on synthetic data from a single GAN, GAN-ST-based training improves accuracy by up to 0.66% on MNIST and 7.06% on CIFAR-10. Because the two GANs are trained independently, the GAN-ST model covers different parts of the original data distribution, yielding a more diverse and realistic training dataset for the classifier. A CNN trained on this diverse synthetic data generalizes better to new data, leading to improved classification accuracy.
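A minimal sketch of the pipeline described above is given below, assuming PyTorch and MNIST. It trains two simplified GANs independently, one unconditional (DCGAN-style) and one class-conditional (CGAN-style), pools their synthetic samples, and trains a lightweight CNN on the pooled set. All architectures, hyperparameters, and the labelling of the unconditional generator's samples are placeholder assumptions; the paper's enhanced generator is not reproduced here.

```python
# Minimal GAN-ST-style sketch (assumptions: PyTorch, MNIST, small fully connected
# generators/discriminators instead of the paper's enhanced DCGAN/CGAN networks).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
z_dim, n_classes = 64, 10

class Gen(nn.Module):
    """Toy generator; cond=True appends a one-hot label (CGAN-style)."""
    def __init__(self, cond=False):
        super().__init__()
        self.cond = cond
        in_dim = z_dim + (n_classes if cond else 0)
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 28 * 28), nn.Tanh())
    def forward(self, z, y=None):
        if self.cond:
            z = torch.cat([z, F.one_hot(y, n_classes).float()], dim=1)
        return self.net(z).view(-1, 1, 28, 28)

class Disc(nn.Module):
    """Toy discriminator shared by both GANs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256),
                                 nn.LeakyReLU(0.2), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

def train_gan(gen, loader, epochs=1):
    """Standard adversarial training loop; each GAN is trained independently."""
    disc = Disc().to(device)
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for real, labels in loader:
            real, labels = real.to(device), labels.to(device)
            z = torch.randn(real.size(0), z_dim, device=device)
            fake = gen(z, labels) if gen.cond else gen(z)
            ones = torch.ones(real.size(0), 1, device=device)
            zeros = torch.zeros(real.size(0), 1, device=device)
            # Discriminator step: real -> 1, fake -> 0.
            d_loss = bce(disc(real), ones) + bce(disc(fake.detach()), zeros)
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # Generator step: try to fool the discriminator.
            g_loss = bce(disc(fake), ones)
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return gen

tfm = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
mnist = datasets.MNIST("data", train=True, download=True, transform=tfm)
loader = DataLoader(mnist, batch_size=128, shuffle=True)

dcgan_g = train_gan(Gen(cond=False).to(device), loader)   # first GAN (unconditional)
cgan_g  = train_gan(Gen(cond=True).to(device), loader)    # second GAN (conditional)

# Pool synthetic samples from both generators into one training set.
# NOTE: the abstract does not say how samples from the unconditional GAN are
# labelled; reusing the CGAN's requested labels below is only a placeholder.
n = 5000
y = torch.randint(0, n_classes, (n,), device=device)
with torch.no_grad():
    x_cgan = cgan_g(torch.randn(n, z_dim, device=device), y)
    x_dcgan = dcgan_g(torch.randn(n, z_dim, device=device))
synth = TensorDataset(torch.cat([x_cgan, x_dcgan]).cpu(), torch.cat([y, y]).cpu())

# Lightweight CNN trained purely on the pooled synthetic data.
cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(), nn.Linear(16 * 14 * 14, n_classes)).to(device)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
for xb, yb in DataLoader(synth, batch_size=128, shuffle=True):
    loss = F.cross_entropy(cnn(xb.to(device)), yb.to(device))
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point the sketch tries to reflect is that the two generators are trained independently and only their outputs are combined, which is what the abstract credits for the greater diversity of the synthetic training set; the paper's actual enhanced DCGAN and CGAN generators, training schedules, and evaluation protocol differ from these placeholders.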