Generating Synthetic Images from Text using RNN and CNN
Abstract
Text-to-image generation aims to produce lifelike images that match given text descriptions; such visuals find use in tasks like photo editing. Advanced neural networks such as GANs have shown promise in this field. Two key requirements are that the generated images look realistic and that they accurately reflect the provided text. Despite recent progress, achieving both realism and content consistency remains challenging. To tackle this, we introduce a new model called Bridge GAN, which builds a bridge between the text and image modalities. By combining Bridge GAN with a char CNN-RNN text encoder, the system produces images with high content consistency, surpassing previous methods. In this paper we use the Flickr text and image dataset, and the proposed model outperforms state-of-the-art techniques.
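To illustrate the role of the char CNN-RNN component, the sketch below shows the general idea of a character-level text encoder: characters are one-hot encoded, passed through a convolution-style feature projection (kernel width 1 here for brevity), and summarized by a simple tanh recurrence into a fixed-size embedding that would condition the GAN generator. This is a minimal numpy illustration; all dimensions, weights, and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def one_hot(text, vocab="abcdefghijklmnopqrstuvwxyz ", max_len=32):
    # Map each character to a one-hot row; pad/truncate to max_len.
    idx = {c: i for i, c in enumerate(vocab)}
    out = np.zeros((max_len, len(vocab)))
    for t, ch in enumerate(text.lower()[:max_len]):
        if ch in idx:
            out[t, idx[ch]] = 1.0
    return out

def char_cnn_rnn_embed(text, conv_w, rnn_w, rnn_u):
    # CNN stage: per-character feature projection with ReLU
    # (a width-1 convolution; real models use wider kernels).
    x = one_hot(text)                  # (T, V)
    h = np.maximum(x @ conv_w, 0.0)    # (T, C) feature maps
    # RNN stage: simple tanh recurrence over the conv features.
    s = np.zeros(rnn_u.shape[0])
    for t in range(h.shape[0]):
        s = np.tanh(h[t] @ rnn_w + s @ rnn_u)
    return s                           # fixed-size text embedding

# Illustrative random weights (V=vocab size, C=conv channels, H=embedding dim).
rng = np.random.default_rng(0)
V, C, H = 27, 16, 8
conv_w = rng.normal(0.0, 0.1, (V, C))
rnn_w = rng.normal(0.0, 0.1, (C, H))
rnn_u = rng.normal(0.0, 0.1, (H, H))

emb = char_cnn_rnn_embed("a small bird with red wings", conv_w, rnn_w, rnn_u)
print(emb.shape)  # (8,) -- this vector would condition the GAN generator
```

In a full system, this embedding would be concatenated with a noise vector as input to the Bridge GAN generator, and the discriminator would score image-embedding pairs for content consistency.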