Sketch2Tooncity: sketch-based city generation using neurosymbolic model
2 May 2024
Junya Kanda, Yi He, Haoran Xie, Kazunori Miyata
Proceedings Volume 13164, International Workshop on Advanced Imaging Technology (IWAIT) 2024; 1316424 (2024) https://doi.org/10.1117/12.3018858
Event: International Workshop on Advanced Imaging Technology (IWAIT) 2024, 2024, Langkawi, Malaysia
Abstract
In this study, we propose an efficient city-generation method based on user sketches. The proposed framework combines a Conditional Generative Adversarial Network (cGAN) with procedural modeling, which we call the Neurosymbolic Model. Because cGAN training requires linked input-output pairs, we first generate buildings of random height using Perlin noise to form the training city data, and then extract the building contours by morphological transformation. The cGAN is trained on pairs of height maps created from this city data and the sketches extracted by morphological transformation, allowing users to generate diverse and satisfying cities from freehand sketches.
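To illustrate the data-preparation step described in the abstract, the following Python sketch shows how one training pair might be constructed: a Perlin-noise height map as the target and a contour image, obtained here with a morphological gradient, as the conditioning sketch. This is a minimal sketch, not the authors' code; the libraries (noise, OpenCV, NumPy) and all parameter values are assumptions for illustration.

    # Minimal sketch of building one (sketch, height map) training pair.
    # Assumed libraries and parameters; not taken from the paper.
    import numpy as np
    import cv2
    from noise import pnoise2  # pip install noise

    SIZE = 256          # assumed map resolution
    SCALE = 32.0        # assumed Perlin noise frequency
    HEIGHT_LEVELS = 8   # assumed number of discrete building heights

    # 1. Height map: sample 2D Perlin noise and quantize it into building heights.
    height_map = np.zeros((SIZE, SIZE), dtype=np.float32)
    for y in range(SIZE):
        for x in range(SIZE):
            height_map[y, x] = pnoise2(x / SCALE, y / SCALE, octaves=4)
    height_map = (height_map - height_map.min()) / (np.ptp(height_map) + 1e-8)
    height_map = np.floor(height_map * HEIGHT_LEVELS) / HEIGHT_LEVELS

    # 2. Sketch: extract building contours with a morphological gradient
    #    (dilation minus erosion), approximating a freehand line drawing.
    img8 = (height_map * 255).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    sketch = cv2.morphologyEx(img8, cv2.MORPH_GRADIENT, kernel)
    sketch = (sketch > 0).astype(np.uint8) * 255  # binarize to line-art style

    # 3. The (sketch, height_map) pair would then serve as one cGAN training
    #    sample: sketch as the condition, height map as the target.
    cv2.imwrite("height_map.png", img8)
    cv2.imwrite("sketch.png", sketch)

In such a pipeline, the trained cGAN would map a freehand sketch to a height map, which procedural modeling could then turn into 3D building geometry.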
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Junya Kanda, Yi He, Haoran Xie, and Kazunori Miyata "Sketch2Tooncity: sketch-based city generation using neurosymbolic model", Proc. SPIE 13164, International Workshop on Advanced Imaging Technology (IWAIT) 2024, 1316424 (2 May 2024); https://doi.org/10.1117/12.3018858