@prefix config: .
@prefix meta: .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix dcmitype: <http://purl.org/dc/dcmitype/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
@prefix om: .
@prefix locn: <http://www.w3.org/ns/locn#> .
@prefix schema: <http://schema.org/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix dbpedia: <http://dbpedia.org/resource/> .
@prefix p: <http://dbpedia.org/property/> .
@prefix yago: <http://dbpedia.org/class/yago/> .
@prefix units: .
@prefix geonames: <http://www.geonames.org/ontology#> .
@prefix prv: <http://purl.org/net/provenance/ns#> .
@prefix prvTypes: <http://purl.org/net/provenance/types#> .
@prefix doap: <http://usefulinc.com/ns/doap#> .
@prefix void: <http://rdfs.org/ns/void#> .
@prefix ir: .
@prefix ou: .
@prefix teach: <http://linkedscience.org/teach/ns#> .
@prefix time: <http://www.w3.org/2006/time#> .
@prefix datex: .
@prefix aiiso: <http://purl.org/vocab/aiiso/schema#> .
@prefix vivo: <http://vivoweb.org/ontology/core#> .
@prefix bibo: <http://purl.org/ontology/bibo/> .
@prefix fabio: <http://purl.org/spar/fabio/> .
@prefix vcard: <http://www.w3.org/2006/vcard/ns#> .
@prefix swrcfe: .
@prefix frapo: <http://purl.org/cerif/frapo/> .
@prefix org: <http://www.w3.org/ns/org#> .
@prefix ei2a: .
@prefix pto: <http://www.productontology.org/id/> .

ou:vecesCitado "0" ;
    a ou:Publicacion ;
    ou:tipoPublicacion "Conference Paper" ;
    vcard:url ;
    bibo:doi "10.1109/itsc57777.2023.10421811" ;
    dcterms:publisher "IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC" ;
    dcterms:creator "Rodriguez-Criado D." ;
    ou:urlOrcid ;
    ou:urlScopus ;
    bibo:issn "2153-0009" ;
    bibo:page_range "3361-3368" ;
    dcterms:title "Synthesizing Traffic Datasets Using Graph Neural Networks" ;
    ou:eid "2-s2.0-85186532385" ;
    dcterms:contributor "Rodriguez-Criado D., Chli M., Manso L.J., Vogiatzis G." ;
    vivo:identifier "2023-5192" ;
    fabio:hasPublicationYear "2023" ;
    bibo:eissn "2153-0017" ;
    bibo:isbn "9798350399462" ;
    ou:bibtex "@inproceedings{745dc7e98032481d8b402eee2b914d13,\n title = 'Synthesizing Traffic Datasets using Graph Neural Networks',\n abstract = 'Traffic congestion in urban areas presents significant challenges, and Intelligent Transportation Systems (ITS) have sought to address these via automated and adaptive controls. However, these systems often struggle to transfer simulated experiences to real-world scenarios. This paper introduces a novel methodology for bridging this `sim-real' gap by creating photorealistic images from 2D traffic simulations and recorded junction footage. We propose a novel image generation approach, integrating a Conditional Generative Adversarial Network with a Graph Neural Network (GNN) to facilitate the creation of realistic urban traffic images. We harness GNNs' ability to process information at different levels of abstraction alongside segmented images for preserving locality data. The presented architecture leverages the power of SPADE and Graph ATtention (GAT) network models to create images based on simulated traffic scenarios. These images are conditioned by factors such as entity positions, colors, and time of day. The uniqueness of our approach lies in its ability to effectively translate structured and human-readable conditions, encoded as graphs, into realistic images. This advancement contributes to applications requiring rich traffic image datasets, from data augmentation to urban traffic solutions. We further provide an application to test the model's capabilities, including generating images with manually defined positions for various entities.',\n author = 'Daniel Rodriguez-Criado and Maria Chli and Manso, {Luis J.} and George Vogiatzis',\n note = 'Copyright {\\textcopyright} 2023, IEEE. This accepted manuscript is made available under the Creative Commons Attribution-ShareAlike License (https://creativecommons.org/licenses/by-sa/4.0/); 26th IEEE International Conference on Intelligent Transportation Systems, ITSC 2023 ; Conference date: 24-09-2023 Through 28-09-2023',\n year = '2024',\n month = feb,\n day = '13',\n doi = '10.1109/ITSC57777.2023.10421811',\n language = 'English',\n pages = '3361--3368',\n booktitle = 'Proceedings of IEEE 26th International Conference on Intelligent Transportation Systems ITSC',\n publisher = 'IEEE',\n address = 'United States',\n}" .

ou:tienePublicacion .