
A 4D information system for the exploration of multitemporal images and maps using photogrammetry, web technologies and VR/AR

RiuNet: Institutional Repository of the Universidad Politécnica de Valencia


dc.contributor.author Maiwald, Ferdinand es_ES
dc.contributor.author Bruschke, Jonas es_ES
dc.contributor.author Lehmann, Christoph es_ES
dc.contributor.author Niebling, Florian es_ES
dc.date.accessioned 2019-07-26T09:45:03Z
dc.date.available 2019-07-26T09:45:03Z
dc.date.issued 2019-07-25
dc.identifier.uri http://hdl.handle.net/10251/124242
dc.description.abstract [EN] This contribution shows the comparison, investigation, and implementation of different access strategies for multimodal data. The first part of the research is structured as a theoretical part opposing and explaining the terms conventional access, virtual archival access, and virtual museums, while additionally referencing related work. In particular, issues that still persist in repositories, such as ambiguous or missing metadata, are pointed out. The second part explains the practical implementation of a workflow from a large image repository to various four-dimensional (4D) applications. Mainly, the filtering of images and, subsequently, their orientation are explained. The selection of relevant images is partly done manually, but also with the use of deep convolutional neural networks for image classification. Subsequently, photogrammetric methods are used to find the relative orientation between image pairs in a projective frame. For this purpose, an adapted Structure from Motion (SfM) workflow is presented, in which the step of feature detection and matching is replaced by the Radiation-Invariant Feature Transform (RIFT) and Matching On Demand with View Synthesis (MODS). Both methods have been evaluated on a benchmark dataset and outperformed other approaches. Subsequently, the oriented images are placed interactively, and in the future automatically, in a 4D browser application showing images, maps, and building models. Further usage scenarios are presented in several Virtual Reality (VR) and Augmented Reality (AR) applications. The new representation of the archival data enables spatial and temporal browsing of repositories, allowing research from innovative perspectives and the uncovering of historical details. Highlights: (1) Strategies for a completely automated workflow from image repositories to 4D access approaches. (2) The orientation of historical images using adapted and evaluated feature matching methods. (3) 4D access methods for historical images and 3D models using web technologies and VR/AR. es_ES
dc.description.abstract [ES] Esta contribución muestra la comparación, investigación e implementación de diferentes estrategias de acceso a datos multimodales. La primera parte de la investigación se estructura en una parte teórica en la que se oponen y explican los términos de acceso convencional, acceso a los archivos virtuales, y museos virtuales, a la vez que se hace referencia a trabajos relacionados. En especial, se señalan los problemas que aún persisten en los repositorios, como la ambigüedad o la falta de metadatos. La segunda parte explica la implementación práctica de un flujo de trabajo desde un gran repositorio de imágenes a varias aplicaciones en cuatro dimensiones (4D). Principalmente, se explica el filtrado de imágenes y, a continuación, la orientación de las mismas. La selección de las imágenes relevantes se hace en parte manualmente, pero también con el uso de redes neuronales convolucionales profundas para la clasificación de las imágenes. A continuación, se utilizan métodos fotogramétricos para encontrar la orientación relativa entre pares de imágenes en un marco proyectivo. Para ello, se presenta un flujo de trabajo adaptado a partir de Structure from Motion (SfM), en el que el paso de la detección y la correspondencia de entidades es sustituido por la Transformación de entidades invariante a la radiancia (Radiation-Invariant Feature Transform, RIFT) y la Correspondencia a demanda con vistas sintéticas (Matching On Demand with View Synthesis, MODS). Ambos métodos han sido evaluados sobre la base de un conjunto de datos de referencia y funcionaron mejor que otros procedimientos. Posteriormente, las imágenes orientadas se colocan interactivamente, y en el futuro automáticamente, en una aplicación de navegador 4D que muestra imágenes, mapas y modelos de edificios. Otros escenarios de uso se presentan en varias aplicaciones de Realidad Virtual (RV) y Realidad Aumentada (RA). La nueva representación de los datos archivados permite la navegación espacial y temporal de los repositorios, lo que permite la investigación desde perspectivas innovadoras y el descubrimiento de detalles históricos. es_ES
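The selection step described in the abstract (filtering relevant photographs out of a large repository with a deep convolutional neural network for image classification) can be sketched roughly as follows. This is a minimal illustration only: the VGG16 backbone (chosen because Simonyan & Zisserman is among the references), the two assumed classes, and the helper name is_relevant are assumptions, not the authors' actual classifier.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    # Hypothetical binary classifier: "relevant historical building view" vs. "other".
    # The real network, training data, and labels used in the paper are not given in
    # this record; the replaced head would still need fine-tuning on labelled images.
    model = models.vgg16(pretrained=True)
    model.classifier[6] = nn.Linear(4096, 2)  # swap the 1000-class head for 2 classes
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def is_relevant(path: str) -> bool:
        """Classify one archival image; class index 1 is assumed to mean 'relevant'."""
        img = Image.open(path).convert("RGB")
        with torch.no_grad():
            logits = model(preprocess(img).unsqueeze(0))
        return int(logits.argmax(dim=1)) == 1

Images that pass this filter would then be handed to the photogrammetric orientation step.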
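The subsequent pairwise orientation step estimates the relative geometry of two historical images in a projective frame. A minimal sketch with OpenCV follows; note that the paper replaces the feature detection and matching stage with RIFT and MODS, which are not part of standard libraries, so SIFT and a brute-force matcher stand in here purely to illustrate the pipeline, and the function name and thresholds are assumptions.

    import cv2
    import numpy as np

    def relative_orientation(path_a: str, path_b: str):
        """Estimate the fundamental matrix between two images, i.e. their relative
        orientation in a projective frame."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

        # Feature detection and description (SIFT as a stand-in for RIFT/MODS)
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)

        # Nearest-neighbour matching with Lowe's ratio test
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(des_a, des_b, k=2)
        good = [m for m, n in knn if m.distance < 0.75 * n.distance]

        pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

        # Robust estimation of the epipolar geometry with RANSAC
        F, inlier_mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
        n_inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
        return F, n_inliers

Oriented image pairs of this kind are what an adapted SfM workflow chains together before the images are placed in the 4D browser, VR, and AR applications described above.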
dc.description.sponsorship The research upon which this paper is based is part of the activities of the junior research group UrbanHistory4D, which has received funding from the German Federal Ministry of Education and Research under grant agreement No. 01UG1630. This work was also supported by the German Federal Ministry of Education and Research (BMBF, 01IS18026BA-F) through funding of the competence center for Big Data "ScaDS Dresden/Leipzig". es_ES
dc.language Inglés es_ES
dc.publisher Universitat Politècnica de València
dc.relation.ispartof Virtual Archaeology Review
dc.rights Reconocimiento - No comercial - Sin obra derivada (by-nc-nd) es_ES
dc.subject Fotografías y mapas históricos es_ES
dc.subject Fotogrametría es_ES
dc.subject Repositorio es_ES
dc.subject Documentación es_ES
dc.subject Sistema de Información Geográfica (SIG) es_ES
dc.subject Orientación y correspondencia es_ES
dc.subject Historical photographs and maps es_ES
dc.subject Photogrammetry es_ES
dc.subject Orientation es_ES
dc.subject Documentation es_ES
dc.subject GIS es_ES
dc.subject Augmented Reality es_ES
dc.title Un sistema de información 4D para la exploración de imágenes y mapas multitemporales utilizando fotogrametría, tecnologías web y VR/AR es_ES
dc.title.alternative A 4D information system for the exploration of multitemporal images and maps using photogrammetry, web technologies and VR/AR es_ES
dc.type Artículo es_ES
dc.date.updated 2019-07-26T08:16:45Z
dc.identifier.doi 10.4995/var.2019.11867
dc.rights.accessRights Abierto es_ES
dc.description.bibliographicCitation Maiwald, F.; Bruschke, J.; Lehmann, C.; Niebling, F. (2019). Un sistema de información 4D para la exploración de imágenes y mapas multitemporales utilizando fotogrametría, tecnologías web y VR/AR. Virtual Archaeology Review. 10(21):1-13. https://doi.org/10.4995/var.2019.11867 es_ES
dc.description.accrualMethod SWORD es_ES
dc.relation.publisherversion https://doi.org/10.4995/var.2019.11867 es_ES
dc.description.upvformatpinicio 1 es_ES
dc.description.upvformatpfin 13 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 10
dc.description.issue 21
dc.identifier.eissn 1989-9947
dc.description.references Ackerman, A., & Glekas, E. (2017). Digital Capture and Fabrication Tools for Interpretation of Historic Sites. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-2/W2, 107-114. doi:10.5194/isprs-annals-IV-2-W2-107-2017 es_ES
dc.description.references Armingeon, M., Komani, P., Zanwar, T., Korkut, S., & Dornberger, R. (2019). A Case Study: Assessing Effectiveness of the Augmented Reality Application in Augusta Raurica Augmented Reality and Virtual Reality (pp. 99-111): Springer. es_ES
dc.description.references Artstor. (2019). Artstor Digital Library. Retrieved April 30, 2019, from https://library.artstor.org es_ES
dc.description.references Bay, H., Tuytelaars, T., & Van Gool, L. (2006). SURF: Speeded Up Robust Features. Paper presented at the European Conference on Computer Vision, Berlin, Heidelberg. es_ES
dc.description.references Beaudoin, J. E., & Brady, J. E. (2011). Finding visual information: a study of image resources used by archaeologists, architects, art historians, and artists. Art Documentation: Journal of the Art Libraries Society of North America, 30(2), 24-36. es_ES
dc.description.references Beltrami, C., Cavezzali, D., Chiabrando, F., Iaccarino Idelson, A., Patrucco, G., & Rinaudo, F. (2019). 3D Digital and Physical Reconstruction of a Collapsed Dome using SFM Techniques from Historical Images. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W11, 217-224. doi:10.5194/isprs-archives-XLII-2-W11-217-2019 es_ES
dc.description.references Bevilacqua, M. G., Caroti, G., Piemonte, A., & Ulivieri, D. (2019). Reconstruction of lost Architectural Volumes by Integration of Photogrammetry from Archive Imagery with 3-D Models of the Status Quo. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W9, 119-125. doi:10.5194/isprs-archives-XLII-2-W9-119-2019 es_ES
dc.description.references Bitelli, G., Dellapasqua, M., Girelli, V. A., Sbaraglia, S., & Tinia, M. A. (2017). Historical Photogrammetry and Terrestrial Laser Scanning for the 3d Virtual Reconstruction of Destroyed Structures: A Case Study in Italy. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-5/W1, 113-119. doi:10.5194/isprs-archives-XLII-5-W1-113-2017 es_ES
dc.description.references Bruschke, J., Niebling, F., Maiwald, F., Friedrichs, K., Wacker, M., & Latoschik, M. E. (2017). Towards browsing repositories of spatially oriented historic photographic images in 3D web environments. Paper presented at the Proceedings of the 22nd International Conference on 3D Web Technology. es_ES
dc.description.references Bruschke, J., Niebling, F., & Wacker, M. (2018). Visualization of Orientations of Spatial Historical Photographs. Paper presented at the Eurographics Workshop on Graphics and Cultural Heritage. es_ES
dc.description.references Bruschke, J., & Wacker, M. (2014). Application of a Graph Database and Graphical User Interface for the CIDOC CRM. Paper presented at the Access and Understanding-Networking in the Digital Era. Session J1. The 2014 annual conference of CIDOC, the International Committee for Documentation of ICOM. es_ES
dc.description.references Burdea, G. C., & Coiffet, P. (2003). Virtual reality technology: John Wiley & Sons. es_ES
dc.description.references Callieri, M., Cignoni, P., Corsini, M., & Scopigno, R. (2008). Masked photo blending: Mapping dense photographic data set on high-resolution sampled 3D models. Computers & Graphics, 32(4), 464-473. es_ES
dc.description.references Chum, O., & Matas, J. (2005). Matching with PROSAC-progressive sample consensus. Paper presented at the Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. es_ES
dc.description.references Coordination and Support Action Virtual Multimodal Museum (ViMM). (2018). ViMM. Retrieved April 30, 2019, from https://www.vi-mm.eu/ es_ES
dc.description.references CultLab3D. (2019). CultLab3D. Retrieved April 30, 2019, from https://www.cultlab3d.de es_ES
dc.description.references Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. Paper presented at the 2009 IEEE conference on computer vision and pattern recognition. es_ES
dc.description.references Deutsches Archäologisches Institut (DAI). (2019). iDAI.objects arachne (Arachne). Retrieved April 30, 2019, from https://arachne.dainst.org/ es_ES
dc.description.references Efron, B., & Tibshirani, R. J. (1994). An introduction to the bootstrap: CRC press. es_ES
dc.description.references Europeana. (2019). Europeana Collections. Retrieved 30.04.2019, from https://www.europeana.eu es_ES
dc.description.references Evens, T., & Hauttekeete, L. (2011). Challenges of digital preservation for cultural heritage institutions. Journal of Librarianship and Information Science, 43(3), 157-165. es_ES
dc.description.references Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381-395. es_ES
dc.description.references Fleming‐May, R. A., & Green, H. (2016). Digital innovations in poetry: Practices of creative writing faculty in online literary publishing. Journal of the Association for Information Science and Technology, 67(4), 859-873. es_ES
dc.description.references Franken, T., Dellepiane, M., Ganovelli, F., Cignoni, P., Montani, C., & Scopigno, R. (2005). Minimizing user intervention in registering 2D images to 3D models. The visual computer, 21(8-10), 619-628. es_ES
dc.description.references Girardi, G., von Schwerin, J., Richards-Rissetto, H., Remondino, F., & Agugiaro, G. (2013). The MayaArch3D project: A 3D WebGIS for analyzing ancient architecture and landscapes. Literary and Linguistic Computing, 28(4), 736-753. doi:10.1093/llc/fqt059 es_ES
dc.description.references Grussenmeyer, P., & Al Khalil, O. (2017). From Metric Image Archives to Point Cloud Reconstruction: Case Study of the Great Mosque of Aleppo in Syria. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-2/W5, 295-301. doi:10.5194/isprs-archives-XLII-2-W5-295-2017 es_ES
dc.description.references Gutierrez, M., Vexo, F., & Thalmann, D. (2008). Stepping into virtual reality: Springer Science & Business Media. es_ES
dc.description.references Guttentag, D. A. (2010). Virtual reality: Applications and implications for tourism. Tourism Management, 31(5), 637-651. es_ES
dc.description.references Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision: Cambridge university press. es_ES
dc.description.references Koutsoudis, A., Arnaoutoglou, F., Tsaouselis, A., Ioannakis, G., & Chamzas, C. (2015). Creating 3D Replicas of Medium-to Large-Scale Monuments for Web-Based Dissemination Within the Framework of the 3D-Icons Project. CAA2015, 971. es_ES
dc.description.references Li, J., Hu, Q., & Ai, M. (2018). RIFT: Multi-modal Image Matching Based on Radiation-invariant Feature Transform. arXiv preprint arXiv:1804.09493. es_ES
dc.description.references Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2), 91-110. es_ES
dc.description.references Maietti, F., Di Giulio, R., Piaia, E., Medici, M., & Ferrari, F. (2018). Enhancing Heritage fruition through 3D semantic modelling and digital tools: the INCEPTION project. Paper presented at the IOP Conference Series: Materials Science and Engineering. es_ES
dc.description.references Maiwald, F., Schneider, D., Henze, F., Münster, S., & Niebling, F. (2018). Feature Matching of Historical Images Based on Geometry of Quadrilaterals. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2, 643-650. doi:10.5194/isprs-archives-XLII-2-643-2018 es_ES
dc.description.references Maiwald, F., Vietze, T., Schneider, D., Henze, F., Münster, S., & Niebling, F. (2017). Photogrammetric analysis of historical image repositories for virtual reconstruction in the field of digital humanities. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 42, 447. es_ES
dc.description.references Matas, J., Chum, O., Urban, M., & Pajdla, T. (2004). Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing, 22(10), 761-767. es_ES
dc.description.references Melero, F. J., Revelles, J., & Bellido, M. L. (2018). Atalaya3D: making universities' cultural heritage accessible through 3D technologies. es_ES
dc.description.references Milgram, P., Takemura, H., Utsumi, A., & Kishino, F. (1995). Augmented reality: A class of displays on the reality-virtuality continuum. Paper presented at the Telemanipulator and telepresence technologies. es_ES
dc.description.references Mishkin, D., Matas, J., & Perdoch, M. (2015). MODS: Fast and robust method for two-view matching. Computer Vision and Image Understanding, 141, 81-93. es_ES
dc.description.references Moulon, P., Monasse, P., & Marlet, R. (2012). Adaptive structure from motion with a contrario model estimation. Paper presented at the Asian Conference on Computer Vision. es_ES
dc.description.references Münster, S., Kamposiori, C., Friedrichs, K., & Kröber, C. (2018). Image libraries and their scholarly use in the field of art and architectural history. International journal on digital libraries, 19(4), 367-383. es_ES
dc.description.references Niebling, F., Bruschke, J., & Latoschik, M. E. (2018). Browsing Spatial Photography for Dissemination of Cultural Heritage Research Results using Augmented Models. es_ES
dc.description.references Niebling, F., Maiwald, F., Barthel, K., & Latoschik, M. E. (2017). 4D Augmented City Models, Photogrammetric Creation and Dissemination Digital Research and Education in Architectural Heritage (pp. 196-212). Cham: Springer International Publishing. es_ES
dc.description.references Oliva, L. S., Mura, A., Betella, A., Pacheco, D., Martinez, E., & Verschure, P. (2015). Recovering the history of Bergen Belsen using an interactive 3D reconstruction in a mixed reality space the role of pre-knowledge on memory recollection. Paper presented at the 2015 Digital Heritage. es_ES
dc.description.references Pani Paudel, D., Habed, A., Demonceaux, C., & Vasseur, P. (2015). Robust and optimal sum-of-squares-based point-to-plane registration of image sets and structured scenes. Paper presented at the Proceedings of the IEEE International Conference on Computer Vision. es_ES
dc.description.references Ross, S., & Hedstrom, M. (2005). Preservation research and sustainable digital libraries. International journal on digital libraries, 5(4), 317-324. es_ES
dc.description.references Schindler, G., & Dellaert, F. (2012). 4D Cities: Analyzing, Visualizing, and Interacting with Historical Urban Photo Collections. Journal of Multimedia, 7(2), 124-131. es_ES
dc.description.references Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. Paper presented at the Proceedings of the IEEE International Conference on Computer Vision. es_ES
dc.description.references Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. es_ES
dc.description.references Slater, M., & Sanchez-Vives, M. V. (2016). Enhancing our lives with immersive virtual reality. Frontiers in Robotics and AI, 3, 74. es_ES
dc.description.references Styliani, S., Fotis, L., Kostas, K., & Petros, P. (2009). Virtual museums, a survey and some issues for consideration. Journal of cultural Heritage, 10(4), 520-528. es_ES
dc.description.references Tschirschwitz, F., Büyüksalih, G., Kersten, T., Kan, T., Enc, G., & Baskaraca, P. (2019). Virtualising an Ottoman Fortress - Laser Scanning and 3D Modelling for the Development of an Interactive, Immersive Virtual Reality Application. International archives of the photogrammetry, remote sensing and spatial information sciences, 42(2/W9). es_ES
dc.description.references Web3D Consortium. (2019). Open Standards for Real-Time 3D Communication. Retrieved April 30, 2019, from http://www.web3d.org/ es_ES
dc.description.references Wu, C. (2013). Towards linear-time incremental structure from motion. Paper presented at the 3D Vision-3DV 2013, 2013 International conference on. es_ES
dc.description.references Wu, Y., Ma, W., Gong, M., Su, L., & Jiao, L. (2015). A Novel Point-Matching Algorithm Based on Fast Sample Consensus for Image Registration. IEEE Geosci. Remote Sensing Lett., 12(1), 43-47. es_ES
dc.description.references Yoon, J., & Chung, E. (2011). Understanding image needs in daily life by analyzing questions in a social Q&A site. Journal of the American Society for Information Science and Technology, 62(11), 2201-2213. es_ES

