publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2025
-
Enhancing LiDAR point cloud generation with BRDF-based appearance modelling
Alfonso López, Carlos J. Ogayar, Rafael J. Segura, and 1 more author
ISPRS Journal of Photogrammetry and Remote Sensing, Apr 2025
This work presents an approach to generating LiDAR point clouds with empirical intensity data on a massively parallel scale. Our primary aim is to complement existing real-world LiDAR datasets by simulating a wide spectrum of attributes, ensuring our generated data can be directly compared to real point clouds. However, our emphasis lies in intensity data, which conventionally has been generated using non-photorealistic shading functions. In contrast, we represent surfaces with Bidirectional Reflectance Distribution Functions (BRDF) obtained through goniophotometer measurements. We also incorporate refractivity indices derived from prior research. Beyond this, we simulate other attributes commonly found in LiDAR datasets, including RGB values, normal vectors, GPS timestamps, semantic labels, instance IDs, and return data. Our simulations extend beyond terrestrial scenarios; we encompass mobile and aerial scans as well. Our results demonstrate the efficiency of our solution compared to other state-of-the-art simulators, achieving an average decrease in simulation time of 85.62%. Notably, our approach introduces greater variability in the generated intensity data, accounting for material properties and variations caused by the incident and viewing vectors. The source code is available on GitHub (https://github.com/AlfonsoLRz/LiDAR_BRDF).
@article{lopez_enhancing_2025, title = {Enhancing {LiDAR} point cloud generation with {BRDF}-based appearance modelling}, volume = {222}, issn = {0924-2716}, url = {https://www.sciencedirect.com/science/article/pii/S0924271625000607}, doi = {10.1016/j.isprsjprs.2025.02.010}, urldate = {2025-03-27}, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, author = {López, Alfonso and Ogayar, Carlos J. and Segura, Rafael J. and Casas-Rosa, Juan C.}, month = apr, year = {2025}, keywords = {LiDAR simulation, Bidirectional reflectance distribution function, Graphics processing unit, Virtual laser scanner}, pages = {79--98}, } -
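The key shift described in this abstract is from analytic shading to measured reflectance when computing return intensity. As a rough illustration of that idea only, the following Python sketch derives a single return's intensity from a tabulated BRDF, a cosine term and inverse-square range attenuation; the table layout, the lookup scheme and every name here are illustrative assumptions and do not reproduce the authors' GPU implementation.

```python
import numpy as np

def lidar_intensity(brdf_table, theta_i, theta_r, distance, emitted_power=1.0):
    """Toy intensity model for a single LiDAR return.

    brdf_table : 2D array indexed by (incidence, reflection) angle in degrees,
                 e.g. derived from goniophotometer measurements (assumption).
    theta_i    : angle between surface normal and incident beam (degrees).
    theta_r    : angle between surface normal and sensor direction (degrees).
    distance   : sensor-to-surface range in metres.
    """
    # Nearest-neighbour lookup into the measured BRDF table (bilinear in practice).
    i = int(np.clip(round(theta_i), 0, brdf_table.shape[0] - 1))
    r = int(np.clip(round(theta_r), 0, brdf_table.shape[1] - 1))
    brdf = brdf_table[i, r]

    # Lambertian-style cosine term plus inverse-square range attenuation.
    cos_i = np.cos(np.radians(theta_i))
    return emitted_power * brdf * max(cos_i, 0.0) / (distance ** 2)

# Example: a flat 90x90 BRDF table behaves like a diffuse surface.
table = np.full((90, 90), 1.0 / np.pi)
print(lidar_intensity(table, theta_i=30.0, theta_r=30.0, distance=25.0))
```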
Virtualized Point Cloud Rendering
José Antonio Collado, Alfonso López, Juan Manuel Jurado, and 1 more author
IEEE Transactions on Visualization and Computer Graphics, Oct 2025
Remote sensing technologies, such as LiDAR, produce billions of points that commonly exceed the storage capacity of the GPU, restricting their processing and rendering. Level of detail (LoD) techniques have been widely investigated, but building the LoD structures is also time-consuming. This study proposes a GPU-driven culling system focused on determining the number of points visible in every frame. It can manipulate point clouds of any arbitrary size while maintaining a low memory footprint in both the CPU and GPU. Instead of organizing point clouds into hierarchical data structures, these are split into groups of points sorted using the Hilbert encoding. This alternative alleviates the occurrence of anomalous groups found in Morton curves. Instead of keeping the entire point cloud in the GPU, points are transferred on demand to ensure real-time capability. Accordingly, our solution can manipulate huge point clouds even in commodity hardware with low memory capacities. Moreover, hole filling is implemented to cover the gaps derived from insufficient density and our LoD system. Our proposal was evaluated with point clouds of up to 18 billion points, achieving an average of 80 frames per second (FPS) without perceptible quality loss. Relaxing memory constraints further enhances visual quality while maintaining an interactive frame rate. We assessed our method on real-world data, comparing it against three state-of-the-art methods, demonstrating its ability to handle significantly larger point clouds.
@article{collado_virtualized_2025, title = {Virtualized {Point} {Cloud} {Rendering}}, volume = {31}, issn = {1941-0506}, url = {https://ieeexplore.ieee.org/document/10972035}, doi = {10.1109/TVCG.2025.3562696}, number = {10}, urldate = {2025-10-03}, journal = {IEEE Transactions on Visualization and Computer Graphics}, author = {Collado, José Antonio and López, Alfonso and Jurado, Juan Manuel and Jiménez, Juan Roberto}, month = oct, year = {2025}, keywords = {Real-time systems, GPGPU, Rendering (computer graphics), acceleration structures, Colored noise, dynamic rendering, GPU-driven, Graphics processing units, Hardware, Image color analysis, Laser radar, level of detail, out-of-core rendering, Pipelines, Point cloud compression, point cloud rendering, point-based models, Random access memory, rasterization, virtual memory system, visibility}, pages = {8026--8039}, }
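The grouping strategy described above (sorting points along a space-filling curve and splitting them into streamable chunks) can be illustrated with a short sketch. The snippet below uses a Morton (Z-order) code because it fits in a few lines, whereas the paper relies on Hilbert encoding precisely to avoid the anomalous jumps Morton curves introduce; the group size and function names are assumptions.

```python
import numpy as np

def morton_3d(ix, iy, iz, bits=10):
    """Interleave the bits of three integer coordinates (Z-order code).
    The paper uses Hilbert codes instead, which avoid the long jumps of Morton
    curves; Morton is shown here only because it fits in a few lines."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (3 * b)
        code |= ((iy >> b) & 1) << (3 * b + 1)
        code |= ((iz >> b) & 1) << (3 * b + 2)
    return code

def split_into_groups(points, group_size=4096, bits=10):
    """Sort points along a space-filling curve and split them into fixed-size
    groups that can be streamed to the GPU on demand."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    grid = ((points - mins) / (maxs - mins + 1e-9) * (2**bits - 1)).astype(np.int64)
    codes = np.array([morton_3d(x, y, z, bits) for x, y, z in grid])
    order = np.argsort(codes)
    sorted_pts = points[order]
    return [sorted_pts[i:i + group_size] for i in range(0, len(sorted_pts), group_size)]

groups = split_into_groups(np.random.rand(100_000, 3).astype(np.float32))
print(len(groups), groups[0].shape)
```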
2024
-
Change Detection in Point Clouds Using 3D Fractal Dimension
Juan C. Casas-Rosa, Pablo Navarro, Rafael J. Segura-Sánchez, and 5 more authors
Remote Sensing, Jan 2024
The management of large point clouds obtained by LiDAR sensors is an important topic in recent years due to the widespread use of this technology in a wide variety of applications and the increasing volume of data captured. One of the main applications of LiDAR systems is the study of the temporal evolution of the real environment. In open environments, it is important to know the evolution of erosive processes or landscape transformation. In the context of civil engineering and urban environments, it is useful for monitoring urban dynamics and growth, and changes during the construction of buildings or infrastructure facilities. The main problem with change detection (CD) methods is erroneous detection due to precision errors or the use of different capture devices at different times. This work presents a method to compare large point clouds, based on the study of the local fractal dimension of point clouds at multiple scales. Our method is robust in the presence of environmental and sensor factors that produce abnormal results with other methods. Furthermore, it is more stable than others in cases where there is no significant displacement of points but there is a local alteration of the structure of the point cloud. Moreover, the precision can be adapted to the complexity and density of the point cloud. Finally, our solution is faster than other CD methods such as distance-based methods and can run at O(1) under some conditions, which is important when working with large datasets. All these improvements make the proposed method more suitable than the others to solve complex problems with LiDAR data, such as storage, time series data management, visualization, etc.
@article{casas-rosa_change_2024, title = {Change {Detection} in {Point} {Clouds} {Using} {3D} {Fractal} {Dimension}}, volume = {16}, copyright = {http://creativecommons.org/licenses/by/3.0/}, issn = {2072-4292}, url = {https://www.mdpi.com/2072-4292/16/6/1054}, doi = {10.3390/rs16061054}, language = {en}, number = {6}, urldate = {2024-11-15}, journal = {Remote Sensing}, author = {Casas-Rosa, Juan C. and Navarro, Pablo and Segura-Sánchez, Rafael J. and Rueda-Ruiz, Antonio J. and López-Ruiz, Alfonso and Fuertes, José M. and Delrieux, Claudio and Ogayar-Anguita, Carlos J.}, month = jan, year = {2024}, keywords = {LiDAR, box counting, fractal dimension, point cloud comparison}, pages = {1054}, } -
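A box-counting estimate is the textbook way to compute a fractal dimension, and a compact version helps make the abstract concrete. The sketch below estimates a global box-counting dimension for a 3D point set; the paper instead evaluates the dimension locally and at multiple scales, so treat this as a simplified illustration rather than the published method.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a 3D point set.

    For each scale s, the bounding box is divided into s^3 cells and the number
    of occupied cells N(s) is counted; the dimension is the slope of
    log N(s) versus log s."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extent = (maxs - mins).max() + 1e-9
    counts = []
    for s in scales:
        cells = np.floor((points - mins) / extent * s).astype(np.int64)
        cells = np.clip(cells, 0, s - 1)
        counts.append(len(np.unique(cells, axis=0)))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# A uniformly filled cube should yield a dimension close to 3.
print(box_counting_dimension(np.random.rand(50_000, 3)))
```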
Eye-tracking on virtual reality: a survey
Jesús Moreno-Arjonilla, Alfonso López-Ruiz, J. Roberto Jiménez-Pérez, and 2 more authors
Virtual Reality, Feb 2024
Virtual reality (VR) has evolved substantially beyond its initial remit of gaming and entertainment, catalyzed by advancements such as improved screen resolutions and more accessible devices. Among various interaction techniques introduced to VR, eye-tracking stands out as a pivotal development. It not only augments immersion but offers a nuanced insight into user behavior and attention. This precision in capturing gaze direction has made eye-tracking instrumental for applications far beyond mere interaction, influencing areas like medical diagnostics, neuroscientific research, educational interventions, and architectural design, to name a few. Though eye-tracking’s integration into VR has been acknowledged in prior reviews, its true depth, spanning the intricacies of its deployment to its broader ramifications across diverse sectors, has been sparsely explored. This survey undertakes that endeavor, offering a comprehensive overview of eye-tracking’s state of the art within the VR landscape. We delve into its technological nuances, its pivotal role in modern VR applications, and its transformative impact on domains ranging from medicine and neuroscience to marketing and education. Through this exploration, we aim to present a cohesive understanding of the current capabilities, challenges, and future potential of eye-tracking in VR, underscoring its significance and the novelty of our contribution.
@article{moreno-arjonilla_eye-tracking_2024, title = {Eye-tracking on virtual reality: a survey}, volume = {28}, issn = {1434-9957}, shorttitle = {Eye-tracking on virtual reality}, url = {https://doi.org/10.1007/s10055-023-00903-y}, doi = {10.1007/s10055-023-00903-y}, language = {en}, number = {1}, urldate = {2024-11-15}, journal = {Virtual Reality}, author = {Moreno-Arjonilla, Jesús and López-Ruiz, Alfonso and Jiménez-Pérez, J. Roberto and Callejas-Aguilera, José E. and Jurado, Juan M.}, month = feb, year = {2024}, keywords = {Virtual reality, Artificial Intelligence, Attention, Eye-tracking, Perception}, pages = {38}, } -
Classification of Grapevine Varieties Using UAV Hyperspectral Imaging
Alfonso López, Carlos J. Ogayar, Francisco R. Feito, and 1 more author
Remote Sensing, Jan 2024
Classifying grapevine varieties is crucial in precision viticulture, as it allows for accurate estimation of vineyard row growth for different varieties and ensures authenticity in the wine industry. This task can be performed with time-consuming destructive methods, including data collection and analysis in the laboratory. In contrast, unmanned aerial vehicles (UAVs) offer a markedly more efficient and less restrictive method for gathering hyperspectral data, even though they may yield data with higher levels of noise. Therefore, the first task is the processing of these data to correct and downsample large amounts of data. In addition, the hyperspectral signatures of grape varieties are very similar. In this study, we propose the use of a convolutional neural network (CNN) to classify seventeen different varieties of red and white grape cultivars. Instead of classifying individual samples, our approach involves processing samples alongside their surrounding neighborhood for enhanced accuracy. The extraction of spatial and spectral features is addressed with (1) a spatial attention layer and (2) inception blocks. The pipeline goes from data preparation to dataset elaboration, finishing with the training phase. The fitted model is evaluated in terms of response time, accuracy and data separability and is compared with other state-of-the-art CNNs for classifying hyperspectral data. Our network was proven to be much more lightweight by using a limited number of input bands (40) and a reduced number of trainable weights (560k parameters). Hence, it reduced training time (1 h on average) over the collected hyperspectral dataset. In contrast, other state-of-the-art research requires large networks with several million parameters that require hours to be trained. Despite this, the evaluated metrics showed much better results for our network (approximately 99% overall accuracy), in comparison with previous works barely achieving 81% OA over UAV imagery. This notable OA was similarly observed over satellite data. These results demonstrate the efficiency and robustness of our proposed method across different hyperspectral data sources.
@article{lopez_classification_2024, title = {Classification of {Grapevine} {Varieties} {Using} {UAV} {Hyperspectral} {Imaging}}, volume = {16}, copyright = {http://creativecommons.org/licenses/by/3.0/}, issn = {2072-4292}, url = {https://www.mdpi.com/2072-4292/16/12/2103}, doi = {10.3390/rs16122103}, language = {en}, number = {12}, urldate = {2024-11-15}, journal = {Remote Sensing}, author = {López, Alfonso and Ogayar, Carlos J. and Feito, Francisco R. and Sousa, Joaquim J.}, month = jan, year = {2024}, keywords = {unmanned aerial vehicle, feature extraction, classification, deep learning, hyperspectral, grapevine}, pages = {2103}, } -
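To visualize what "a spatial attention layer plus inception blocks over hyperspectral patches" can look like, here is a toy PyTorch sketch for 40-band patches and 17 classes. The layer widths, the single-channel sigmoid attention and the branch sizes are assumptions for illustration only and do not reproduce the paper's 560k-parameter architecture.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Weights each pixel of the patch by a learned attention map (assumption:
    a single-channel sigmoid mask, one of several possible formulations)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))

class InceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions whose outputs are concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 3
        self.b1 = nn.Conv2d(in_ch, branch, 1, padding=0)
        self.b3 = nn.Conv2d(in_ch, branch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch - 2 * branch, 5, padding=2)

    def forward(self, x):
        return torch.relu(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class GrapevineNet(nn.Module):
    """Toy classifier for hyperspectral patches (40 bands, 17 varieties)."""
    def __init__(self, bands=40, classes=17):
        super().__init__()
        self.attention = SpatialAttention(bands)
        self.features = nn.Sequential(
            InceptionBlock(bands, 64),
            nn.MaxPool2d(2),
            InceptionBlock(64, 128),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, classes)

    def forward(self, x):              # x: (batch, bands, height, width)
        x = self.features(self.attention(x))
        return self.head(x.flatten(1))

logits = GrapevineNet()(torch.randn(8, 40, 9, 9))
print(logits.shape)  # torch.Size([8, 17])
```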
Automated detection and tracking of photovoltaic modules from 3D remote sensing data
Andressa Cardoso, David Jurado-Rodríguez, Alfonso López, and 2 more authors
Applied Energy, Aug 2024
This study addresses the growing demand for increased performance and reliability of photovoltaic (PV) installations by developing innovative monitoring technologies. The strategy consists of flying an unmanned aerial vehicle (UAV) equipped with a dual camera (RGB and thermal) over the PV plant of interest, followed by the generation of photogrammetric 3D models derived from the overlapped aerial images. The resulting datasets involve orthoimages and point clouds by processing RGB and thermal imagery. The key contribution of this study is twofold: (1) the thermal image mapping on dense and high-resolution point clouds that represent the status and geometry of PV solar modules, and (2) the automatic identification of individual solar panels in 3D space and their thermal characterization along their oriented surface. Then, the vector layer of each PV panel is projected onto the 3D thermal point cloud to extract the thermal values associated with each panel. To evaluate the capability of the proposed method, it was replicated in different scenarios, considering rural and urban environments with different light conditions and PV structures. The results demonstrate the robustness of our method, which achieves a remarkably high detection rate, around 99.12% of true positives, and a low false positive rate, close to 0.88%. Consequently, this method represents an advance over previous work by proposing a comprehensive and automated solution for individual and highly detailed monitoring of each solar panel from 3D remotely sensed data. This study opens up new frontier research related to real-time monitoring of photovoltaic modules, the inspection of solar photovoltaic cells, the simulation of solar resources and forecasting, the development of digital twins, solar radiation modelling, and analysis of modular floating solar farms under wave motion.
@article{cardoso_automated_2024, title = {Automated detection and tracking of photovoltaic modules from {3D} remote sensing data}, volume = {367}, issn = {0306-2619}, url = {https://www.sciencedirect.com/science/article/pii/S0306261924006251}, doi = {10.1016/j.apenergy.2024.123242}, urldate = {2024-11-15}, journal = {Applied Energy}, author = {Cardoso, Andressa and Jurado-Rodríguez, David and López, Alfonso and Ramos, M. Isabel and Jurado, Juan Manuel}, month = aug, year = {2024}, keywords = {Thermography, 3D data fusion, Automatic detection, Geographic information system, Solar energy}, pages = {123242}, } -
Generating implicit object fragment datasets for machine learning
Alfonso López, Antonio J. Rueda, Rafael J. Segura, and 3 more authors
Computers & Graphics, Dec 2024
One of the primary challenges inherent in utilizing deep learning models is the scarcity and accessibility hurdles associated with acquiring datasets of sufficient size to facilitate effective training of these networks. This is particularly significant in object detection, shape completion, and fracture assembly. Instead of scanning a large number of real-world fragments, it is possible to generate massive datasets with synthetic pieces. However, realistic fragmentation is computationally intensive in the preparation (e.g., pre-fractured models) and generation. Otherwise, simpler algorithms such as Voronoi diagrams provide faster processing speeds at the expense of compromising realism. In this context, it is required to balance computational efficiency and realism. This paper introduces a GPU-based framework for the massive generation of voxelized fragments derived from high-resolution 3D models, specifically prepared for their utilization as training sets for machine learning models. This rapid pipeline enables controlling how many pieces are produced, their dispersion and the appearance of subtle effects such as erosion. We have tested our pipeline with an archaeological dataset, producing more than 1M fragmented pieces from 1,052 Iberian vessels (GitHub). Although this work primarily intends to provide pieces as implicit data represented by voxels, triangle meshes and point clouds can also be inferred from the initial implicit representation. To underscore the unparalleled benefits of CPU and GPU acceleration in generating vast datasets, we compared against a realistic fragment generator that highlights the potential of our approach, both in terms of applicability and processing time. We also demonstrate the synergies between our pipeline and realistic simulators, which frequently cannot select the number and size of resulting pieces. To this end, a deep learning model was trained over realistic fragments and our dataset, showing similar results.
@article{lopez_generating_2024, title = {Generating implicit object fragment datasets for machine learning}, volume = {125}, issn = {0097-8493}, url = {https://www.sciencedirect.com/science/article/pii/S0097849324002395}, doi = {10.1016/j.cag.2024.104104}, urldate = {2024-11-15}, journal = {Computers \& Graphics}, author = {López, Alfonso and Rueda, Antonio J. and Segura, Rafael J. and Ogayar, Carlos J. and Navarro, Pablo and Fuertes, José M.}, month = dec, year = {2024}, keywords = {Voxel, Voronoi, Fracture dataset, Fragmentation, GPU programming}, pages = {104104}, } -
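The Voronoi-style fragmentation mentioned above is easy to sketch on a voxel grid: every occupied voxel is assigned to its nearest random seed. The snippet below shows that baseline idea in NumPy; the paper's GPU pipeline additionally controls fragment count, dispersion and erosion-like effects, so this is only a minimal illustration with made-up parameter names.

```python
import numpy as np

def voronoi_fragment(voxels, n_fragments=8, seed=0):
    """Assign every occupied voxel to its nearest random seed, producing a
    Voronoi-style fragmentation of a voxelized object."""
    rng = np.random.default_rng(seed)
    occupied = np.argwhere(voxels)                       # (N, 3) voxel indices
    seeds = occupied[rng.choice(len(occupied), n_fragments, replace=False)]
    # Distance from every occupied voxel to every seed; label = nearest seed.
    d = np.linalg.norm(occupied[:, None, :] - seeds[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    out = np.full(voxels.shape, -1, dtype=np.int32)      # -1 marks empty voxels
    out[tuple(occupied.T)] = labels
    return out

# Example: fragment a solid 32^3 cube into 8 pieces.
frag = voronoi_fragment(np.ones((32, 32, 32), dtype=bool))
print(np.unique(frag[frag >= 0]))
```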
Automatic Agrivoltaic Site Selection: a User-Friendly Interface powered by AHP Multicriteria Decision-Making
Andressa Sousa Cardoso, Alfonso López Ruiz, María Isabel Ramos Galán, and 2 more authors
In 41st European Photovoltaic Solar Energy Conference and Exhibition, Dec 2024
This paper presents an efficient approach to automatic agrivoltaic site selection, integrating a user-friendly interface with the Analytic Hierarchy Process (AHP) for multicriteria decision-making and advanced geospatial analysis. The goal is to empower users, including non-experts, with a practical tool for informed decision-making. Users provide raster and vector data from the region of interest and assign importance weights to layers. The software supports a wide range of spatial data, from solar exposure and slope to restricted areas, and allows for dynamic configuration of additional relevant layers. Layers can also include constraints, such as safety distances from structures. The tool optimizes site selection using efficient spatial searches with low computational complexity, while intermediate results are rendered in a graphic application. By evaluating criteria like solar exposure and topography, the application offers a systematic approach to site selection. Preliminary tests show promising results, and the final version is expected to deliver a valuable tool for sustainable agrivoltaic project planning.
@inproceedings{de_sousa_cardoso_automatic_2024, title = {Automatic {Agrivoltaic} {Site} {Selection}: a {User}-{Friendly} {Interface} powered by {AHP} {Multicriteria} {Decision}-{Making}}, isbn = {978-3-936338-90-4}, shorttitle = {Automatic {Agrivoltaic} {Site} {Selection}}, url = {https://userarea.eupvsec.org/proceedings/EU-PVSEC-2024/4do.3.4}, doi = {10.4229/EUPVSEC2024/4DO.3.4}, language = {en}, urldate = {2024-11-15}, booktitle = {41st {European} {Photovoltaic} {Solar} {Energy} {Conference} and {Exhibition}}, publisher = {WIP-Munich}, author = {de Sousa Cardoso, Andressa and López Ruiz, Alfonso and Ramos Galán, María Isabel and Jurado, Juan Manuel and Feito Higueruela, Francisco Ramón}, year = {2024}, keywords = {Dual Use and other Innovative PV Applications, PV Systems Engineering, Integrated/Applied PV}, pages = {020422--001--020422--006}, }
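As a concrete illustration of the AHP step, the sketch below derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and applies them as a weighted overlay of normalized raster layers with an optional restriction mask. The comparison values and layer names are made up for the example and are not the tool's actual configuration.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criterion weights from an AHP pairwise-comparison matrix using
    the principal eigenvector (standard AHP)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def suitability(layers, weights, mask=None):
    """Weighted overlay of normalized raster layers (values in 0-1), with an
    optional boolean mask for restricted areas (e.g. safety distances)."""
    score = sum(w * layer for w, layer in zip(weights, layers))
    if mask is not None:
        score = np.where(mask, score, 0.0)
    return score

# Example: solar exposure judged 3x as important as slope, 5x as important as access.
pairwise = [[1, 3, 5],
            [1/3, 1, 2],
            [1/5, 1/2, 1]]
w = ahp_weights(pairwise)
layers = [np.random.rand(100, 100) for _ in range(3)]   # normalized criteria rasters
print(w, suitability(layers, w).max())
```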
2023
-
Metaheuristics for the optimization of Terrestrial LiDAR set-up
Alfonso López, Carlos J. Ogayar, Juan M. Jurado, and 1 more author
Automation in Construction, Feb 2023
3D point clouds have a significant impact on a wide range of applications, although their acquisition is frequently conditioned by the occlusion of the objects in the scene. To address this problem, this paper describes an approach for optimizing LiDAR (Light Detection and Ranging) surveys using metaheuristics such as local searches and genetic algorithms. The method generates a set of optimal scanning locations to densely cover the real-world environment represented through 3D synthetic models. Compared to previous research, this paper handles 3D occlusion by varying the height of the sensor. Also, previously used metrics are compressed into three functions to avoid multi-objective optimization. Regarding performance, a LiDAR scanning solution based on GPU (Graphics Processing Unit) hardware is used. Several tests were conducted to show that the combination of local searches and genetic algorithms generates a reduced set of locations capable of optimizing the scanning of buildings.
@article{lopez_metaheuristics_2023, title = {Metaheuristics for the optimization of {Terrestrial} {LiDAR} set-up}, volume = {146}, issn = {0926-5805}, url = {https://www.sciencedirect.com/science/article/pii/S0926580522005453}, doi = {10.1016/j.autcon.2022.104675}, urldate = {2023-12-31}, journal = {Automation in Construction}, author = {López, Alfonso and Ogayar, Carlos J. and Jurado, Juan M. and Feito, Francisco R.}, month = feb, year = {2023}, keywords = {Genetic algorithm, LiDAR, GPGPU, Metaheuristic, Planning for Scanning}, pages = {104675}, } -
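To make the metaheuristic side tangible, here is a toy genetic algorithm that selects a small set of scanning positions maximizing coverage over precomputed visibility data. Population size, crossover and mutation rules are illustrative assumptions; the paper combines local searches with genetic algorithms and evaluates visibility with a GPU LiDAR simulator rather than the random data used here.

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage(scan_positions, visibility):
    """Fraction of scene cells seen from at least one selected position.
    visibility[p, c] is True if candidate position p sees cell c."""
    return visibility[scan_positions].any(axis=0).mean()

def genetic_placement(visibility, n_scans=5, pop=40, generations=60):
    """Tiny genetic algorithm selecting n_scans scanner locations."""
    n_pos = visibility.shape[0]
    population = [rng.choice(n_pos, n_scans, replace=False) for _ in range(pop)]
    for _ in range(generations):
        scores = np.array([coverage(ind, visibility) for ind in population])
        parents = [population[i] for i in np.argsort(scores)[-pop // 2:]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            pool = np.unique(np.concatenate([parents[a], parents[b]]))
            child = rng.choice(pool, n_scans, replace=False)          # crossover
            if rng.random() < 0.2:                                     # mutation
                child[rng.integers(n_scans)] = rng.integers(n_pos)
            children.append(child)
        population = parents + children
    best = max(population, key=lambda ind: coverage(ind, visibility))
    return best, coverage(best, visibility)

vis = rng.random((200, 5000)) < 0.02      # 200 candidate positions, 5000 scene cells
print(genetic_placement(vis))
```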
Nested spatial data structures for optimal indexing of LiDAR data
Carlos J. Ogayar-Anguita, Alfonso López-Ruiz, Antonio J. Rueda-Ruiz, and 1 more author
ISPRS Journal of Photogrammetry and Remote Sensing, Jan 2023
In this paper we present a flexible framework for creating spatial data structures to manage LiDAR point clouds in the context of spatial big data. For this purpose, standard approaches typically include the use of a single data structure to index point clouds. Some of them use a hybrid two-tier solution to optimize specific application purposes such as storage or rendering. In this article we introduce a meta-structure that can have unlimited depth and a custom, user-defined combination of nested structures, such as grids, quadtrees, octrees, or kd-trees. With our approach, the out-of-core indexing of point clouds can be adapted to different types of datasets, taking into account the spatial distribution of the data. Therefore, the most suitable spatial indexing can be achieved for any type of dataset, from small TLS-based scenes to planetary-scale ALS-based scenes. This approach allows us to work with overlapping datasets of different resolutions from different acquisition technologies in the same structure.
@article{ogayar-anguita_nested_2023, title = {Nested spatial data structures for optimal indexing of {LiDAR} data}, volume = {195}, issn = {0924-2716}, url = {https://www.sciencedirect.com/science/article/pii/S0924271622003112}, doi = {10.1016/j.isprsjprs.2022.11.018}, urldate = {2023-12-31}, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, author = {Ogayar-Anguita, Carlos J. and López-Ruiz, Alfonso and Rueda-Ruiz, Antonio J. and Segura-Sánchez, Rafael J.}, month = jan, year = {2023}, keywords = {LiDAR, Spatial big data, Spatial data structure, Ubiquitous Point Cloud}, pages = {287--297}, } -
Efficient generation of occlusion-aware multispectral and thermographic point clouds
Alfonso López, Carlos J. Ogayar, Juan M. Jurado, and 1 more author
Computers and Electronics in Agriculture, Apr 2023
The reconstruction of 3D point clouds from image datasets is a time-consuming task that has been frequently solved by performing photogrammetric techniques on every data source. This work presents an approach to efficiently build large and dense point clouds from co-acquired images. In our case study, the sensors co-acquire visible as well as thermal and multispectral imagery. Hence, RGB point clouds are reconstructed with traditional methods, whereas the rest of the data sources with lower resolution and less identifiable features are projected into the first one, i.e., the most complete and dense. To this end, the mapping process is accelerated using the Graphics Processing Unit (GPU) and multi-threading in the CPU (Central Processing Unit). The accurate colour aggregation in 3D points is guaranteed by taking into account the occlusion of foreground surfaces. Accordingly, our solution is shown to reconstruct much more dense point clouds than notable commercial software (286% on average), e.g., Pix4Dmapper and Agisoft Metashape, in much less time (−70% on average with respect to the best alternative).
@article{lopez_efficient_2023, title = {Efficient generation of occlusion-aware multispectral and thermographic point clouds}, volume = {207}, issn = {0168-1699}, url = {https://www.sciencedirect.com/science/article/pii/S016816992300100X}, doi = {10.1016/j.compag.2023.107712}, urldate = {2023-12-31}, journal = {Computers and Electronics in Agriculture}, author = {López, Alfonso and Ogayar, Carlos J. and Jurado, Juan M. and Feito, Francisco R.}, month = apr, year = {2023}, keywords = {UAV, GPGPU, Multispectral, 3D point cloud, Thermography}, pages = {107712}, } -
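The occlusion-aware mapping can be pictured as a two-pass z-buffer: first record the nearest depth per pixel, then let only points close to that depth receive a color. The NumPy sketch below follows that scheme on the CPU; the actual method runs in GPU compute shaders and multi-threaded CPU code, and the pinhole camera model, the depth tolerance and all names here are assumptions for illustration.

```python
import numpy as np

def project_visible(points, image, K, R, t, depth_eps=0.05):
    """Map image colors onto 3D points while discarding occluded points.

    K, R, t are the camera intrinsics and extrinsics; depth_eps is the
    tolerance used to accept a point as lying on the closest surface."""
    h, w = image.shape[:2]
    cam = (R @ points.T + t.reshape(3, 1)).T              # world -> camera space
    z = cam[:, 2]
    uv = (K @ cam.T).T
    u = (uv[:, 0] / z).round().astype(int)
    v = (uv[:, 1] / z).round().astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    zbuf = np.full((h, w), np.inf)
    for i in np.flatnonzero(valid):                        # pass 1: nearest depth
        zbuf[v[i], u[i]] = min(zbuf[v[i], u[i]], z[i])

    visible = np.zeros(len(points), dtype=bool)
    colors = np.zeros((len(points), image.shape[2]), dtype=image.dtype)
    for i in np.flatnonzero(valid):                        # pass 2: occlusion test
        if z[i] <= zbuf[v[i], u[i]] + depth_eps:
            visible[i] = True
            colors[i] = image[v[i], u[i]]
    return colors, visible

# Example with a synthetic camera looking down +Z.
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
pts = np.random.rand(1000, 3) + [0, 0, 2]
cols, vis = project_visible(pts, np.zeros((128, 128, 3)), K, np.eye(3), np.zeros(3))
print(vis.sum(), "visible points")
```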
Detection of landscape features with visible and thermal imaging at the Castle of Puerta Arenas
Carolina Collaro, Carmen Enríquez-Muñoz, Alfonso López, and 2 more authors
Archaeological and Anthropological Sciences, Sep 2023
There are some archaeological sites with hard accessibility which remain unexplored and barely documented. The use of unmanned aerial systems (UAS) alleviates this challenge with aerial observations monitored with distant remote control. In addition to acquiring images in the visible wavelengths, other devices can be coupled on aerial platforms to inspect beyond the remaining structure of an archaeological site. For instance, thermography has proven to be of great help in the detection of buried remains due to observed temperature anomalies. This work explores the Castle of Puerta Arenas fortress to build the first aerial 3D reconstruction of this site by using RGB and thermographic images collected from a UAS. Orthomosaics have been applied to hypothesize about the original shape of the fortress, whereas 3D reconstructions have been rather applied to visualization and analysis. In this regard, the explored remains have been processed as dense point clouds in the visible and long-wave infrared spectrum, with the latter leading to the detection of hypothetical and still unknown towers. The detection of anomalies has been automatized by performing statistical analyses, globally and limited to smaller 3D voxel neighbourhoods. As a result, the studied remains have been documented and observed from an unexplored perspective, helping their conservation and dissemination, as well as suggesting future excavations.
@article{collaro_detection_2023, title = {Detection of landscape features with visible and thermal imaging at the {Castle} of {Puerta} {Arenas}}, volume = {15}, issn = {1866-9565}, url = {https://doi.org/10.1007/s12520-023-01831-3}, doi = {10.1007/s12520-023-01831-3}, language = {en}, number = {10}, urldate = {2023-12-31}, journal = {Archaeological and Anthropological Sciences}, author = {Collaro, Carolina and Enríquez-Muñoz, Carmen and López, Alfonso and Enríquez, Carlos and Jurado, Juan M.}, month = sep, year = {2023}, keywords = {Photogrammetry, Structure from motion, Archaeology, Thermography, Unmanned aerial systems}, pages = {152}, } -
A Version Control System for Point Clouds
Carlos J. Ogayar-Anguita, Alfonso López-Ruiz, Rafael J. Segura-Sánchez, and 1 more author
Remote Sensing, Jan 2023
This paper presents a novel version control system for point clouds, which allows the complete editing history of a dataset to be stored. For each intermediate version, this system stores only the information that changes with respect to the previous one, which is compressed using a new strategy based on several algorithms. It allows undo/redo functionality in memory, which serves to optimize the operation of the version control system. It can also manage changes produced from third-party applications, which makes it ideal to be integrated into typical Computer-Aided Design workflows. In addition to automated management of incremental versions of point cloud datasets, the proposed system has a much lower storage footprint than the manual backup approach for most common point cloud workflows, which is essential when working with LiDAR (Light Detection and Ranging) data in the context of spatial big data.
@article{ogayar-anguita_version_2023, title = {A {Version} {Control} {System} for {Point} {Clouds}}, volume = {15}, copyright = {http://creativecommons.org/licenses/by/3.0/}, issn = {2072-4292}, url = {https://www.mdpi.com/2072-4292/15/18/4635}, doi = {10.3390/rs15184635}, language = {en}, number = {18}, urldate = {2023-12-31}, journal = {Remote Sensing}, author = {Ogayar-Anguita, Carlos J. and López-Ruiz, Alfonso and Segura-Sánchez, Rafael J. and Rueda-Ruiz, Antonio J.}, month = jan, year = {2023}, keywords = {point cloud, LiDAR, incremental change logs, spatial big data, version control systems}, pages = {4635}, } -
Nuevas herramientas para la modelización de datos procedentes de sensores / New tools for modelling sensor data
Alfonso López Ruiz
Jan 2023
PhD Dissertation. Publisher: Universidad de Jaén
@phdthesis{lopez_ruiz_nuevas_2023, type = {{PhD} {Dissertation}}, title = {Nuevas herramientas para la modelización de datos procedentes de sensores/new tools form modelling sensor data}, author = {López Ruiz, Alfonso}, year = {2023}, note = {{PhD} {Dissertation}. Publisher: Universidad de Jaén}, }
2022
-
Kalathos+: Construcción de datasets para la clasificación automática de fragmentos de vasijas cerámicas de torno
Rafael J. Segura, Antonio J. Rueda, Carlos J. Ogáyar, and 6 more authors
Jan 2022
In recent years, deep learning techniques have become a powerful alternative to classical machine learning techniques for classification tasks. This technology has been applied with great success in several fields, although a very large labelled dataset is needed to train such networks. This work presents a framework for the automatic generation of conveniently labelled fragments of wheel-thrown ceramic vessels, enabling the training of CNNs for the classification of these fragments and their subsequent use in reconstruction tasks.
@book{segura_kalathos_2022, title = {Kalathos+: {Construcción} de datasets para la clasificación automática de fragmentos de vasijas cerámicas de torno}, copyright = {Attribution 4.0 International License}, isbn = {978-3-03868-186-1}, shorttitle = {Kalathos+}, url = {https://diglib.eg.org:443/xmlui/handle/10.2312/ceig20221145}, language = {en}, urldate = {2024-01-02}, publisher = {The Eurographics Association}, author = {Segura, Rafael J. and Rueda, Antonio J. and Ogáyar, Carlos J. and Fuertes, José M. and García-Fernández, Ángel L. and Lucena, Manuel J. and López, Alfonso and Moreno, Isabel and Molinos, Manuel}, year = {2022}, doi = {10.2312/ceig.20221145}, } -
Modeling and Enhancement of LiDAR Point Clouds from Natural Scenarios
José Antonio Collado, Alfonso López, J. Roberto Jiménez-Pérez, and 3 more authors
In Eurographics 2022 - Posters, Jan 2022
The generation of realistic natural scenarios is a longstanding and ongoing challenge in Computer Graphics. A common source of real-environmental scenarios is open point cloud datasets acquired by LiDAR (Laser Imaging Detection and Ranging) devices. However, these data have low density and are not able to provide sufficiently detailed environments. In this study, we propose a method to reconstruct real-world environments based on data acquired from LiDAR devices that overcome this limitation and generate rich environments, including ground and high vegetation. Additionally, our proposal segments the original data to distinguish among different kinds of trees. The results show that the method is capable of generating realistic environments with the chosen density and including specimens of each of the identified tree types.
@inproceedings{collado_modeling_2022, title = {Modeling and {Enhancement} of {LiDAR} {Point} {Clouds} from {Natural} {Scenarios}}, copyright = {Attribution 4.0 International License}, isbn = {978-3-03868-171-7}, url = {https://diglib.eg.org:443/xmlui/handle/10.2312/egp20221016}, language = {en}, urldate = {2024-01-02}, publisher = {The Eurographics Association}, author = {Collado, José Antonio and López, Alfonso and Jiménez-Pérez, J. Roberto and Ortega, Lidia M. and Feito, Francisco R. and Jurado, Juan Manuel}, year = {2022}, doi = {10.2312/egp.20221016}, booktitle = {Eurographics 2022 - Posters}, } -
GPU-based Mapping of Thermal Imagery for Generating 3D Occlusion-Aware Point Clouds
Alfonso López, Juan M. Jurado, Carlos J. Ogayar, and 1 more author
In IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, Jul 2022
This work describes an efficient approach for generating large 3D thermal point clouds considering the occlusion of camera viewpoints. For that purpose, RGB and thermal imagery are first corrected and fused with an intensity correlation-based algorithm. Then, absolute temperature values are obtained from the normalized data. Finally, thermal imagery is mapped on the point cloud using the Graphics Processing Unit (GPU) hardware. The proposed occlusion-aware mapping algorithm is massively parallelized using OpenGL’s compute shaders. Our solution allows generating dense thermal point clouds in a lower response time compared with other notable software solutions (e.g., Agisoft Metashape or Pix4Dmapper) that yield results with a significantly lower point density.
@inproceedings{lopez_gpu-based_2022, title = {{GPU}-based {Mapping} of {Thermal} {Imagery} for {Generating} {3D} {Occlusion}-{Aware} {Point} {Clouds}}, url = {https://ieeexplore.ieee.org/document/9884240}, doi = {10.1109/IGARSS46834.2022.9884240}, urldate = {2024-01-02}, booktitle = {{IGARSS} 2022 - 2022 {IEEE} {International} {Geoscience} and {Remote} {Sensing} {Symposium}}, author = {López, Alfonso and Jurado, Juan M. and Ogayar, Carlos J. and Feito, Francisco R.}, month = jul, year = {2022}, pages = {1460--1463}, } -
Guided Modeling of Natural Scenarios: Vegetation and Terrain
José Antonio Collado, Alfonso López, Juan Roberto Jiménez Pérez, and 3 more authors
In CEIG 2022 - Spanish Computer Graphics Conference, Jul 2022
The generation of realistic natural scenarios is a longstanding and ongoing challenge in Computer Graphics. LiDAR (Laser Imaging Detection and Ranging) point clouds have been gaining interest for the representation and analysis of real-world scenarios. However, the output of these sensors is conditioned by several parameters, including, but not limited to, distance to scanning target, aperture angle, number of laser beams, as well as systematic and random errors for the acquisition process. Hence, LiDAR point clouds may present inaccuracies and low density, thus hardening their visualization. In this work, we propose reconstructing the surveyed environments to enhance the point cloud density and provide a 3D representation of the scenario. To this end, ground and vegetation layers are detected and parameterized to allow their reconstruction. As a result, point clouds of any required density can be modeled, as well as 3D realistic natural scenarios that may lead to procedural generation through their parameterization.
@inproceedings{collado_guided_2022, title = {Guided {Modeling} of {Natural} {Scenarios}: {Vegetation} and {Terrain}}, copyright = {Attribution 4.0 International License}, isbn = {978-3-03868-186-1}, shorttitle = {Guided {Modeling} of {Natural} {Scenarios}}, booktitle = {CEIG 2022 - Spanish Computer Graphics Conference}, url = {https://diglib.eg.org:443/handle/10.2312/ceig20221144}, language = {en}, urldate = {2024-01-02}, publisher = {The Eurographics Association}, author = {Collado, José Antonio and López, Alfonso and Pérez, Juan Roberto Jiménez and Ortega, Lidia M. and Jurado, Juan M. and Feito, Francisco}, year = {2022}, doi = {10.2312/ceig.20221144}, } -
A GPU-Accelerated Framework for Simulating LiDAR Scanning
Alfonso López, Carlos J. Ogayar, Juan M. Jurado, and 1 more author
IEEE Transactions on Geoscience and Remote Sensing, Jul 2022
In this work, we present an efficient graphics processing unit (GPU)-based light detection and ranging (LiDAR) scanner simulator. Laser-based scanning is a useful tool for applications ranging from reverse engineering or quality control at an object scale to large-scale environmental monitoring or topographic mapping. Beyond that, other specific applications require a large amount of LiDAR data during development, such as autonomous driving. Unfortunately, it is not easy to get a sufficient amount of ground-truth data due to time constraints and available resources. However, LiDAR simulation can generate classified data at a reduced cost. We propose a parameterized LiDAR to emulate a wide range of sensor models from airborne to terrestrial scanning. OpenGL’s compute shaders are used to massively generate beams emitted by the virtual LiDAR sensors and solve their collision with the surrounding environment even with multiple returns. Our work is mainly intended for the rapid generation of datasets for neural networks, consisting of hundreds of millions of points. The conducted tests show that the proposed approach outperforms a sequential LiDAR scanning. Its capabilities for generating huge labeled datasets have also been shown to improve previous studies.
@article{lopez_gpu-accelerated_2022, title = {A {GPU}-{Accelerated} {Framework} for {Simulating} {LiDAR} {Scanning}}, volume = {60}, issn = {1558-0644}, url = {https://ieeexplore.ieee.org/document/9751040}, doi = {10.1109/TGRS.2022.3165746}, urldate = {2023-12-31}, journal = {IEEE Transactions on Geoscience and Remote Sensing}, author = {López, Alfonso and Ogayar, Carlos J. and Jurado, Juan M. and Feito, Francisco R.}, year = {2022}, pages = {1--18}, } -
Strategies for the Storage of Large LiDAR Datasets—A Performance Comparison
Juan A. Béjar-Martos, Antonio J. Rueda-Ruiz, Carlos J. Ogayar-Anguita, and 2 more authors
Remote Sensing, Jan 2022
The widespread use of LiDAR technologies has led to an ever-increasing volume of captured data that pose a continuous challenge for its storage and organization, so that it can be efficiently processed and analyzed. Although the use of system files in formats such as LAS/LAZ is the most common solution for LiDAR data storage, databases are gaining in popularity due to their evident advantages: centralized and uniform access to a collection of datasets; better support for concurrent retrieval; distributed storage in database engines that allows sharding; and support for metadata or spatial queries by adequately indexing or organizing the data. The present work evaluates the performance of four popular NoSQL and relational database management systems with large LiDAR datasets: Cassandra, MongoDB, MySQL and PostgreSQL. To perform a realistic assessment, we integrate these database engines in a repository implementation with an elaborate data model that enables metadata and spatial queries and progressive/partial data retrieval. Our experimentation concludes that, as expected, there is a modest but significant performance difference in favor of NoSQL databases, and that Cassandra provides the best overall database solution for LiDAR data.
@article{bejar-martos_strategies_2022, title = {Strategies for the {Storage} of {Large} {LiDAR} {Datasets}—{A} {Performance} {Comparison}}, volume = {14}, copyright = {http://creativecommons.org/licenses/by/3.0/}, issn = {2072-4292}, url = {https://www.mdpi.com/2072-4292/14/11/2623}, doi = {10.3390/rs14112623}, language = {en}, number = {11}, urldate = {2023-12-31}, journal = {Remote Sensing}, author = {Béjar-Martos, Juan A. and Rueda-Ruiz, Antonio J. and Ogayar-Anguita, Carlos J. and Segura-Sánchez, Rafael J. and López-Ruiz, Alfonso}, month = jan, year = {2022}, keywords = {LiDAR, point clouds, databases, NoSQL}, pages = {2623}, } -
Generation of hyperspectral point clouds: Mapping, compression and rendering
Alfonso López, Juan M. Jurado, J. Roberto Jiménez-Pérez, and 1 more author
Computers & Graphics, Aug 2022
Hyperspectral data are being increasingly used for the characterization and understanding of real-world scenarios. In this field, UAV-based sensors bring the opportunity to collect multiple samples from different viewpoints. Thus, light-material interactions of real objects may be observed in outdoor scenarios with a significant spatial resolution (5 cm/pixel). Nevertheless, the generation of hyperspectral 3D data still poses challenges in post-processing due to the high geometric deformation of images. Most of the current solutions use both LiDAR (Light Detection and Ranging) and hyperspectral sensors, which are integrated into the same acquisition system. However, these present several limitations due to errors derived from inertial measurements for data fusion and the spatial resolution according to the LiDAR capabilities. In this work, a method is proposed for the generation of hyperspectral point clouds. Input data are formed by push-broom hyperspectral images and 3D point clouds. On the one hand, the point clouds may be obtained by applying a typical photogrammetric workflow or LiDAR technology. Then, hyperspectral images are geometrically corrected and aligned with the RGB orthomosaic. Accordingly, hyperspectral data are ready to be mapped on the 3D point cloud. This procedure is implemented on the GPU by testing which points are visible for each pixel of the hyperspectral imagery. This work also provides a novel solution to generate, compress and render 3D hyperspectral point clouds, enabling the study of geometry and the hyperspectral response of natural and artificial materials in the real world.
@article{lopez_generation_2022, title = {Generation of hyperspectral point clouds: {Mapping}, compression and rendering}, volume = {106}, issn = {0097-8493}, shorttitle = {Generation of hyperspectral point clouds}, url = {https://www.sciencedirect.com/science/article/pii/S0097849322001145}, doi = {10.1016/j.cag.2022.06.011}, urldate = {2023-12-31}, journal = {Computers \& Graphics}, author = {López, Alfonso and Jurado, Juan M. and Jiménez-Pérez, J. Roberto and Feito, Francisco R.}, month = aug, year = {2022}, keywords = {Massively parallel algorithms, Hyperspectral, Compression, Rendering}, pages = {267--276}, } -
Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry
Juan M. Jurado, Alfonso López, Luís Pádua, and 1 more author
International Journal of Applied Earth Observation and Geoinformation, Aug 2022
Three-dimensional (3D) image mapping of real-world scenarios has a great potential to provide the user with a more accurate scene understanding. This will enable, among others, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by the recent and fast-developing research in computational fields, however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain. Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned Aerial Vehicles (UAV) and sensors able to capture massive datasets with a high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant contributions for the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery. Surveyed applications are focused on agriculture and forestry since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches that are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are presented.
@article{jurado_remote_2022, title = {Remote sensing image fusion on {3D} scenarios: {A} review of applications for agriculture and forestry}, volume = {112}, issn = {1569-8432}, shorttitle = {Remote sensing image fusion on {3D} scenarios}, url = {https://www.sciencedirect.com/science/article/pii/S1569843222000589}, doi = {10.1016/j.jag.2022.102856}, urldate = {2023-12-31}, journal = {International Journal of Applied Earth Observation and Geoinformation}, author = {Jurado, Juan M. and López, Alfonso and Pádua, Luís and Sousa, Joaquim J.}, month = aug, year = {2022}, keywords = {3D Modeling, Data Fusion, Image Mapping, Survey}, pages = {102856}, } -
Reconstruction of tree branching structures from UAV-LiDAR data
José L. Cárdenas, Alfonso López, Carlos J. Ogayar, and 2 more authors
Frontiers in Environmental Science, Aug 2022
The reconstruction of tree branching structures is a longstanding problem in Computer Graphics which has been studied over several data sources, from photogrammetry point clouds to Terrestrial and Aerial Laser Imaging Detection and Ranging technology. However, most data sources present acquisition errors that make the reconstruction more challenging. Among them, the main challenge is the partial or complete occlusion of branch segments, thus leading to disconnected components when the reconstruction is resolved using graph-based approaches. In this work, we propose a hybrid method based on radius-based search and Minimum Spanning Tree for the tree branching reconstruction by handling occlusion and disconnected branches. Furthermore, we simplify previous work evaluating the similarity between ground-truth and reconstructed skeletons. Using this approach, our method is proved to be more effective than the baseline methods, regarding reconstruction results and response time. Our method yields better results on the complete explored radii interval, though the improvement is especially significant on the Ground Sampling Distance. In terms of latency, an outstanding performance is achieved in comparison with the baseline method.
@article{cardenas_reconstruction_2022, title = {Reconstruction of tree branching structures from {UAV}-{LiDAR} data}, volume = {10}, issn = {2296-665X}, url = {https://www.frontiersin.org/articles/10.3389/fenvs.2022.960083}, urldate = {2023-12-31}, journal = {Frontiers in Environmental Science}, author = {Cárdenas, José L. and López, Alfonso and Ogayar, Carlos J. and Feito, Francisco R. and Jurado, Juan M.}, year = {2022}, }
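The radius-based search plus Minimum Spanning Tree combination lends itself to a short SciPy sketch: connect points within a radius, weight edges by distance and keep the MST as a rough skeleton. The snippet below shows only that baseline; handling occluded or disconnected branches, as the paper does, requires additional repair steps, and the radius value here is arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def branch_skeleton_edges(points, radius=0.15):
    """Connect points within `radius` and extract a Minimum Spanning Tree as a
    rough branching skeleton (baseline sketch only)."""
    tree = cKDTree(points)
    pairs = np.array(list(tree.query_pairs(radius)))          # (E, 2) index pairs
    if len(pairs) == 0:
        return np.empty((0, 2), dtype=int)
    w = np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
    n = len(points)
    graph = coo_matrix((w, (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    mst = minimum_spanning_tree(graph).tocoo()
    return np.column_stack([mst.row, mst.col])                # skeleton edges

pts = np.cumsum(np.random.rand(200, 3) * 0.05, axis=0)        # a synthetic "branch"
print(len(branch_skeleton_edges(pts)), "edges")
```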
2021
-
Comparison of GPU-based Methods for Handling Point Cloud Occlusion
Alfonso López, Juan Manuel Jurado, Emilio José Padrón, and 2 more authors
In CEIG 2021 - Spanish Computer Graphics Conference, Aug 2021
Three-dimensional point clouds have conventionally been used along with several sources of information. This fusion can be performed by projecting the point cloud into the image plane and retrieving additional data for each point. Nevertheless, the raw projection omits the occlusion caused by foreground surfaces, thus assigning wrong information to 3D points. For large point clouds, testing the occlusion of each point from every viewpoint is a time-consuming task. Hence, we propose several algorithms implemented in GPU and based on the use of z-buffers. Given the size of nowadays point clouds, we also adapt our methodologies to commodity hardware by splitting the point cloud into several chunks. Finally, we compare their performance through the response time.
@inproceedings{lopez_comparison_2021, title = {Comparison of {GPU}-based {Methods} for {Handling} {Point} {Cloud} {Occlusion}}, isbn = {978-3-03868-160-1}, url = {https://diglib.eg.org:443/xmlui/handle/10.2312/ceig20211364}, language = {en}, urldate = {2024-01-02}, publisher = {The Eurographics Association}, author = {López, Alfonso and Jurado, Juan Manuel and Padrón, Emilio José and Ogayar, Carlos Javier and Feito, Francisco Ramón}, year = {2021}, doi = {10.2312/ceig.20211364}, booktitle = {CEIG 2021 - Spanish Computer Graphics Conference}, } -
A GPU-accelerated LiDAR Sensor for Generating Labelled Datasets
Alfonso López, Carlos Javier Ogayar Anguita, and Francisco Ramón Feito Higueruela
In CEIG 2021 - Spanish Computer Graphics Conference, Aug 2021
This paper presents a GPU-based LiDAR simulator to generate large datasets of ground-truth point clouds. LiDAR technology has significantly increased its impact on academic and industrial environments. However, some of its applications require a large amount of annotated LiDAR data. Furthermore, there exist many types of LiDAR sensors. Therefore, developing a parametric LiDAR model allows simulating a wide range of LiDAR scanning technologies and obtaining a significant number of point clouds at no cost. Beyond their intensity data, these synthetic point clouds can be classified with any level of detail.
@inproceedings{lopez_gpu-accelerated_2021, title = {A {GPU}-accelerated {LiDAR} {Sensor} for {Generating} {Labelled} {Datasets}}, isbn = {978-3-03868-160-1}, url = {https://diglib.eg.org:443/xmlui/handle/10.2312/ceig20211360}, language = {en}, urldate = {2024-01-02}, publisher = {The Eurographics Association}, author = {López, Alfonso and Anguita, Carlos Javier Ogayar and Higueruela, Francisco Ramón Feito}, year = {2021}, doi = {10.2312/ceig.20211360}, booktitle = {CEIG 2021 - Spanish Computer Graphics Conference}, } -
A framework for registering UAV-based imagery for crop-tracking in Precision Agriculture
Alfonso López, Juan M. Jurado, Carlos J. Ogayar, and 1 more author
International Journal of Applied Earth Observation and Geoinformation, May 2021
Multiple types of images provide useful information about a crop, but image fusion is still a challenge in Precision Agriculture (PA). We describe a framework which manages a multi-layer registration model of heterogeneous images obtained by an unmanned aerial vehicle (UAV) by proposing pair-to-pair steps through a registration method invariant to intensity differences, allowing us to connect different aerial images with significant differences. Correction of deformed images is treated as a first step before applying our registration algorithms. These methods form the basis of more advanced systems that combine 2D and spatial information, and therefore represent the link between several types of images. The evaluation shows the flexibility of our framework when dealing with different requirements. The effectiveness of the Enhanced Correlation Coefficient method is proved, showing it to be a suitable method for the registration of heterogeneous images.
@article{lopez_framework_2021, title = {A framework for registering {UAV}-based imagery for crop-tracking in {Precision} {Agriculture}}, volume = {97}, issn = {1569-8432}, url = {https://www.sciencedirect.com/science/article/pii/S030324342030917X}, doi = {10.1016/j.jag.2020.102274}, urldate = {2023-12-31}, journal = {International Journal of Applied Earth Observation and Geoinformation}, author = {López, Alfonso and Jurado, Juan M. and Ogayar, Carlos J. and Feito, Francisco R.}, month = may, year = {2021}, keywords = {Image registration, Multispectral imagery, Thermal imagery, Tree recognition}, pages = {102274}, } -
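Since the framework builds on the Enhanced Correlation Coefficient algorithm, a minimal OpenCV usage sketch may help: ECC maximizes a correlation measure that is invariant to intensity differences, which is why it suits heterogeneous RGB/thermal/multispectral pairs. The motion model, iteration count and file names below are assumptions, not the framework's actual settings, and OpenCV 4.1 or newer is assumed for the call signature.

```python
import cv2
import numpy as np

def register_ecc(reference, moving, motion=cv2.MOTION_AFFINE, iterations=200):
    """Align `moving` to `reference` with OpenCV's Enhanced Correlation
    Coefficient algorithm and return the warped image, the warp and the score."""
    ref = cv2.normalize(reference.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    mov = cv2.normalize(moving.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iterations, 1e-6)
    # Signature with mask and Gaussian filter size requires OpenCV >= 4.1.
    cc, warp = cv2.findTransformECC(ref, mov, warp, motion, criteria, None, 5)
    h, w = reference.shape[:2]
    aligned = cv2.warpAffine(moving, warp, (w, h),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    return aligned, warp, cc

# Example usage with hypothetical file names:
# aligned, H, score = register_ecc(cv2.imread("rgb.png", 0), cv2.imread("thermal.png", 0))
```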
An optimized approach for generating dense thermal point clouds from UAV-imagery
Alfonso López, Juan M. Jurado, Carlos J. Ogayar, and 1 more author
ISPRS Journal of Photogrammetry and Remote Sensing, Dec 2021
Thermal infrared (TIR) images acquired from Unmanned Aircraft Vehicles (UAV) are gaining scientific interest in a wide variety of fields. However, the reconstruction of three-dimensional (3D) point clouds utilizing consumer-grade TIR images presents multiple drawbacks as a consequence of low-resolution and induced aberrations. Consequently, these problems may lead photogrammetric techniques, such as Structure from Motion (SfM), to generate poor results. This work proposes the use of RGB point clouds estimated from SfM as the input for building thermal point clouds. For that purpose, RGB and thermal imagery are registered using the Enhanced Correlation Coefficient (ECC) algorithm after removing acquisition errors, thus allowing us to project TIR images into an RGB point cloud. Furthermore, we consider several methods to provide accurate thermal values for each 3D point. First, the occlusion problem is solved through two different approaches, so that points that are not visible from a viewing angle do not erroneously receive values from foreground objects. Then, we propose a flexible method to aggregate multiple thermal values considering the dispersion from such aggregation to the image samples. Therefore, it minimizes error measurements. A naive classification algorithm is then applied to the thermal point clouds as a case study for evaluating the temperature of vegetation and ground points. As a result, our approach builds thermal point clouds with up to 798.69% more point density than results from other commercial solutions. Moreover, it minimizes the build time by using parallel computing for time-consuming tasks. Despite obtaining larger point clouds, we report up to 96.73% less processing time per 3D point.
@article{lopez_optimized_2021, title = {An optimized approach for generating dense thermal point clouds from {UAV}-imagery}, volume = {182}, issn = {0924-2716}, url = {https://www.sciencedirect.com/science/article/pii/S0924271621002604}, doi = {10.1016/j.isprsjprs.2021.09.022}, urldate = {2023-12-31}, journal = {ISPRS Journal of Photogrammetry and Remote Sensing}, author = {López, Alfonso and Jurado, Juan M. and Ogayar, Carlos J. and Feito, Francisco R.}, month = dec, year = {2021}, keywords = {GPU computing, Point cloud, Image processing, Occlusion, Thermal imagery, UAV imagery}, pages = {78--95}, }
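One distinctive step above is aggregating the many temperature samples each 3D point receives while accounting for their dispersion. The sketch below shows one plausible rule (discard samples far from the median before averaging); both the threshold and the rule itself are assumptions for illustration rather than the paper's exact aggregation.

```python
import numpy as np

def aggregate_temperatures(samples, max_std=1.5):
    """Fuse the multiple temperature samples a 3D point receives from different
    thermal images. Samples farther than `max_std` standard deviations from the
    median are discarded before averaging (illustrative rule)."""
    samples = np.asarray(samples, dtype=float)
    med, std = np.median(samples), samples.std()
    if std == 0:
        return med
    kept = samples[np.abs(samples - med) <= max_std * std]
    return kept.mean() if len(kept) else med

print(aggregate_temperatures([21.3, 21.6, 21.4, 35.0]))   # the 35.0 outlier is rejected
```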
2020
-
Simulación de escaneados 3D
Alfonso López Ruiz
Jun 2020
Master Dissertation. Publisher: Universidad de Jaén
3D point clouds acquired with a LiDAR sensor are used in a considerable number of applications: from the preparation and verification of field work to tasks related to artificial intelligence, such as autonomous driving or the training of robotic systems. However, acquiring them entails an economic and time cost and, beyond acquisition, very few labelled datasets applicable to machine learning and computer vision tasks are available. Moreover, it is common to find point clouds whose labels are reduced to a few classes, often assigned manually, which means that errors may exist in the labelling process, beyond the limited level of detail. Therefore, simulating a LiDAR sensor over a modelled 3D scenario makes it possible to obtain synthetic, correctly labelled point clouds, with classes adjusted to a specific scenario and a customized level of detail. In addition, point clouds can be generated in large quantities by introducing procedural scenarios. The physical behaviour of the sensor means that this problem can represent a heavy workload, so introducing parallel computing can help reduce the response time of the scanning process. Furthermore, the sensor simulation includes not only the basic generation of a point cloud but also the introduction of the most common errors associated with a LiDAR device, in order to reproduce its behaviour as faithfully as possible.
@thesis{lopez_ruiz_simulacion_2020, title = {Simulación de escaneados {3D}}, url = {https://hdl.handle.net/10953.1/19941}, language = {spa}, urldate = {2025-11-26}, author = {López Ruiz, Alfonso}, month = jun, year = {2020}, note = {{Master} {Dissertation}. Publisher: Universidad de Jaén}, type = {{Master} {Dissertation}}, }
2019
-
Multispectral Registration, Undistortion and Tree Detection for Precision Agriculture
Alfonso López Ruiz, Juan Manuel Jurado Rodríguez, Carlos Javier Ogayar Anguita, and 1 more author
Jun 2019
Multi-lens multispectral cameras allow us to record multispectral information for a whole area of terrain, even though we may only need the vegetation data. Based on the intensity of each multispectral image we can retrieve the contours of the trees that appear on the recorded terrain. However, multispectral cameras use a physically different lens for each range of wavelengths and misregistration effects could appear due to the different viewing positions. As these types of lenses are dedicated to capturing larger areas of terrain, their focal distance is lower and because of this we get what is called a fisheye distortion. Therefore, if we want to retrieve the shape of each tree and its multispectral data, we need to process the channels so that they are all represented as undistorted images under the same reference system.
@book{ruiz_multispectral_2019, title = {Multispectral {Registration}, {Undistortion} and {Tree} {Detection} for {Precision} {Agriculture}}, isbn = {978-3-03868-093-2}, url = {https://diglib.eg.org:443/xmlui/handle/10.2312/ceig20191209}, language = {en}, urldate = {2024-01-02}, publisher = {The Eurographics Association}, author = {Ruiz, Alfonso López and Rodríguez, Juan Manuel Jurado and Anguita, Carlos Javier Ogayar and Higueruela, Francisco Ramón Feito}, year = {2019}, doi = {10.2312/ceig.20191209}, booktitle = {CEIG 2019 - Spanish Computer Graphics Conference}, } -
Prototipo de control avanzado de grandes plantaciones mediante teledetección
Alfonso López Ruiz
Jun 2019
Bachelor Dissertation. Publisher: Universidad de Jaén
Technology, and therefore precision agriculture, is a key factor in increasing agricultural production as optimally as possible. This work proposes the use of data from several sensors (multispectral, thermal, etc.) to obtain useful information about a crop, after first reverting any errors that the captured imagery may contain, so that it is transformed into a state considered optimal for the study. A series of algorithms and operations are thus defined to achieve goals such as the extraction of trees or the georeferencing of image points. The viability of these algorithms must be tested before their introduction into a future application of use to the farmer.
@thesis{lopez_ruiz_prototipo_2019, title = {Prototipo de control avanzado de grandes plantaciones mediante teledetección}, url = {https://hdl.handle.net/10953.1/14302}, language = {spa}, urldate = {2025-11-26}, author = {López Ruiz, Alfonso}, month = jun, year = {2019}, note = {{Bachelor} {Dissertation}. Publisher: Universidad de Jaén}, type = {{Bachelor} {Dissertation}}, }