Indoor Localization Systems for Unmanned Systems using Ultra-Wideband Technology, Inertial Sensors and Vision
Introduction: The localization and navigation of UAVs in indoor environments is still an open problem. Although outdoor positioning of UAVs has greatly benefited from GPS technology, GPS-based localization remains impractical in closed environments, mainly due to the high attenuation of GPS signals. Several ranging technologies are available for indoor positioning, but most of them lack the accuracy and update rate required for position estimation. In our research we aimed to provide a precise and fast localization system for the autonomous navigation of UAVs in indoor environments. In the first stage of the project, we analyzed several ranging techniques and, among all the analyzed technologies, we selected UWB (Ultra-Wideband) thanks to its high accuracy in indoor position estimation. However, we found two limitations of UWB: the low rate at which the position estimate is provided, and the lower accuracy on the vertical axis (due to reflection of the signal on the ground) compared to the horizontal plane. Because both precise and fast localization of unmanned systems is needed in indoor environments, in this project we investigated the use of Ultra-Wideband technology, in conjunction with inertial and vision sensors, for the development of high-accuracy, high-rate localization algorithms. We overcame the limited position rate of UWB by exploiting the fast rate of inertial sensors. Based on our experimental results, we found that integrating UWB with inertial data, and using vision data as an additional aiding source in the sensor fusion algorithm, overcomes these limitations and provides the centimeter-level position estimation necessary to carry out the critical task of autonomous landing.
We found that the integration of UWB ranging with inertial and vision data allows precise localization of the UAV during normal navigation and, in particular, during the critical task of autonomous landing. These localization algorithms, based on well-known sensor fusion techniques, have been developed on real hardware (embedded computers) and successfully tested in real scenarios.
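The high-rate/low-rate fusion scheme described above can be illustrated with a minimal one-dimensional Kalman filter that predicts the state at every IMU sample and corrects it only when a (slower) UWB position fix arrives. This is only a sketch of the general idea: the state model, rates, and noise values below are illustrative assumptions, not the project's actual filter.

```python
import numpy as np

def fuse_uwb_imu(accels, uwb_meas, dt=0.01, uwb_every=10,
                 accel_var=0.05, uwb_var=0.04):
    """Toy 1-D Kalman filter: high-rate IMU prediction, low-rate UWB update.

    State x = [position, velocity]; `accels` arrive at 1/dt Hz, while a UWB
    position fix arrives only every `uwb_every` IMU samples. Illustrative only.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration input vector
    H = np.array([[1.0, 0.0]])              # UWB measures position only
    Q = accel_var * np.outer(B, B)          # process noise from accel noise
    R = np.array([[uwb_var]])               # UWB measurement noise

    x = np.zeros(2)
    P = np.eye(2)
    track = []
    for k, a in enumerate(accels):
        # Prediction at every IMU sample (high rate)
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Correction only when a UWB fix is available (low rate)
        if k % uwb_every == 0:
            z = np.array([uwb_meas[k // uwb_every]])
            y = z - H @ x                   # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)
```

The position estimate is thus available at the full IMU rate, while the UWB fixes keep the integrated inertial solution from drifting.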
Simulation Framework for Cooperative Robotics
Introduction: The latest advancements in embedded computing, sensors, and communication technologies have accelerated the development of autonomous systems and have also enabled research on cooperative robotic systems, where capabilities are measured in terms of the team rather than a single robot. Cooperative robotic systems are highly complex, so it becomes fundamental to simulate every mission stage, exploiting the benefits of simulation such as repeatability, modularity, and low cost. Because of the need for a highly realistic simulation environment specifically designed for cooperating systems, and for a rational process to rapidly model and test novel features for cooperative robotics, in this project we developed a simulation framework for designing and testing control systems for cooperative robotics, combining a 3D simulated world based on accurate physics engines with the ease of use of design tools such as MATLAB/Simulink for developing and testing code for cooperative tasks. We started by developing 3D models of each robot, providing a physical description of each one (dimensions, mass, number of motors), as well as 3D models of different environments (buildings, roads, obstacles). We then developed a software interface with MATLAB to link each robot to a specific control system (developed in Simulink). Thanks to this interface we were able to test several control strategies with minimal effort. The proposed simulation framework allowed us to successfully test control systems that can then be deployed on real hardware with minimal effort. Several application examples, such as formation control of several UAVs and mission management for cooperation between ground and aerial robots, have been proposed and successfully tested using our simulation framework.
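The robot-to-controller binding at the heart of the framework can be sketched as follows. The real framework links Simulink controllers to robots living in a 3D physics simulation; this Python sketch only mimics the binding pattern with toy 1-D dynamics, and every class and function name here is hypothetical.

```python
class Robot:
    """Toy robot model: a 1-D double integrator with a given mass (illustrative)."""
    def __init__(self, name, mass):
        self.name, self.mass = name, mass
        self.pos, self.vel = 0.0, 0.0

    def step(self, force, dt):
        acc = force / self.mass
        self.pos += self.vel * dt + 0.5 * acc * dt**2
        self.vel += acc * dt

class SimFramework:
    """Binds each robot to its own controller, mirroring the way the
    MATLAB interface links robots to Simulink control systems."""
    def __init__(self, dt=0.01):
        self.dt, self.bindings = dt, []

    def bind(self, robot, controller):
        self.bindings.append((robot, controller))

    def run(self, steps):
        for _ in range(steps):
            for robot, controller in self.bindings:
                force = controller(robot)   # controller reads robot state
                robot.step(force, self.dt)  # physics engine advances robot

def make_pd_controller(target, kp=5.0, kd=2.0):
    """Example controller: PD law driving a robot toward `target`."""
    return lambda r: kp * (target - r.pos) - kd * r.vel
```

Swapping control strategies then amounts to binding a different controller function to the same robot model, which is the "minimal effort" property the framework aims for.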
Autonomous Landing of Rotary-Wing UAVs using Vision, Inertial Sensors and GPS
Introduction: Autonomous takeoff and landing of UAVs requires a precise measurement of the position of the UAV with respect to the landing area. Unlike GPS, which works only in outdoor environments and with an accuracy of around 5 meters, cameras can be successfully used to precisely estimate the position of the UAV. However, cameras produce a large amount of data that must be processed in real time to be effective. The current state of the art relies mainly on classical CPU-based systems, which cannot provide an estimate of the UAV position with the accuracy and speed required during the critical task of autonomous landing. Because of the need for high control performance of the UAV, especially during takeoff and landing, in this project we explored the use of parallel computing on the multi-core processors typically used for graphics rendering (GPUs) in order to provide fast and precise pose estimation of the UAV when close to the landing area. After analyzing the current state of the art on autonomous landing, we found that none of the existing solutions provide fully vision-based, on-board pose estimation at a high frame rate with high-definition images. We therefore developed a pose estimation system based on a predefined marker and an embedded CPU/GPU board for image processing. The software realized for the embedded CPU/GPU uses a parallel-computing approach: instead of using a single powerful processor (CPU approach) to manage a complex task, we use thousands of small processors, each of which solves a subset of the complex task (GPU approach). We found that, thanks to the high parallelism of the GPU, the developed algorithm is able to detect the landing pad and provide a pose estimate at a minimum frame rate greater than 30 fps (frames per second), regardless of the complexity of the image to be processed, which is sufficient to guarantee full control of the UAV during the autonomous landing task.
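The geometric core of marker-based pose estimation can be sketched independently of the GPU pipeline: from the four corner pixels of a planar marker of known size and the camera intrinsics, a homography is estimated (DLT) and decomposed into rotation and translation. This shows the math only; the actual system performs detection and estimation in parallel on the GPU, and the function names below are hypothetical.

```python
import numpy as np

def homography(world_xy, img_px):
    """Direct Linear Transform: homography from 4+ planar correspondences."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, img_px):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)          # null vector = flattened homography

def marker_pose(world_xy, img_px, K):
    """Recover (R, t) of a planar marker from its corner pixels (sketch).

    world_xy: marker corners on the plane z=0, in meters
    img_px:   matching pixel coordinates; K: camera intrinsic matrix
    """
    Hn = np.linalg.inv(K) @ homography(world_xy, img_px)
    s = 1.0 / np.linalg.norm(Hn[:, 0])   # scale fixed by unit rotation column
    if Hn[2, 2] < 0:                     # keep the marker in front of the camera
        s = -s
    r1, r2, t = s * Hn[:, 0], s * Hn[:, 1], s * Hn[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t
```

In practice the corner extraction step dominates the computation on high-definition images, which is why it benefits most from GPU parallelism.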
Dependable Multi Robot Cooperative Tasking in Uncertain and Dynamic Environments
Introduction: Our research focuses on the development of cooperative robotic systems composed of heterogeneous robots (ground robots and aerial robots). Our proposed design methodology is top-down in nature and provides correct-by-construction controllers for each agent; this complements the prevailing bottom-up, trial-and-error practices in robotic system design, where local interactions are usually predefined heuristically, with inspiration from natural social behaviors. Furthermore, the effectiveness of the proposed framework is demonstrated by a detailed experimental study based on the implementation of a multi-robot coordination scenario. The proposed hardware/software architecture, with each robot's communication and localization capabilities, is exploited to examine automatic supervisor synthesis with inter-robot communication.
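The supervisor-synthesis idea can be illustrated with a minimal Ramadge-Wonham-style sketch: given a finite-state model of the team's behavior and a set of forbidden states, compute the states from which safety can still be enforced and the controllable events the supervisor must disable. This is a toy version under strong simplifying assumptions (explicit state space, full observation), not the project's actual synthesis tool.

```python
def supervisor(trans, controllable, bad):
    """Toy safe-supervisor computation (Ramadge-Wonham flavor).

    trans:        dict (state, event) -> next state
    controllable: events the supervisor is allowed to disable
    bad:          forbidden states
    Returns (safe_states, disable), where disable[s] is the set of
    controllable events that must be blocked in state s.
    """
    states = {s for s, _ in trans} | set(trans.values())
    unsafe = set(bad)
    # Fixpoint: a state becomes unsafe if an *uncontrollable* event
    # leads from it into the unsafe set (the supervisor cannot block it).
    changed = True
    while changed:
        changed = False
        for (s, e), t in trans.items():
            if t in unsafe and e not in controllable and s not in unsafe:
                unsafe.add(s)
                changed = True
    safe = states - unsafe
    disable = {s: {e for (q, e), t in trans.items()
                   if q == s and e in controllable and t in unsafe}
               for s in safe}
    return safe, disable
```

For example, modeling two robots that share a narrow corridor (with "enter" events controllable and "exit" events not) yields a supervisor that disables one robot's entry exactly while the other robot is inside, which is the correct-by-construction guarantee the text refers to.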
Links to the projects: