| Theme | Factory planning, Artificial Intelligence, Industry 4.0, Automation |
|---|---|
| Project title | Autonomous drone flight in a production environment for logistical process support (Autodrohne in der Produktion) |
| Project duration | 01.10.2020 – 30.09.2022 |
The goal of the research project was the development of an Unmanned Aircraft System (UAS) that autonomously flies through factories while recording data for 3D factory layouts. In the future, this so-called "autodrone" (autonomous drone) is intended to make data acquisition for factory planning processes considerably faster and less labor-intensive. Drone-based automated 3D layout acquisition was already implemented successfully in the earlier research project Instant Factory Maps, but only with manual drone control.
Until now, autonomous UAS flight in an unknown, changing environment – let alone inside the enclosed spaces of a factory – was not possible. The research project created the prerequisites for it.
On the one hand, a safety concept was developed to enable the indoor use of UAS. On the other hand, a demonstrator was built in the research project. To this end, suitable hardware components were selected – drive motors, a load-bearing frame, a processing unit, and sensors for environment perception – and AI algorithms for path planning and navigation were developed that allow the UAS to move collision-free through unknown, dynamic environments. The AI algorithms were first tested and optimized in a simulation environment and then combined with the hardware to form the demonstrator.
Using the autodrone demonstrator, the scientists experimentally validated autonomous path planning and navigation.
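To make the described sense-decide-act cycle concrete, the following is a minimal sketch of one navigation step; the interface names and the discrete action set are hypothetical illustrations, since the project page does not document the actual implementation.

```python
import numpy as np

ACTIONS = ["forward", "left", "right", "up", "down", "hover"]  # assumed action set

def select_action(q_network, depth_image):
    """Pick the greedy action from a trained Q-network for one depth frame."""
    obs = depth_image.astype(np.float32) / max(float(depth_image.max()), 1e-6)
    q_values = q_network(obs[None, ...])  # shape: (1, len(ACTIONS))
    return ACTIONS[int(np.argmax(q_values))]

# Smoke test with a random stand-in for the trained network:
rng = np.random.default_rng(0)
fake_net = lambda obs: rng.normal(size=(1, len(ACTIONS)))
print(select_action(fake_net, rng.uniform(0.1, 10.0, size=(64, 64))))
```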
Past events
- 21.01.2021 – Webmeeting
- 15.07.2021 – Webmeeting
- 17.11.2021 – Webmeeting
- February 2022 – Webmeeting
- March 2022 – Webmeeting
Publications about the project
The automated indoor exploration and mapping process constitutes a key method for the digitalization of environmental structures, particularly in the context of factory planning projects. In the present work, four requirements are derived from this use case. A review of the state of the art reveals that existing approaches only partially satisfy these requirements and address the sub-tasks separately. To close the resulting research gap, an integrated reinforcement learning algorithm is applied and evaluated within a purpose-built simulation environment. The virtual indoor environment models a UAS, a complex indoor structure, and multiple agents. Four experimental configurations vary the inclination of the environment sensor and the architecture of the artificial neural networks (CNN vs. MLP). Evaluation is conducted via convergence analysis and task-specific performance metrics for agent success. Initially, the most successful agent in each configuration is identified and subjected to test experiments; subsequently, results are compared across configurations. The experiments demonstrate that the employed algorithm, under at least one configuration, successfully masters both sub-tasks in an integrated solution. An inclined sensor orientation negatively impacts learning performance, whereas CNN architectures significantly enhance results compared to MLP networks. Thus, the research question is answered affirmatively, and the feasibility of a navigation solution meeting the specified requirements is demonstrated.
automated indoor exploration, mapping, factory planning, reinforcement learning
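As a rough illustration of the two network families the paper compares, a PyTorch sketch follows; the layer sizes and input format are assumptions for illustration, not the architectures evaluated in the paper.

```python
import torch.nn as nn

class MLPPolicy(nn.Module):
    """Fully connected baseline operating on a flattened observation."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class CNNPolicy(nn.Module):
    """Convolutional variant that preserves the spatial structure of the sensor image."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(          # expects (B, 1, H, W) depth images
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_actions)    # infers flattened size on first call

    def forward(self, x):
        return self.head(self.features(x))
```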
Although factory planning is widely recognized as a way to significantly enhance manufacturing productivity, the associated costs in terms of time and money can be prohibitive. In this paper, we present a solution to this challenge through the development of a Software-in-the-loop (SITL) framework that leverages an Unmanned Aircraft System (UAS) in an autonomous capacity. The framework incorporates simulated sensors, a UAS, and a virtual factory environment. Moreover, we propose a Deep Reinforcement Learning (DRL) agent that is capable of collision avoidance and exploration using the Dueling Double Deep Q-Network (3DQN) with prioritized experience replay.
Artificial Intelligence, reinforcement learning, Unmanned Aircraft Systems
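The abstract names the Dueling Double Deep Q-Network (3DQN) with prioritized experience replay. The following PyTorch sketch shows the two defining ingredients, the dueling Q-value aggregation and the double-DQN target; everything beyond the named algorithm (layer sizes, hyperparameters) is an assumption for illustration.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)              # state value V(s)
        self.advantage = nn.Linear(128, n_actions)  # advantages A(s, a)

    def forward(self, x):
        h = self.trunk(x)
        a = self.advantage(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

@torch.no_grad()
def double_dqn_target(online, target, reward, next_obs, done, gamma=0.99):
    """Double DQN: the online net selects the next action, the target net
    evaluates it, which reduces Q-value overestimation."""
    best = online(next_obs).argmax(dim=1, keepdim=True)
    q_next = target(next_obs).gather(1, best).squeeze(1)
    return reward + gamma * (1.0 - done) * q_next

# With prioritized experience replay, the absolute TD error |target - Q(s, a)|
# would additionally serve as each transition's sampling priority.
```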
Factory planning can significantly increase manufacturing productivity, but the process is expensive in terms of both cost and time. In this paper, we propose an Unmanned Aerial Vehicle (UAV) framework that accelerates this process and decreases the costs. The framework consists of a UAV equipped with an IMU, a camera and a LiDAR sensor in order to navigate and explore unknown indoor environments. It is therefore independent of GNSS and relies solely on on-board sensors. The acquired data should enable a DRL agent to perform autonomous decision making based on a reinforcement learning approach. We propose a simulation of this framework, including several training and testing environments, to be used for developing a DRL agent.
drone, UAS, deep reinforcement learning
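A training and testing environment of the kind the paper proposes could be skeletoned with the Gymnasium API as sketched below; the observation and action spaces shown are illustrative assumptions, not the published framework.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class IndoorUAVEnv(gym.Env):
    """Simulated indoor UAV with IMU, camera and LiDAR observations, no GNSS."""

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Dict({
            "imu":    spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32),
            "camera": spaces.Box(0, 255, shape=(64, 64, 1), dtype=np.uint8),
            "lidar":  spaces.Box(0.0, 30.0, shape=(360,), dtype=np.float32),
        })
        self.action_space = spaces.Discrete(6)  # assumed: +/-x, +/-y, +/-z motion

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self._observe(), {}

    def step(self, action):
        # A real implementation would update the flight dynamics here and
        # check for collisions against the factory geometry.
        reward = 0.0        # e.g. newly explored area minus a collision penalty
        terminated = False  # collision or complete coverage
        return self._observe(), reward, terminated, False, {}

    def _observe(self):
        return {
            "imu":    np.zeros(6, dtype=np.float32),
            "camera": np.zeros((64, 64, 1), dtype=np.uint8),
            "lidar":  np.full(360, 30.0, dtype=np.float32),
        }
```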
In Germany, demand for commercial drones is forecast to increase by 200% by 2025. As the use of drones increases, so does the danger they pose. This article describes a research project that aims to develop an acoustic operational monitoring system to improve the safety of critical components.
UAS, drones, operational monitoring
Unmanned aerial systems have changed the industry dramatically. The rapidly advancing technological development of so-called Unmanned Aircraft Systems (UAS) makes it necessary to address the design of future operational scenarios at an early stage.
UAS, Drones, Navigation