In the Laboratory of Robotics and Automation we conduct and promote research on application problems that arise in industrial and service robotics, as well as in industrial automation.
We utilize state-of-the-art tools to advance the scientific and technological frontier in research areas including robotics, artificial vision, intelligent systems, and pattern recognition, and we seek ways to integrate them seamlessly with other Industry 4.0 technologies.
- Semantic Mapping
- Modular, Self-Configuring, Plug-and-Work Systems
- Place Recognition
- Aerial Robotics Systems
- Cyber-physical production systems

Semantic Mapping

Modular, Self-Configuring, Plug-and-Work Systems
Industry is a competitive field that constantly seeks to adopt technological advances to its gain. Assembly lines, the heart of the manufacturing process, have attracted the interest of scientists and engineers since the dawn of automation, with a particular focus on optimizing the line to increase productivity, reduce costs, and meet product delivery deadlines. Today, industry faces a new challenge in combining and adopting current technological trends such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), and Artificial Intelligence (AI). In this context, our research revisits the problem of scheduling and sequencing in manufacturing lines, building on recent breakthroughs in Reinforcement Learning and decentralized Multi-Agent Systems.
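To make the reinforcement-learning angle concrete, the sketch below applies tabular Q-learning to a toy single-machine sequencing problem, where the reward penalizes tardiness against due dates. The job durations, due dates, and hyper-parameters are hypothetical placeholders for illustration, not data or methods from our production-line work.

```python
# Tabular Q-learning for a toy job-sequencing problem: the agent learns
# an order for N jobs on one machine that minimizes total tardiness.
import random
from collections import defaultdict

DURATIONS = [4, 2, 6, 3]   # hypothetical processing times
DUE_DATES = [5, 6, 14, 8]  # hypothetical due dates
N = len(DURATIONS)

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2
Q = defaultdict(float)  # keyed by (bitmask of scheduled jobs, next job)

def step(mask, elapsed, job):
    """Schedule `job` next; return the new state and a tardiness penalty."""
    finish = elapsed + DURATIONS[job]
    reward = -max(0, finish - DUE_DATES[job])
    return mask | (1 << job), finish, reward

for episode in range(5000):
    mask, elapsed = 0, 0
    while mask != (1 << N) - 1:
        pending = [j for j in range(N) if not mask & (1 << j)]
        # Epsilon-greedy choice of the next job to schedule.
        if random.random() < EPSILON:
            job = random.choice(pending)
        else:
            job = max(pending, key=lambda j: Q[(mask, j)])
        new_mask, new_elapsed, reward = step(mask, elapsed, job)
        remaining = [j for j in range(N) if not new_mask & (1 << j)]
        best_next = max((Q[(new_mask, j)] for j in remaining), default=0.0)
        Q[(mask, job)] += ALPHA * (reward + GAMMA * best_next - Q[(mask, job)])
        mask, elapsed = new_mask, new_elapsed

# Greedy rollout of the learned policy.
mask, elapsed, order = 0, 0, []
while mask != (1 << N) - 1:
    pending = [j for j in range(N) if not mask & (1 << j)]
    job = max(pending, key=lambda j: Q[(mask, j)])
    order.append(job)
    mask, elapsed, _ = step(mask, elapsed, job)
print("learned sequence:", order)
```

In a multi-agent, decentralized setting each station would hold its own value estimates, but the single-agent tabular case above is enough to show the state, action, and reward structure of the problem.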

Place Recognition
A conditio sine qua non for any mobile robotic platform is its ability to navigate in an unknown area. To that end, incremental structure-from-motion techniques have been implemented in real time, formulating the problem of Simultaneous Localization and Mapping (SLAM). A standard visual-SLAM architecture relies on optical sensors as its primary source of information. Owing to its demanding nature and the attention it has received, SLAM has given rise to many individual challenges. One of the most important tools for improving the output of such an architecture is visual Place Recognition (vPR). Typically, the localization engine of a SLAM system can estimate the relative transformation between a few temporally close poses, but it has no inherent mechanism to associate the currently acquired measurements with those obtained in the past. Thus, alongside the main SLAM functionality, vPR systems are employed to identify revisited regions of the executed trajectory and to provide supplementary information about the measurements' arrangement in 3D space, improving the overall output.
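As a minimal illustration of this loop-closure idea, assume each frame has already been summarized as a descriptor histogram (such as the BoVW vectors discussed next); a vPR module can then flag revisited places by comparing the current frame against sufficiently old ones. The similarity threshold and temporal gap below are illustrative values, not tuned parameters from our systems.

```python
# Loop-closure candidate retrieval: compare the query frame's histogram
# against past frames by cosine similarity, skipping recent frames so
# that trivially adjacent poses are not reported as "revisits".
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def loop_candidates(history, query, min_gap=50, threshold=0.8):
    """Return indices of past frames similar enough to the query frame."""
    old = history[: max(0, len(history) - min_gap)]
    return [i for i, h in enumerate(old) if cosine(h, query) >= threshold]
```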
The motivation behind our research has been the prospect of taking full advantage of this notion and constructing a vPR scheme with a hierarchical information architecture that identifies visual similarities between the received camera measurements. Our methods are therefore based on local keypoints (since they are widely used in SLAM) and their efficient Bag-of-Visual-Words (BoVW) representation, which serve as the fundamental building blocks for the rest of the system. This mechanism is gradually extended to more complex entities capable of characterizing entire trajectory regions with a single vector and of adding geometric information to the image descriptions. The developed vPR approaches are fully applicable to any mobile robotic platform, while complying with the requirements of a state-of-the-art SLAM architecture and the computational limitations of modern low-power hardware.
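A toy version of the BoVW building block might look as follows, using ORB keypoints and OpenCV's k-means: descriptors from training images are clustered into a visual vocabulary, and each new image becomes a normalized histogram over those visual words. The vocabulary size is a placeholder, and running Euclidean k-means on binary ORB descriptors is a common simplification for illustration, not our exact pipeline.

```python
# Bag-of-Visual-Words from ORB keypoints: build a vocabulary by k-means
# clustering, then describe each image as a histogram of visual words.
import cv2
import numpy as np

orb = cv2.ORB_create()

def descriptors(path):
    """Extract ORB descriptors from one image, cast to float32 for k-means."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(image, None)
    return desc.astype(np.float32)

def build_vocabulary(paths, k=256):
    """Cluster descriptors from the training images into k visual words."""
    data = np.vstack([descriptors(p) for p in paths])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(data, k, None, criteria, 3,
                               cv2.KMEANS_PP_CENTERS)
    return centers

def bovw_histogram(path, vocabulary):
    """Describe one image as a normalized histogram over the vocabulary."""
    desc = descriptors(path)
    # Assign each descriptor to its nearest visual word (Euclidean distance).
    words = np.argmin(np.linalg.norm(
        desc[:, None, :] - vocabulary[None, :, :], axis=2), axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float32)
    return hist / max(hist.sum(), 1.0)
```

Histograms produced this way are exactly the per-frame vectors that the loop-closure retrieval sketched earlier compares.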

Aerial Robotics Systems
The role of Unmanned Aerial Vehicles (UAVs) in everyday activities has become increasingly prominent in recent years. They can be employed in a range of sectors, including inspection and monitoring, surveying and mapping, and precision agriculture, to name a few. In this project, the primary objective has been the integration of a fully autonomous aerial rescue-support system, capable of detecting, locating, and rescuing humans in peril during crisis events. The UAV performs and navigates completely autonomously, guided solely by data provided by the potential victim's wearable equipment. The system aims to provide critical, multifaceted support in Search and Rescue (SAR) operations by significantly reducing response time and backing up first responders. The implementation covers both software and hardware: an Android app for transmitting and receiving the distress signal of a person in peril, algorithms for fully autonomous UAV flight, several Global Positioning System (GPS) methods for precisely positioning both the UAV and the distressed person, and an embedded vision system for accurate real-time human detection and aid delivery. Furthermore, to identify hazardous emergent behaviors in SAR missions with UAVs, a System-Theoretic Process Analysis (STPA) has been applied to two different operational modes of the system. The novelty of the proposed system lies in the combination of GPS and deep-learning techniques.
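For illustration, one GPS-guided step of such a system reduces to standard geodesy: given the UAV's own fix and the coordinates received from the victim's wearable, the great-circle distance (haversine formula) and the initial bearing to fly follow directly. The coordinates in the example below are hypothetical, and a real flight controller would of course layer obstacle avoidance and mission logic on top of this.

```python
# Distance and initial bearing from the UAV's GPS fix to the victim's
# reported position, using the haversine and forward-azimuth formulas.
import math

EARTH_RADIUS_M = 6_371_000

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (meters, degrees clockwise from north) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine great-circle distance.
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing (forward azimuth), normalized to [0, 360) degrees.
    y = math.sin(dlmb) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return distance, bearing

# Hypothetical fixes: UAV position and the victim's wearable signal.
print(distance_and_bearing(40.6300, 22.9500, 40.6345, 22.9558))
```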

Cyber-physical production systems
The laboratory also focuses on the development of Industry 4.0 solutions. Its highly educated members use various technologies related to the fourth industrial revolution, e.g., big data, augmented reality, machine learning, machine vision, additive manufacturing, and cloud computing, aiming to build cyber-physical systems able to increase industry's profitability. Furthermore, the laboratory aims to act as a catalyst in the adoption of innovative systems across industry, helping companies worldwide reduce production costs. It does so by assessing a company's technology maturity level and preparing its equipment to be harmonized with Industry 4.0 technologies.