Home


In the Laboratory of Robotics and Automation we perform and promote research on application problems that arise in the field of robotics for industrial and service applications, as well as in industrial automation.

We utilize state-of-the-art tools to expand the scientific and technological frontier in research areas including robotics, artificial vision, intelligent systems, and pattern recognition, and we seek ways to integrate them seamlessly with other Industry 4.0 technologies.


Semantic Mapping

The main purpose is to create an integrated system that builds semantic maps by jointly processing satellite data and images captured in real time while the robot is moving. The proposed integrated semantic map aims to fully localize the vehicle / robot in real-world space with or without the use of the Global Positioning System (GPS), while simultaneously interpreting the surrounding space. Most importantly, given the semantic map, any vehicle / robot movement enriches the semantic information / entities on the map. In addition, localizing the robot’s position and orientation on a map built from satellite images, together with knowledge of the semantic area or entity in which the robot is located, enables it to predict the next semantic region / entity recorded on that map. When entities are missing from the primary semantic map, the robot updates it as new semantic entities appear. As a result, a new robot (autonomous vehicle) following the same route as the original starts with a semantically enhanced map of the area and, through cloud services, can itself contribute to updating the map.
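The observe/predict/update cycle described above can be sketched in miniature. Everything below — the region names, the graph structure, and the API — is illustrative and not part of the actual system, which operates on satellite imagery rather than hand-built graphs:

```python
# Hypothetical sketch of the semantic-map cycle described above: the map is a
# directed graph of semantic regions along a route, built offline (e.g. from
# satellite imagery) and enriched online as the robot observes new entities.
class SemanticMap:
    def __init__(self):
        self.next_region = {}   # region -> region expected next on the route
        self.entities = {}      # region -> set of semantic entities

    def add_transition(self, region, following):
        # Record that `following` comes after `region` on the traversed route.
        self.next_region[region] = following
        self.entities.setdefault(region, set())
        self.entities.setdefault(following, set())

    def predict_next(self, current):
        # Knowing the current semantic region, predict the next one (or None).
        return self.next_region.get(current)

    def observe(self, region, entity):
        # Entities absent from the primary map enrich it in place.
        known = self.entities.setdefault(region, set())
        is_new = entity not in known
        known.add(entity)
        return is_new
```

A second robot loading this map would start with every previously recorded transition and entity, and its own observations would feed back through `observe`.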

Modular, Self-Configuring, Plug-and-Work Systems

Industry is a competitive field that constantly seeks to adopt technological advances to its benefit. Assembly lines, the heart of the manufacturing process, have attracted the interest of scientists and engineers since the era of automation, with particular focus on optimizing the assembly line in order to increase productivity, reduce costs, and meet product delivery deadlines. Nowadays, industry faces a new challenge in combining and adopting current technological trends, such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), and Artificial Intelligence (AI). In this context, this research proposal examines the problem of scheduling and sequencing in manufacturing lines once more, but oriented towards recent breakthroughs in Reinforcement Learning and decentralized Multi-Agent Systems.
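As a toy illustration of the direction — not the proposal’s actual method — tabular Q-learning can learn a dispatching order for jobs on a single machine. The jobs, durations, and reward shaping below are all invented for the example:

```python
import random

# Toy sketch only: tabular Q-learning for sequencing three jobs on one machine.
# State  = frozenset of remaining job IDs (elapsed time is implied by it);
# action = next job to dispatch;
# reward = minus the completion time of the dispatched job, so the return is
# minus the total completion time (minimized by shortest-job-first).
DURATIONS = {0: 3, 1: 1, 2: 2}     # invented job durations
TOTAL = sum(DURATIONS.values())

def train(episodes=3000, alpha=0.2, gamma=1.0, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        remaining = frozenset(DURATIONS)
        while remaining:
            acts = sorted(remaining)
            if rng.random() < eps:                     # epsilon-greedy explore
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda j: Q.get((remaining, j), 0.0))
            # Completion time of job `a` is fully determined by the state.
            elapsed = TOTAL - sum(DURATIONS[j] for j in remaining) + DURATIONS[a]
            nxt = remaining - {a}
            best_next = max((Q.get((nxt, j), 0.0) for j in nxt), default=0.0)
            old = Q.get((remaining, a), 0.0)
            Q[(remaining, a)] = old + alpha * (-elapsed + gamma * best_next - old)
            remaining = nxt
    return Q

def greedy_sequence(Q):
    # Read out the learned dispatching policy.
    remaining, seq = frozenset(DURATIONS), []
    while remaining:
        a = max(sorted(remaining), key=lambda j: Q.get((remaining, j), 0.0))
        seq.append(a)
        remaining = remaining - {a}
    return seq
```

For these durations the learned policy recovers the classical shortest-processing-time order; the research itself targets far richer settings, with multiple decentralized agents negotiating the schedule.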

Place Recognition

A conditio sine qua non for any mobile robotic platform is its ability to navigate in an unknown area. To that end, incremental structure-from-motion techniques have been implemented in real time, formulating the problem of Simultaneous Localization and Mapping (SLAM). A standard visual-SLAM architecture utilizes optical sensors as its primary information mechanism. Owing to its demanding nature and the attention it has received, the problem of SLAM has given rise to many individual challenges. One of the most important tools for improving the output of such an architecture is visual Place Recognition (vPR). In the typical case, the localization engine of a SLAM system can estimate the relative transformation between a few close-in-time poses, but it has no inherent mechanism to associate the currently acquired measurements with those obtained in the past. Thus, in conjunction with the main SLAM functionality, vPR systems are employed to identify revisited regions of the executed trajectory and provide supplementary information about the measurements’ arrangement in 3D space, improving the final output.

The motivation behind our research has been the prospect of taking full advantage of the above notion and constructing a vPR scheme with a hierarchical information architecture in order to identify visual similarities between the received camera measurements. Our methods are therefore based on local keypoints (since they are widely used in SLAM) and their efficient Bag-of-Visual-Words (BoVW) representation, which are treated as the fundamental building blocks of the rest of the system. This mechanism is gradually extended to more complex entities capable of characterizing entire trajectory regions with a single vector and adding geometric information to the images’ description. The developed vPR approaches are fully applicable to any mobile robotic platform, while complying with the requirements of a state-of-the-art SLAM architecture and the computational limitations of modern low-power hardware units.
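To make the BoVW idea concrete, here is a minimal, self-contained sketch. The vocabulary size, word assignments, and similarity threshold are illustrative; a real pipeline would extract local keypoint descriptors from each frame and quantize them against a trained visual vocabulary:

```python
from collections import Counter
from math import sqrt

# Hypothetical sketch: each image is reduced to the visual-word IDs of its
# local keypoints (the vocabulary is assumed pre-trained, e.g. via k-means).
def bovw_histogram(word_ids, vocab_size):
    # Normalized histogram of visual-word occurrences: one vector per image.
    counts = Counter(word_ids)
    total = sum(counts.values()) or 1
    return [counts.get(w, 0) / total for w in range(vocab_size)]

def cosine(a, b):
    # Cosine similarity between two histograms, in [0, 1] here.
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def is_loop_closure(query_words, map_words, vocab_size=8, threshold=0.8):
    # Flag a revisited place when the two descriptions are similar enough.
    q = bovw_histogram(query_words, vocab_size)
    m = bovw_histogram(map_words, vocab_size)
    return cosine(q, m) >= threshold
```

A loop-closure candidate flagged this way would then be verified geometrically (e.g. by matching the underlying keypoints) before feeding the SLAM back end.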

Aerial Robotics Systems

The role of Unmanned Aerial Vehicles (UAVs) in everyday activities has become increasingly prominent in recent years. They can be utilized in a number of sectors, including inspection and monitoring, surveying and mapping, and precision agriculture, to name a few. In this project in particular, the primary objective has been the integration of a fully autonomous aerial rescue support system capable of detecting, locating, and rescuing humans in peril during crisis events. The UAV navigates completely autonomously, guided solely by data provided by the potential victim’s wearable equipment. The system aims to provide critical and multifaceted support in Search and Rescue (SAR) operations by significantly reducing the response time and backing up first responders. Implementation details covering both software and hardware include an Android app for transmitting and receiving the distress signal of a human in peril, suitable algorithms for fully autonomous UAV flight, several Global Positioning System (GPS) methods for the precise positioning of both the UAV and the distressed human, and finally an embedded vision system for precise real-time human detection and aid delivery. Furthermore, in order to identify hazardous emergent behaviors in SAR missions with UAVs, a System Theoretic Process Analysis (STPA) has been applied to two different operational modes of the system. The novelty of the proposed system lies in the combination of GPS and deep learning techniques.
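One small, standard ingredient of such GPS-based positioning is the great-circle distance between the UAV’s fix and the victim’s fix, which the haversine formula provides. This is a generic textbook computation, not the project’s specific code:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in meters between two GPS fixes (degrees),
    assuming a spherical Earth of mean radius r."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))
```

At SAR scales (hundreds of meters) the spherical-Earth error is negligible, which is why this formula is a common first estimate before more precise positioning kicks in.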