The Laboratory of Robotics and Automation has long-standing experience in collaborative projects at the European and national levels, with several renowned industrial and academic partners across Europe. Moreover, it plays a central role in promoting and advancing robotics, automation and Industry 4.0 concepts at the Greek and European levels.
European-funded projects
“Novelty Or Anomaly HUNTER (NOAH)”, ESA AO8627.
“Cloud-based Simulation of Page Curling from Straight Images and Correction using Neural Networks (CURLO)”, 3rd Party to Fortissimo 2 H2020 Project.
“Methods to Refine the Self-Localization of Planetary Rovers Using Orbital Imaging, ESA NPI 289-2013”, European Space Agency, ESA/ESTEC.
So far the only available rover localisation methods are relative, with respect to a previous rover location, and not absolute with respect to specific coordinates on the planet. This research work will investigate
methods for absolute localisation of a rover on a planetary surface by combining the stereo images obtained by the rover while traversing with high resolution images from orbit. The targeted localisation accuracy is equal to the resolution of the orbital images.
This research aims to study and develop algorithms that combine SLAM techniques with spatial and elevation information coming from orbital images. Most previous attempts to improve localisation using orbital imagery concerned robots operating in structured urban environments, extracting the prominent patterns of the area (e.g. edge detection on orbital images depicting buildings and roads). This provided favourable conditions, as urban environments offer an abundance of canonical formations. Space scenes lack such canonical formations and, as a result, different, less texture-dependent methods will be investigated. Therefore, two different approaches will be considered, depending on the available orbital information:
In case only orbital images are available:
Extraction of “space specific – non structured” salient characteristics (local features, custom patterns) on the orbital images.
Extraction of corresponding characteristics from the ground rover’s on-board sensors.
In case only orbital digital elevation maps (DEMs) are available:
Extraction of 3D morphologically prominent formations from the DEMs.
Extraction of the corresponding formations using the ground rover’s sensors for environment reconstruction.
The algorithms will also be correlated with the computational resources they require. To this end, two different operational scenarios will be baselined and investigated independently for the absolute localisation scheme:
Onboard approach: The ground station will provide periodically (e.g. daily) pre-processed data stemming from orbital images. Algorithms will be investigated that run onboard and combine this data with the local imagery collected by the rover.
On-ground approach: The refinement of the localisation will be done on-ground using selected stereo imagery downloaded by the rover on each communication window.
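As a rough illustration of the image-only approach above (not the project's actual algorithms), a rover-derived terrain patch can be located inside a larger orbital map by exhaustive normalised cross-correlation; all data below is synthetic and deliberately tiny:

```python
# Minimal sketch, assuming the rover can render a small top-down patch of
# its surroundings and the orbital map is a 2D grid of intensities.
# Exhaustive normalised cross-correlation (NCC) finds the best match.

def ncc(patch, window):
    """Normalised cross-correlation between two equally sized 2D grids."""
    n = len(patch) * len(patch[0])
    pm = sum(sum(r) for r in patch) / n
    wm = sum(sum(r) for r in window) / n
    num = den_p = den_w = 0.0
    for pr, wr in zip(patch, window):
        for p, w in zip(pr, wr):
            num += (p - pm) * (w - wm)
            den_p += (p - pm) ** 2
            den_w += (w - wm) ** 2
    den = (den_p * den_w) ** 0.5
    return num / den if den else 0.0

def locate(orbital, patch):
    """Return (row, col) of the best-matching window in the orbital map."""
    ph, pw = len(patch), len(patch[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(orbital) - ph + 1):
        for c in range(len(orbital[0]) - pw + 1):
            window = [row[c:c + pw] for row in orbital[r:r + ph]]
            score = ncc(patch, window)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic 6x6 orbital map with a distinctive diagonal feature at (3, 2).
orbital = [[0] * 6 for _ in range(6)]
orbital[3][2] = 9
orbital[4][3] = 9
patch = [[9, 0], [0, 9]]  # the same feature as reconstructed by the rover
print(locate(orbital, patch))  # → (3, 2)
```

In practice the matching would use salient local features rather than raw correlation, as the project description notes, but the principle of anchoring rover data to absolute orbital coordinates is the same.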
“Autonomous Vehicle Emergency Recovery Tool (AVERT), FP7-SEC-2011-1-285092”, European Commission – Information & Communication Technologies (ICT).
“Sparing Robotics Technologies for Autonomous Navigation (SPARTAN), E913-00MM”, European Space Agency, ESA/ESTEC.
1) to identify from orbit interesting scientific and landing sites,
2) to explore/search for water on the ground, and
3) to investigate possible human habitability conditions.
Both on-orbit and surface missions have achieved remarkable results. While multiple valuable investigations can be made at the surface of Mars, there is a clear consensus within the scientific community that the major scientific objectives of Martian exploration can only be achieved with the return of a sample to Earth. Bringing Martian samples back to Earth would allow intensive, varied and detailed analysis of the collected samples, even years after their return. The MSR scenario, as discussed at the international level between NASA, ESA, CSA and JAXA within iMARS, would include two flight elements: an Orbiter and a Lander. The Orbiter and the Lander, launched separately to Mars, would work together to return at least a single Mars sample container back to Earth. After entering the Martian atmosphere, the Lander platform, featuring both a Sample Fetching Rover (SFR) and a Mars Ascent Vehicle (MAV), would perform a soft landing on the Martian surface. The SFR will collect samples from the surface/subsurface, or pick up cached samples from a previous mission, and return them to the MAV. In both scenarios, emphasis is placed on a reasonable mobility range for such a rover, which must be at least in the range of future precision-landing ellipse dimensions (< 10 km) if the SFR collects cached samples, or even up to 20 km in the scenario where the SFR has to do the sampling itself. In line with the above considerations and the requirements posed by ESA, the objectives of the SPARTAN activity are:
1) The reduction of the overall budgets required by the SFR navigation function while improving its performance (i.e. accuracy of terrain reconstruction, probability of finding paths), so as to make the system compatible with the requirements of a long-traverse-range capability.
2) The implementation of the developed computer vision algorithms (3D Reconstruction, Visual Odometry, and Visual SLAM) for rover navigation, using custom-designed vectorial processing (by means of FPGAs).
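As a minimal illustration of what the visual odometry mentioned above computes (a sketch with synthetic motion increments, not SPARTAN's FPGA implementation): each stereo frame pair yields a relative motion estimate in the rover's body frame, and these increments are composed onto a global SE(2) pose.

```python
import math

def compose(pose, delta):
    """Compose a body-frame motion increment (dx, dy, dtheta) onto a
    global SE(2) pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the body-frame increment into the world frame, then add.
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
# Synthetic odometry: drive 1 m forward four times, turning 90 degrees
# after each leg, i.e. a closed square loop.
for _ in range(4):
    pose = compose(pose, (1.0, 0.0, math.pi / 2))
print(abs(round(pose[0], 6)), abs(round(pose[1], 6)))  # → 0.0 0.0
```

Because each increment carries estimation error, real VO drifts over long traverses, which is exactly why the absolute-localisation refinements pursued in the related ESA NPI activity above are needed.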
“Innovative and Novel First Responders Applications (INFRA), FP7-ICT-SEC-2007-1-225272”, European Commission – Information & Communication Technologies (ICT).
A. Create an open, standards based interoperability layer that will allow:
• Broadband access for high bandwidth applications (i.e. live video)
• Autonomous wireless broadband in underground tunnels and concrete buildings – a severe problem in CI sites such as Subway tunnels, targeted by terrorists.
• Full voice and data communication interoperability between all FR teams, their command posts and the CI site control centre
• Full interoperability of FR applications in use by the FR teams
B. Provide practical and useful novel applications for FR teams, all integrated within the open interoperability layer:
• Thermal imaging applications
• Video annotation
• Advanced fibre optic sensors
• Indoor navigation system
Both the communications interoperability layer and the FR applications in INFRA are novel and go well beyond the current state of the art for the technology in use by FR teams. Although FR forces are quite fragmented and localised, achieving standardisation of broadband applications for FR is important to all of Europe, as it will allow a significant cost reduction in FR equipment and cross-region and cross-border cooperation between FR units. In a similar manner, there is no standardisation of CI sites, so FR teams cannot rely on a standardised environment common to all CI sites. This situation is typical in Europe and globally. Project INFRA will provide a major step towards a standard, seamless, effective and efficient FR environment, which will ensure interoperability with the CI control centre, save lives and reduce the financial damage of catastrophic events at CI sites.
“Autonomous Collaborative Robots to Swing and Work in Everyday EnviRonment (ACROBOTER), FP6-IST-2006-045530”, European Commission – Information Society Technologies (IST).
The project aims to develop a radically new robot locomotion technology that can be used effectively in home and/or office environments for manipulating small objects, autonomously or in close cooperation with humans. This new type of mobile robot will be designed to move fast in any 3D direction in an interior environment. The main challenge is to overcome, in a generalised way, any kind of obstacle, such as stairs, doorsteps, chairs, tables, shelves, carpet edges and the various other everyday objects found in a room. Moreover, the robot's workspace will be extended (compared to currently available service robots) in the vertical direction: for example, the robot may have to operate on top of tables and wardrobes, and may be used to manipulate objects placed on shelves as well as on the floor.
“Vision and Chemiresistor Equipped Web-connected Finding Robots (VIEW-FINDER), FP6-IST-2006-045541”, European Commission – Information Society Technologies (IST).
In the event of an emergency due to a fire or other crisis, a necessary but time-consuming prerequisite, which could delay the real rescue operation, is to establish whether the ground can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots whose primary task is to gather data. The robots are equipped with sensors that detect the presence of chemicals and, in parallel, image data is collected and forwarded to an advanced base station. The robots will be equipped with a wide array of chemical sensors, on-board TV/IR cameras, LADAR and other sensors to enhance scene understanding and reconstruction. At the base station the data is processed and combined with geographical information originating from a web of sources, thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). Besides the task-specific sensors above, conventional sensors will be used to support navigation. The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots for the common fire ground and robotic caterpillars for specialised situations. The robots connect wirelessly to the base station and to each other, using a wireless self-organising network of mobile communication nodes (made up of other robots acting as communication routers and bridges) which adapts to the terrain. The robots are intended as the first explorers of the area, as well as in-situ supporters acting as safeguards for human personnel. The base station collects in-situ data and combines it with information retrieved from the large-scale GMES information bases.
It will be equipped with a sophisticated human interface to display the processed information to the human operators and operation command. The project aims to provide proof-of-concept solutions, to be evaluated by a board of expert end-users who can verify that operational needs are addressed. Project workshops will be organised with the aim of further disseminating and exploiting all results.
Summary of objectives:
1. Inspection of fire or crisis grounds and chemicals and toxin detection
2. Map building and scene reconstruction
3. Interfacing and fusing local command information and external information sources
4. Human Interface, integrating information search and robot control
5. Autonomous robot navigation and multi robot cooperation
6. Human-Robot cooperation and interaction.
“Improvement of the Emergency Risk Management through Secure Mobile Mechatronic Support to Bomb Disposal (RESCUER), FP6-IST-511492”, European Commission – Information Society Technologies (IST).
RESCUER is an intelligent mechatronic system capable of achieving given goals under conditions of uncertainty. In contrast to existing automated bomb-disarming systems, which are by definition pre-programmed to deliver a given behaviour and are therefore predictable, RESCUER may arrive at specified goals in an unpredictable manner. This is possible due to RESCUER’s improved flexibility, dexterity and intelligence, comparable to those of a human rescue specialist. Flexibility means the capability of responding to frequent changes in the environment without being re-configured. Dexterity refers to enhanced perception and manipulation capabilities never before used in explosive, chemical or biological threat disposal or in humanitarian rescue operations, while intelligence means the ability of RESCUER to identify risk and decide on the proper action.
RESCUER is endowed with flexibility, meaning it is capable of responding to frequent changes in the environment. This qualitative difference from existing systems is the result of separating the domain knowledge from the mechanism dedicated to problem solving.
Nationally funded projects
“Wearable systems for the safety and wellbeing of security guards” (SafeIT), Τ2ΕΔΚ-01862, ΕΥΔΕ-ΕΤΑΚ.
The goal of the project is to develop systems and applications that will receive and process data from smart wearables, recording biometric data for health and vital-signs monitoring, positioning, presence and shift start/end logging, direct notification, and other required services such as indications of danger or attack. These devices will connect automatically and wirelessly to additional equipment (ambient sensors and positioning beacons), and their data will be combined with camera data. Augmented/Mixed Reality systems will also be developed which, beyond simply recording and tracking signals, will help incident-management-centre operators manage data more effectively and will support decision-making by the end user (the guard).
Due to the variety of data to be transmitted and stored, a combination of modern signal processing and machine learning techniques is needed, through the creation of a knowledge base. The latter will be shaped by the data collected per event. Pioneering security techniques will also be used, combining multi-parametric authentication, such as the use of biometric elements (something the user is) with equipment components (something the user has), which will be physically resistant to cloning, that is to say they will exhibit structural features that make them unique. Some of the most important issues that will be addressed in the project are data transmission security and the use of blockchain technology for secure data storage (protection against unauthorized data alteration, PUF-assisted encryption), as well as the use of security technologies to protect personal data (use of nicknames and aliases, PUF reciprocal authentication).
In the strand of augmented reality, the technological innovation the project introduces is multidimensional. More specifically, this project is the first national effort to develop an integrated augmented/mixed reality platform that acts on both the field and the control centre. The development of AR interfaces for managing information in the field of security services is an innovation of the project. The development of ergonomic interfaces based on the geometric representation of the area of interest, combined with the use of maps in an augmented reality environment, is another innovation the project aspires to introduce. Finally, all applications that will be deployed will respect the privacy of users by complying with the requirements of innovative and academically recognized methodologies, while ensuring all the technical and organizational requirements set by the new General Data Protection Regulation (EU) 2016/679 (GDPR) are met.
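A minimal, hypothetical sketch of the danger-indication logic such a wearable platform might run (field names, thresholds and rules below are invented for illustration, not SafeIT's actual design):

```python
# Illustrative only: flag alerts when a guard's wearable reports a heart
# rate outside a baseline band, or when the device signals a fall.
# "heart_rate" and "fall_detected" are assumed field names.

def check_vitals(sample, baseline_hr=(50, 110)):
    """Return a list of alert strings for one wearable sample."""
    alerts = []
    lo, hi = baseline_hr
    if not lo <= sample["heart_rate"] <= hi:
        alerts.append("abnormal heart rate")
    if sample.get("fall_detected"):
        alerts.append("possible fall")
    return alerts

print(check_vitals({"heart_rate": 135, "fall_detected": False}))
# → ['abnormal heart rate']
```

In the actual system, such rules would be replaced by the machine-learning models and knowledge base described above, but the alert-routing idea is the same.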
“Strategies and zero-defect technologies for the effective implementation of autonomous quality control procedures at dairy food industries” (Flawless), Τ2ΕΔΚ-01658, ΕΥΔΕ-ΕΤΑΚ.
At the core of FLAWLESS is the development of stand-alone solutions and strategies for the prediction, prognosis, identification and correction of errors, which, combined with smart sensing networks for monitoring and extraction of quality data, will implement self-contained quality control and zero-defect manufacturing applications. Through these solutions, the main goal is to increase the efficiency and sustainability of high-quality product manufacturing processes, by investing in knowledge-intensive technologies and zero-defect strategies as an important pillar of optimization and support for flexible production lines.
The technological objectives of the project include:
1) Study and implementation of smart conventional and optical measurement sensors (using machine vision and image-processing techniques) for the acquisition of quality data from critical locations and production equipment, and for the identification of defects and defective products.
2) Development of an Artificial Intelligence system based on Machine Learning for fault prediction, which will feed the decision-making system and will be connected with both field-level communication systems and higher-level fault inspection and management systems.
3) Development of an effective Decision Support System (DSS) to improve operations and maintenance, coupled with reliable and timely updates to prevent the occurrence and propagation of defects and faults.
4) Development of data driven system models based on resource, energy and materials consumption combined with advanced optimization techniques to minimize defective products and long-term operating costs while maintaining high quality products.
5) Installation, demonstration and evaluation of the integrated FLAWLESS system in the operational environment of two dairy industries (ELGAL and KOUKAKIS).
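A deliberately simple sketch of the zero-defect idea behind objective 1 (values, units and limits below are invented, not FLAWLESS's actual models): learn k-sigma control limits from in-spec production data and flag measurements that fall outside them.

```python
# Illustrative statistical quality check: k-sigma control limits
# estimated from historical in-spec measurements, then applied to a
# new batch to flag out-of-specification products.
from statistics import mean, stdev

def control_limits(history, k=3.0):
    """Classic k-sigma control limits from in-spec historical data."""
    m, s = mean(history), stdev(history)
    return m - k * s, m + k * s

def inspect(readings, limits):
    """Return the indices of readings outside the control limits."""
    lo, hi = limits
    return [i for i, v in enumerate(readings) if not lo <= v <= hi]

# Hypothetical quality measurement (e.g. fat content, %), in-spec history:
in_spec = [3.50, 3.52, 3.49, 3.51, 3.48, 3.50, 3.51, 3.49]
limits = control_limits(in_spec)
batch = [3.50, 3.51, 3.90, 3.49]  # one clearly out-of-spec item
print(inspect(batch, limits))  # → [2]
```

FLAWLESS's actual pipeline replaces such fixed limits with machine-learning predictors, but the control-limit view is the baseline those models improve on.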
FLAWLESS aspires to become a prototype project through the synergy of cutting-edge technology and research efforts with the needs of the real economy. The main expected impacts of the project are increased competitiveness of the involved industries and increased operating efficiency, due to reduced equipment failure rates, less downtime for repairs and fewer unplanned production-line stoppages, and, mainly, the minimization of out-of-specification products.
Additional Benefits to Dairy Industries include:
* Continuous update about the condition of the equipment at the stages of processing (sterilization, homogenization, etc.), packaging and cooling to meet the quality profile.
* Minimize production of out-of-specification products.
* Minimize equipment faults and their propagation on the production lines.
* Delivery of the products in the required time and quantity as requested by the customer orders.
* Improve production performance by 20% while reducing product cost.
* Reduction of total production costs by 15%.
Finally, FLAWLESS will contribute to the market by creating 48 new jobs over a six-year horizon.
“Autonomous robotic unmanned aerial vehicle system for navigation in inaccessible interior spaces and human detection” (MIDRES), Τ2ΕΔΚ-00592, ΕΥΔΕ-ΕΤΑΚ.
• The development of the UAV, mainly targeting its miniature size and capability of immediate intervention
• The UAV platform selection after a suitability study amongst candidate configurations (fixed wing, helicopter, multicopter, flapping wing, and combinations of the above) so that the mission requirements are met
• The real-time transmission of footage from the optical and thermal cameras carried by the UAV, and the processing of this footage through Artificial Neural Network algorithms for the location and identification of people
• The development of Robotic Vision software, using the cameras on the UAV, for the digital 3D reconstruction of its field of navigation and environment perception in real time
• The control of the UAS based on a hierarchically structured modular architecture, consisting of flight/navigation control systems and data collection/process systems. An appropriate combination of the navigation controller with the 3D space reconstruction will offer the UAV the ability to navigate in GPS denied environments
• Compatibility of the UAS with the principles of the developing regulatory framework of EASA
• Conforming to EU regulations for personal data protection (GDPR)
• Design of the UAS to be a commercially competitive product
The ability of a UAV to operate in indoor environments is desirable for multiple missions, especially search and rescue. Such missions call for the efficient location of people in closed spaces, thus posing an essential design goal for the proposed UAS. The design of the UAV will be conducted using a methodology based on the three phases of aeronautical design, employing tools of computational fluid dynamics, structural analysis, power management and experiments. The control system will consist of the navigation-control and flight-control levels. The environment perception and vision system will provide 3D space awareness and autonomous navigation capability without using GPS, as well as the identification of people through Artificial Neural Network software. The UAV will be controlled through a PCS, through which the operator will have real-time control via an FPV system.
The participating companies will benefit from gaining expertise in fields of their research and commercial interest as well as in new technological fields, broadening their business horizons and enhancing their presence in the UAS market. AUTH and DUTH will primarily benefit from developing and transferring expertise to engineering students on miniature autonomous UAVs, robotic vision, 3D space reconstruction and control algorithms. Apart from the services it can offer to rescue missions, the system can potentially operate in applications of industrial facility monitoring, remote surveillance and indoor space mapping.
Promotion of the project and the product under development will consist of publicity and dissemination actions in scientific journals and at national and international exhibitions. The project will be executed by institutions that have successfully collaborated in the past (AUTH-DUTH), an SA (ALTUS LSA) trading in UASs, and the Greek Ministry of Defense.
“Traditional Musical Instruments Room” (TraMIR), Τ2ΕΔΚ-04800, ΕΥΔΕ-ΕΤΑΚ.
The proposed project will create a “Music Room” for discovering, getting in touch with, learning about and interacting with Greek folk music instruments, realized in 3D. The visitor of this room will be able to hold the music instruments while wearing virtual reality (VR) glasses and discover, through a digital map, the geographical areas these instruments represent. She will receive information on the music instruments, their performance combinations (two or more instruments playing together) and their music. Through quizzes she will have an enhanced interactive and gaming experience. Since Greek traditional music is an integral part of Greek traditional dances, we propose an experimental application of motion detection, giving the visitor the chance to try the steps of some traditional dances connected to the instruments showcased. Motion detection will be realized through contemporary deep-learning techniques and neural networks. Eventually, the platform will give the visitor the chance to be introduced to traditional music through a live, fresh and evolving experience, leaving behind the usual static and old-fashioned experiences connected with Greek folk music. The Folk and History Museum of Xanthi (Progressive Association of Xanthi) will host the platform. As a cultural organization with a strong background in managing and promoting traditional culture, community and education, it supports the project’s intention to approach Greek traditional music with a modern perspective. The proposed platform will stand as an important tool for promoting cultural heritage, for cultural education across a wide age range of audiences, for more efficient teaching of the music course at schools, and for enhancement of the tourist experience.
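The project's motion detection is to be realized with deep-learning techniques; as a much simpler baseline illustrating the underlying signal, the sketch below (with tiny synthetic frames) flags motion by thresholding the mean absolute difference between consecutive grayscale frames:

```python
# Illustrative frame-differencing motion detector, not the project's
# neural-network approach. Frames are tiny synthetic grayscale grids;
# the threshold value is an invented example.

def motion_score(prev, curr):
    """Mean absolute pixel difference between two frames."""
    n = len(prev) * len(prev[0])
    return sum(abs(a - b) for pr, cr in zip(prev, curr)
               for a, b in zip(pr, cr)) / n

def detect_motion(frames, threshold=5.0):
    """Indices of frames whose difference from the previous one exceeds
    the threshold."""
    return [i for i in range(1, len(frames))
            if motion_score(frames[i - 1], frames[i]) > threshold]

still = [[10] * 4 for _ in range(4)]
moved = [[10] * 4 for _ in range(3)] + [[200, 200, 10, 10]]  # a step occurs
frames = [still, still, moved, moved]
print(detect_motion(frames))  # → [2]
```

A pose-estimation network would go further, localising the dancer's joints so that individual dance steps can be recognised rather than just detected.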
“Combination of conventional and machine vision sensing and failure mode prediction models, for optimal risk management and increased operating life of production assets, in the Factory of the Future” (PREDICT), Τ1ΕΔΚ-02433, ΕΥΔΕ-ΕΤΑΚ.
1) Interfacing with conventional sensors and ultra-high-speed cameras to collect and process production equipment data from the field, in order to feed real-time prediction and failure-detection models.
2) Design of machine learning models that accurately predict the failure timeline and the estimated remaining equipment life, detecting current or evolving failures.
3) Risk and failure management according to IEC60812 standard, by analyzing their occurrence mechanism and determining their criticality and impact.
4) Automated decision support (DSS) to assess equipment performance and accurately predict and diagnose failures and fatigue, combined with innovative strategies to PREDICT, DIAGNOSE, PREVENT, MANAGE, REMEDIATE and SYNCHRONIZE.
5) Interfacing with Enterprise Resource Planning (ERP) and Manufacturing Execution Systems (MES) for optimal synchronization of maintenance work with production requirements and planning.
6) Evaluation (a) of effectiveness and reliability, (b) user acceptance and (c) impact of the integrated PREDICT system in the business environment of 2 industries: Loulis Mills (the largest grinding company in the Balkans, listed in Stock Market) and KEBE (the largest and most modern ceramics factory in Europe).
7) Creation of a Business Plan and a plan for International Commercialization, and design and implementation of strategies for managing the produced innovation.
8) Actions to support and boost the produced innovation, including preparation for the submission of at least one international patent.
9) Dissemination and communication of PREDICT results to the international scientific and business community. The main expected impacts of PREDICT’s operation in industry (and its wider impact on the Greek and European economies) include: i) 50% reduction in equipment downtime, ii) 10% failure reduction, iii) 55% reduction in costs resulting from equipment breakdowns, iv) 20% reduction in maintenance cost and v) 24.6% reduction in production cost. With regard to commercial prospects, the PREDICT consortium targets the Greek and European market of EAM (Enterprise Asset Management), an estimated market size of €580 million. The initial 4-year plan (after the end of the project) estimates a cumulative sales volume of €29.27 million, cumulative profits of €11.17 million and an RoI (Return on Investment) of 9.77.
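A toy sketch of the failure-timeline idea in objective 2 (data, units and the failure threshold are invented; PREDICT's actual models are machine-learning based): fit a linear degradation trend to a monitored condition indicator and extrapolate the time at which it crosses a failure threshold, giving a remaining-useful-life estimate.

```python
# Illustrative remaining-useful-life (RUL) estimate from a linearly
# degrading condition indicator (e.g. vibration amplitude in mm/s).
# All numbers are synthetic examples.

def fit_line(ts, ys):
    """Ordinary least squares for y = a + b*t; returns (a, b)."""
    n = len(ts)
    tm, ym = sum(ts) / n, sum(ys) / n
    b = (sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
         / sum((t - tm) ** 2 for t in ts))
    return ym - b * tm, b

def remaining_life(ts, ys, threshold):
    """Hours until the fitted trend reaches the failure threshold."""
    a, b = fit_line(ts, ys)
    assert b > 0, "indicator must be trending towards the threshold"
    return (threshold - a) / b - ts[-1]

hours = [0, 100, 200, 300, 400]
vibration = [1.0, 1.2, 1.4, 1.6, 1.8]  # mm/s, degrading linearly
print(round(remaining_life(hours, vibration, 3.0), 3))  # → 600.0
```

Real equipment rarely degrades linearly, which is why the project pairs such trend models with machine-learning predictors and FMEA-style criticality analysis (IEC 60812).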
“Multirole portable UAS (MPU), Τ1ΕΔΚ-00737”, ΕΥΔΕ-ΕΤΑΚ.
• The UAV will support hybrid flight: it will cruise and loiter as a fixed-wing aerial vehicle, but will also be able to take off and land vertically (VTOL).
• The UAV will be an open-architecture platform, allowing for the integration of multiple types of payload equipment, the installation of different lightweight optical sensors (cameras), as well as live-streaming equipment.
• The UAS will consist of a lightweight UAV and a PGS, based on a tablet that will serve as the system-vehicle interface.
• The UAS control system will combine state-of-the-art flight control and mission planning systems, as well as data collection and processing and perception of environment (robotic vision) algorithms and hardware.
• The UAS will be developed according to EASA regulations, ensuring its airworthiness
• The UAS will utilize the GNSS GALILEO European system.
• The UAS development will target a market-competitive product.
When it comes to small/medium-scale operations, UASs have proved their worth, due to their multiple advantages and low-cost operation in various missions such as aerial mapping and photography, crop monitoring, etc. Furthermore, modern UAVs play an important part in humanitarian and landmark mapping/digital recreation missions. In order to design the aerial platform (UAV), a combined sizing, computational and experimental methodology will be employed, based on the three phases of aircraft design. The control system will include both flight-control and mission-planning levels. The resulting UAV and PGS system will be fully compliant with the GNSS GALILEO standards and communication protocols. A robotic vision and environment-perception system will provide altitude control, obstacle detection and landing-terrain analysis, thus allowing for a completely automated operation. The UAS will be developed to carry out mainly photogrammetry, crop monitoring, and search and rescue missions. Its performance in the corresponding flight profiles will be evaluated during the flight tests, which will also include a demo photogrammetry application. The project will be 36 months long and will be split into 7 Work Packages with 17 Deliverables. The companies will gain valuable know-how in their research and market areas, expand their technological background, and strengthen their position in the UAS market as well as in GNSS GALILEO technology. AUTH and DUTH will have the chance to transfer and exchange know-how in the fields of lightweight UASs, robotic vision and flight-control algorithms with students of their Engineering Departments. Being the final and main deliverable, the UAS will support various fields of application in an easy-to-use and rapidly deployable manner.
The above, combined with the integration of the novel GNSS GALILEO system, set a promising ground for immediate success in both the Greek and international markets. It should also be noted that the UAS will have the potential to aid in emergency situations directly linked to society’s needs. Dissemination actions and marketing campaigns will be carried out to present the product under development to society and the market, through conference and journal papers and national and international exposition events. The consortium is made up of partners with a history of previous successful collaboration (AUTH-DUTH-MLS) and a well-established position in portable UASs (GEOSENSE).
“Hellenic Civil Unmanned Air Vehicle – HCUAV”, funding body: GSRT (ΓΓΕΤ), SYNERGASIA (ΣΥΝΕΡΓΑΣΙΑ) programme, ΕΥΔΕ-ΕΤΑΚ 11ΣΥΝ9 629.
• Broad area surveillance, on a 24h/7d basis patrol over segments of National borders.
• Forest regions surveillance, on a 24h/7d basis patrol operation.
Furthermore, the adopted design requirements will aim at extending the use of the UAV towards atmospheric data collection for cloud formation, aerosol, pollution/air quality measurements and weather forecast initialization data.
“Development and Implementation of New Pattern Recognition Algorithms Based on Biologically Inspired Models and Intelligent Systems”, funding body: GSRT (ΓΓΕΤ), PENED (ΠΕΝΕΔ) programme.
1. Automatic stitching of newspaper segments, which were split so as to fit the scanner, so that the newspapers can be digitised.
2. Automation of image segmentation, to facilitate the character recognition process. This automation includes the selection of suitable thresholds, filters for noise removal and feature enhancement, etc.
3. Application of fast, robust algorithms to character recognition.
4. Recognition of keywords in a text on which optical character recognition has been performed, so that the text can be indexed.
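One of the objectives above is selecting suitable thresholds for binarising scanned newspaper pages before character recognition. Otsu's method is a standard automatic choice: pick the threshold that maximises the between-class variance of the grey-level histogram. A minimal sketch on a toy 8-bit "image" (the pixel values are invented):

```python
# Otsu's automatic thresholding on a flat list of 8-bit pixel values.
# Illustrative sketch; a real OCR pipeline would also denoise and
# enhance features, as the objective above notes.

def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # background (dark-ink) weight
        w1 = total - w0             # foreground (paper) weight
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(i * hist[i] for i in range(t)) / w0
        m1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy page: dark ink (~30) on bright paper (~220).
pixels = [30, 32, 28, 31, 220, 218, 222, 219, 30, 221]
t = otsu_threshold(pixels)
binary = [1 if p >= t else 0 for p in pixels]
print(binary)  # → [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
```

On a clearly bimodal histogram like this one, any threshold between the two clusters works; Otsu's method finds such a threshold without manual tuning, which is what "selection of suitable thresholds" requires at scale.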
“Development of new techniques for recognition and categorization”, Greece-Slovenia, Joint Research and Technology Programmes (PI from the Slovenian side Prof. Ales Leonardis).
Over the years, we have engaged in collaborative research partnerships with government agencies and leading academic institutions. These partnerships benefit from shared goals and resources, and have resulted in significant technological advances and innovative solutions.