EU Funded Projects
- “Methods to Refine the Self-Localization of Planetary Rovers Using Orbital Imaging, ESA NPI 289-2013”, European Space Agency, ESA/ESTEC.
- “Autonomous Vehicle Emergency Recovery Tool (AVERT), FP7-SEC-2011-1-285092”, European Commission – Information & Communication Technologies (ICT).
- “Sparing Robotics Technologies for Autonomous Navigation (SPARTAN), E913-00MM”, European Space Agency, ESA/ESTEC.
- “Innovative and Novel First Responders Applications (INFRA), FP7-ICT-SEC-2007-1-225272”, European Commission – Information & Communication Technologies (ICT).
- “Autonomous Collaborative Robots to Swing and Work in Everyday EnviRonment (ACROBOTER), FP6-IST-2006-045530”, European Commission – Information Society Technologies (IST).
- “Vision and Chemiresistor Equipped Web-connected Finding Robots (VIEW-FINDER), FP6-IST-2006-045541”, European Commission – Information Society Technologies (IST).
- “Improvement of the Emergency Risk Management through Secure Mobile Mechatronic Support to Bomb Disposal (RESCUER), FP6-IST-511492”, European Commission – Information Society Technologies (IST).
Within the international Mars Sample Return mission, a rover will be tasked to fetch a cache of soil samples left behind by an earlier mission. Following the ExoMars mission, ESA has identified this particular rover as a possible European contribution. The rover should travel about 10 km in a rather short time and, in order to attain this goal, it must be able to localise itself on the global scale. The proposed research subject deals with a possible implementation of this capability. For Mars in particular, the HiRISE instrument, on board the NASA Mars Reconnaissance Orbiter, has provided high-resolution imagery (~1 m resolution) for a significant part of the planet. Moreover, a successful outcome of this research activity can have direct terrestrial applications, as it will offer a GPS-independent approach for real-time absolute localisation in non-urban, unstructured areas.
So far, the only available rover localisation methods are relative, with respect to a previous rover location, and not absolute, with respect to specific coordinates on the planet. This research work will investigate methods for the absolute localisation of a rover on a planetary surface by combining the stereo images obtained by the rover while traversing with high-resolution images from orbit. The targeted localisation accuracy is equal to the resolution of the orbital images.
This research aims to study and develop algorithms that combine SLAM techniques with spatial and elevation information coming from orbital images. Most previous attempts to improve localisation using orbital imagery concerned robots operating in structured urban environments, by extracting the prominent patterns of the area (e.g. edge detection on orbital images depicting buildings and roads). This provided favourable conditions, as urban environments offer an abundance of canonical formations. Space scenes lack such canonical formations and, as a result, different, less texture-dependent methods will be investigated. Therefore, two different approaches will be considered depending on the available orbital information:
In case only orbital images are available:
• Extraction of “space-specific, non-structured” salient characteristics (local features, custom patterns) from the orbital images.
• Extraction of the corresponding characteristics from the ground rover’s on-board sensors.
In case only orbital digital elevation maps (DEMs) are available:
• Extraction of 3D morphologically prominent formations from the DEMs.
• Extraction of the corresponding formations using the ground rover’s sensors for environment reconstruction.
The algorithms will also be assessed against the computational resources they require. To this end, two different operational scenarios will be baselined and investigated independently for the absolute localisation scheme:
Onboard approach: The ground station will provide periodically (e.g. daily) pre-processed data stemming from orbital images. Algorithms will be investigated that run onboard and combine this data with the local imagery collected by the rover.
On-ground approach: The refinement of the localisation will be done on-ground, using selected stereo imagery downlinked by the rover during each communication window.
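The DEM-based variant of this scheme can be illustrated with a minimal sketch: slide a small, rover-derived elevation patch over an orbital DEM and score every position with zero-mean normalised cross-correlation. The function names, array conventions and exhaustive search below are illustrative assumptions, not the project's actual algorithm:

```python
import numpy as np

def normalized_cross_correlation(patch: np.ndarray, window: np.ndarray) -> float:
    """Zero-mean normalised cross-correlation between two equal-sized arrays."""
    p = patch - patch.mean()
    w = window - window.mean()
    denom = np.sqrt((p ** 2).sum() * (w ** 2).sum())
    return float((p * w).sum() / denom) if denom > 0 else 0.0

def locate_patch(orbital_dem: np.ndarray, rover_dem: np.ndarray) -> tuple:
    """Slide the rover-derived DEM patch over the orbital DEM and return
    the (row, col) offset with the highest correlation score."""
    ph, pw = rover_dem.shape
    H, W = orbital_dem.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(H - ph + 1):
        for c in range(W - pw + 1):
            score = normalized_cross_correlation(
                rover_dem, orbital_dem[r:r + ph, c:c + pw])
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```

In practice the search would be restricted to the uncertainty region of the relative-localisation estimate and made robust to resolution and viewpoint differences between the rover-derived and orbital DEMs.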
In Europe, terrorism threatens horrific loss of life, extensive disruption to city transport and damage to
commercial real estate. Vehicles provide an ideal delivery mechanism because they can be meticulously
prepared well in advance of deployment and then brought in to the Area of Operations. Furthermore, a real and
present danger comes from the threat of Chemical, Radiological, Biological and Nuclear (CRBN) contamination.
Current methods of bomb disruption and neutralisation are hindered in the event that the device is shielded,
blocked or for whatever reason cannot be accessed for examination.
The Autonomous Vehicle Emergency Recovery Tool (AVERT) shall provide a unique capability to Police and
Armed Services to rapidly deploy, extract and remove both blocking and suspect vehicles from vulnerable
positions such as enclosed infrastructure spaces, tunnels, low bridges as well as under-building and
underground car parks. Vehicles can be removed from confined spaces with delicate handling, swiftly and in
any direction to a safer disposal point to reduce or eliminate collateral damage to infrastructure and personnel.
AVERT shall be commanded remotely and shall operate autonomously under its own power and sensor
awareness, as a critical tool alongside existing technologies, thereby enhancing bomb disposal response speed.
The exploration of Mars is one of the main goals for both NASA and ESA, as confirmed by past and recent activities as well as future plans. The fact that Mars is the most accessible and most Earth-like planet of our solar system makes the red planet the favourite target of space exploration. The last 15 years have set the sequence for the exploration of Mars as follows: 1) identify interesting scientific and landing sites from orbit, 2) explore and search for water on the ground, and 3) investigate possible human habitability conditions. Both orbital and surface missions have achieved remarkable results. While multiple and valuable investigations can be made at the surface of Mars, there is a clear consensus within the scientific community that the major scientific objectives of Martian exploration can only be achieved with the return of a sample to Earth. Bringing Martian samples back to Earth would allow intensive, varied and detailed analysis of the collected samples, even years after their return. The MSR scenario, as discussed at the international level between NASA, ESA, CSA and JAXA within iMARS, would include two flight elements: an Orbiter and a Lander. The Orbiter and the Lander, launched separately to Mars, would work together to return at least a single Mars sample container back to Earth. After entering the Martian atmosphere, the Lander platform, featuring both a Sample Fetching Rover (SFR) and a Mars Ascent Vehicle (MAV), would perform a soft landing on the Martian surface. The SFR will collect samples from the surface/subsurface, or pick up cached samples from a previous mission, and return them to the MAV.
In both scenarios, emphasis is given to a reasonable mobility of such a rover, which must be at least in the range of future precision-landing ellipse dimensions (<10 km) in the case of the SFR collecting cached samples, or even up to 20 km in the scenario where the SFR has to do the sampling itself. In line with the above considerations and the requirements posed by ESA, the objectives of the SPARTAN activity are:
1) The reduction of the overall budgets required by the SFR navigation function while improving its performance (i.e. accuracy of terrain reconstruction, probability of finding paths), so as to make the system compatible with the requirements of a device with long-traverse-range capability.
2) The implementation of the developed computer vision algorithms (3D Reconstruction, Visual Odometry, and Visual SLAM) for rover navigation, using custom-designed vectorial processing (by means of FPGAs).
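As a toy illustration of the visual-odometry idea (estimating inter-frame camera motion), a pure-translation displacement between two frames can be recovered with phase correlation. This numpy sketch is only a conceptual stand-in, not the project's FPGA-based implementation, and all names in it are assumptions:

```python
import numpy as np

def phase_correlation(frame_a: np.ndarray, frame_b: np.ndarray) -> tuple:
    """Estimate the integer-pixel translation taking frame_a to frame_b
    via the Fourier shift theorem (phase correlation)."""
    # Cross-power spectrum, normalised so only the phase difference remains.
    cross = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    cross /= np.abs(cross) + 1e-12
    # The inverse FFT peaks at the displacement.
    peak = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
    # Wrap offsets larger than half the image size into negative shifts.
    h, w = frame_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Chaining such frame-to-frame estimates accumulates the traversed path, which is the essence of visual odometry; a real pipeline additionally handles rotation, scale and sub-pixel motion.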
The objective of project INFRA is to research and develop novel technologies for personal digital support systems as part of an integral, secure emergency management system to support First Responders (FR) in crises occurring in Critical Infrastructures (CI) under all circumstances. Project INFRA will focus on innovation at two levels:
A. Create an open, standards based interoperability layer that will allow:
• Broadband access for high-bandwidth applications (e.g. live video)
• Autonomous wireless broadband in underground tunnels and concrete buildings – a severe problem in CI sites such as subway tunnels, which are targeted by terrorists
• Full voice and data communication interoperability between all FR teams, their command posts and the CI site control centre
• Full interoperability of FR applications in use by the FR teams
B. Provide practical and useful novel applications for FR teams, all integrated within the open interoperability layer:
• Thermal imaging applications
• Video annotation
• Advanced fibre optic sensors
• Indoor navigation system
Both the communications interoperability layer and the FR applications in INFRA are novel and go well beyond the current state of the art of the technology in use by FR teams. Although FR forces are quite fragmented and localised, achieving standardisation of broadband applications for FR is of importance to all of Europe, as it will allow a significant cost reduction of FR equipment as well as cross-region and cross-border cooperation between FR units. Similarly, there is no standardisation of CI sites, so FR teams cannot rely on a standardised environment common to all CI sites; this situation is typical in Europe and globally. Project INFRA will provide a major step towards a standard, seamless, effective and efficient FR environment, which will ensure interoperability with the CI control centre, save lives and reduce the financial damage of catastrophic events in CI sites.
The project aims to develop a radically new robot locomotion technology that can effectively be used in a home and/or workplace environment for manipulating small objects autonomously or in close cooperation with humans. Furthermore, the robot could assist human occupants of the room by following spoken directions, or by offering assistance with their movements or exercises. This new type of mobile robot will be designed to move fast and in any direction in an indoor environment. The main challenge is to navigate around any kind of obstacle, such as stairs, doorsteps, the edges of carpets and various other everyday objects that can be found in a room, in a generalised way. Additionally, the workspace of the robot will be extended in the vertical direction and will thus be significantly larger than that of the current generation of service robots. For example, the robot will be able to operate on top of tables and wardrobes, and it could be used for manipulating objects placed on shelves, tables, work surfaces or on the floor.
In the event of an emergency due to a fire or other crisis, a necessary but time-consuming prerequisite, which could delay the real rescue operation, is to establish whether the ground can be entered safely by human emergency workers. The objective of the VIEW-FINDER project is to develop robots whose primary task is to gather data. The robots are equipped with sensors that detect the presence of chemicals while, in parallel, image data is collected and forwarded to an advanced base station. The robots will carry a wide array of chemical sensors, on-board TV/IR cameras, LADAR and other sensors to enhance scene understanding and reconstruction. At the base station the data is processed and combined with geographical information originating from a web of sources, thus providing the personnel leading the operation with in-situ processed data that can improve decision making. The information may also be forwarded to other forces involved in the operation (e.g. fire fighters, rescue workers, police, etc.). Besides the task-specific sensors above, conventional sensors will be used to support navigation. The robots will be designed to navigate individually or cooperatively and to follow high-level instructions from the base station. The robots are off-the-shelf units, consisting of wheeled robots for the common fire ground and robotic caterpillars for specialised situations. The robots connect wirelessly to the base station and to each other, using a wireless self-organising network of mobile communication nodes (made up of other robots acting as communication routers and bridges) which adapts to the terrain. The robots are intended as the first explorers of the area, as well as in-situ supporters acting as safeguards to human personnel. The base station collects in-situ data and combines it with information retrieved from the large-scale GMES information bases.
It will be equipped with a sophisticated human interface to display the processed information to the human operators and the operation command. The project aims to provide proof-of-concept solutions, to be evaluated by a board of expert end-users who can verify that operational needs are addressed. Project workshops will be organised with the aim of further disseminating and exploiting all results.
Summary of objectives:
1. Inspection of fire or crisis grounds, and detection of chemicals and toxins
2. Map building and scene reconstruction
3. Interfacing and fusing local command information and external information sources
4. Human interface, integrating information search and robot control
5. Autonomous robot navigation and multi-robot cooperation
6. Human-robot cooperation and interaction
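The map-building objective can be hinted at with a minimal log-odds occupancy-grid update for a single range-sensor return; the function name, grid convention and log-odds constants below are illustrative assumptions, not the project's implementation:

```python
import numpy as np

def update_grid(grid: np.ndarray, robot: tuple, hit: tuple,
                l_occ: float = 0.85, l_free: float = -0.4) -> np.ndarray:
    """Update a log-odds occupancy grid with one range-sensor return.
    Cells along the ray from `robot` to `hit` are pushed towards 'free'
    and the endpoint towards 'occupied' (simple ray stepping, not full
    Bresenham). Positive values mean likely occupied, negative likely free."""
    (r0, c0), (r1, c1) = robot, hit
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(steps):
        r = r0 + round(i * (r1 - r0) / steps)
        c = c0 + round(i * (c1 - c0) / steps)
        grid[r, c] += l_free
    grid[r1, c1] += l_occ
    return grid
```

Fusing many such updates from several robots, each localised in a shared frame, is one standard route to the cooperative map building the project describes.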
The RESCUER project focuses on the development of user-friendly intelligent mechatronic support for Improvised Explosive Device Disposal and Explosive Ordnance Disposal. RESCUER also provides an adequate and fast solution for rescue operations, such as the search for survivors in collapsed buildings and landslides after man-made or natural disasters.
RESCUER is an intelligent mechatronic system capable of achieving given goals under conditions of uncertainty. In contrast to existing automated bomb-disarming systems, which are by definition pre-programmed to deliver given behaviour and are therefore predictable, RESCUER may arrive at specified goals in an unpredictable manner. This is possible due to RESCUER’s improved flexibility, dexterity and intelligence, comparable to those of a human rescue specialist. Flexibility means the capability of responding to frequent changes in the environment without being re-configured. Dexterity refers to enhanced perception and manipulation capabilities that have never before been used in explosive, chemical or biological threat disposal or in humanitarian rescue operations, while intelligence means the ability of RESCUER to identify the risk and decide on proper action.
RESCUER is endowed with flexibility, which means it is capable of responding to frequent changes in the environment. This qualitative difference of RESCUER’s behaviour from that of existing systems is the result of separating the domain knowledge from the mechanism dedicated to problem solving.
National Funded Projects
- “Hellenic Civil Unmanned Air Vehicle – HCUAV”, funding body: GSRT (General Secretariat for Research and Technology), SYNERGASIA programme, EYD-ETAK 11SYN9 629.
- “Development and Implementation of New Pattern Recognition Algorithms Based on Biologically Inspired Models and Intelligent Systems”, funding body: GSRT, PENED programme.
- “Development of new techniques for recognition and categorization”, Greece–Slovenia Joint Research and Technology Programmes.
The main project goal is the design and construction of a high-performance civil UAV, appropriately equipped for long day-and-night surveillance/patrol operations across segments of the Greek borders and over forests. Innovative hardware components together with clever software algorithms will be included for the performance optimisation of the flight, control, surveillance and data-collection systems, to provide a low-cost, high-endurance, long-range and intelligent unmanned aerial surveillance system. A Ground Control Station (GCS) with appropriate instrumentation for controlling, monitoring and post-processing the real-time collected information will be linked to the UAV. The UAV will be designed for missions primarily focused on:
• Broad-area surveillance: 24/7 patrol over segments of the national borders.
• Forest-region surveillance: 24/7 patrol operations.
Furthermore, the adopted design requirements will aim at extending the use of the UAV towards atmospheric data collection for cloud formation, aerosol, pollution/air quality measurements and weather forecast initialization data.
The project aims to create the right conditions for the digital press clipping of newspapers and documents. To date, press clipping has been carried out by experienced staff, who study the press daily and select the articles of interest to their clients. The selected documents are then digitised as images and stored in databases. The indexing of these documents is currently also performed by the experienced staff, and no commercially available automated system exists in this field. Automating this process requires solving a series of other problems, listed here in order of priority:
1. Automatic stitching of the newspaper segments that were split to fit into the scanner, so that the pages can be digitised.
2. Automation of image segmentation to facilitate the character-recognition process; this includes the selection of appropriate thresholds, noise-removal filters, feature enhancement, etc.
3. Application of fast, robust character-recognition algorithms.
4. Recognition of keywords in text that has undergone optical character recognition, so that the document can be indexed.
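The threshold-selection part of the image-segmentation step (binarising a scanned page before character recognition) is commonly handled with Otsu's method, sketched below; this is a generic illustration, not the project's actual implementation:

```python
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    """Return the grey level (0-255) that maximises the between-class
    variance (Otsu's method), a standard automatic threshold for
    separating ink from paper in scanned documents."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # mean below t
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # mean above t
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels below the returned threshold would be treated as text and the rest as background before the recognition stage.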
Nowadays, a large number of applications, including military, industrial, medical and civilian ones, generate, and will continue to generate, gigabytes of colour images per day. As a result, there is a huge amount of information that cannot be accessed or made use of unless it is organised: appropriate indexing must be available to allow efficient browsing, searching and retrieving, as in keyword searches of text databases. The common way to search is query by example: the user presents an image to the system, and the system searches for similar ones by extracting features from the query image and comparing them to those stored in the database. The extraction of meaningful features is critical in content-based image retrieval (CBIR) and is therefore an open and active field of research. The features usually employed by researchers are colour, texture and shape. The main objective of this project is to design and implement a fast and robust image retrieval system that relies mainly on colour enriched with spatial information, since the experimental results of colour-based methods have proved much more encouraging than those of methods using texture or shape. However, image retrieval is very application dependent, so the performance of methods using the features mentioned above lies in the eye of the beholder.
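A minimal query-by-example skeleton along these lines: represent each image by a quantised joint RGB histogram and rank database images by histogram intersection with the query. The bin count and function names are illustrative assumptions, and the project's actual descriptor also encodes spatial information, which this sketch omits:

```python
import numpy as np

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Joint RGB histogram (bins**3 values), L1-normalised.
    `image` is an (H, W, 3) uint8 array."""
    quantised = (image.astype(int) * bins) // 256           # 0 .. bins-1 per channel
    codes = (quantised[..., 0] * bins + quantised[..., 1]) * bins + quantised[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Similarity in [0, 1]; 1.0 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())
```

A query would then be answered by computing `colour_histogram` for the query image and sorting the database by `histogram_intersection` in descending order.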