Acroboter Project


  The project aims to develop a radically new robot locomotion technology that can be used effectively in home and workplace environments for manipulating small objects autonomously or in close cooperation with humans. Furthermore, the robot could assist human occupants of a room by following spoken directions or by offering assistance with their movements or exercises. This new type of mobile robot will be designed to move quickly and in any direction in an indoor environment. The main challenge is to navigate, in a generalized way, around obstacles such as stairs, doorsteps, carpet edges and the various other everyday objects found in a room. Additionally, the workspace of the robot will be extended in the vertical direction and will thus be significantly larger than that of the current generation of service robots. For example, the robot will be able to operate on top of tables and wardrobes, and it could be used to manipulate objects placed on shelves, tables, work surfaces or the floor.
The vision system of the project is responsible for several tasks: 3D reconstruction of the working space, object recognition, and pose estimation of the swinging platform.

The object recognition task is divided into four phases. The first phase can be regarded as the training session of the system: each object is stored as an image in a large database, which contains images of several objects captured under varying illumination and geometric conditions. Importantly, the database is constructed while the system is offline, so this phase is not time-critical. In the second phase, SIFT's matching sub-procedure is used to find features common to a scene and an object; specifically, the SIFT features shared by the scene and the object's image are stored for further use in the following phases. In the third phase, SIFT is extended to estimate the full pose of a recognized object. First, the object's distance from the camera is calculated using geometric data obtained during the training session (first phase). The information extracted in this phase allows the robot to manipulate an object or to avoid colliding with it. In the last phase, an online search engine is constructed. It provides a user-friendly interface that allows users to search a scene for a specific trained object (one already in the database), and it also reports vital spatial data (orientation) for a recognized object.
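The second and third phases can be illustrated with a minimal sketch. This is not the project's actual implementation: the function names are hypothetical, the descriptors are treated as plain vectors (as SIFT descriptors are, once extracted), matching uses Lowe's standard ratio test, and the distance estimate assumes a simple pinhole-camera model with a known object width from the training session.

```python
import numpy as np

def ratio_test_matches(obj_desc, scene_desc, ratio=0.8):
    """Phase 2 sketch: match object descriptors against scene descriptors.

    obj_desc: (N, D) array of descriptors from the trained object image.
    scene_desc: (M, D) array of descriptors from the scene.
    Returns (object_index, scene_index) pairs whose best match is clearly
    better than the second-best (Lowe's ratio test).
    """
    matches = []
    for i, d in enumerate(obj_desc):
        dists = np.linalg.norm(scene_desc - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

def estimate_distance(focal_px, known_width_m, observed_width_px):
    """Phase 3 sketch: pinhole-model distance from the camera (metres),
    given the object's real width recorded during training and its
    apparent width in the scene image."""
    return focal_px * known_width_m / observed_width_px
```

For example, an object 0.2 m wide that appears 160 px wide under a focal length of 800 px would be estimated at 1.0 m from the camera. A real system would use the matched keypoints' scales and positions rather than a single measured width, but the geometric idea is the same.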
