A key area in robotics research is the development of social robots that assist people in everyday tasks. Now that robots are moving into public places and homes, they must interact with people in an intuitive manner while carrying out other tasks. To achieve this intuitive interaction, the robot should be able to perceive the real world in a way similar to how people do, extracting from the sensed data a set of internal representations that are consistent with human representations.
In this talk, we will focus on the study of the robot's visual perception system. The central part of this system is an attention mechanism, which must discriminate, from all the information provided by the robot's sensors, the most relevant elements needed to carry out the currently executed tasks. Typically, this is done by generating a map for each sensed feature, which contains high values for interesting regions and lower values elsewhere. In our case, the feature maps are also weighted by the currently executed tasks. Thus, the importance of a sensed feature depends not only on its own value in the corresponding map, but also on the tasks to be carried out. The perception system allows the robot to detect distinctive visual landmarks in an initially unknown environment, generating a hierarchical map which fuses these landmarks into a high-level topological map, and to detect and capture the upper-body motion of people interested in interacting with the robot, providing information about who the person is and what gesture the person is performing. Both mid-level sources of information allow the robot to perceive the surrounding environment, and they are acquired and updated at different rates depending on the sequence of executed tasks.
However, to be useful, mobile cognitive social robots need more than these internal representations: for a social robot, it is also crucial that its internal representations are consistent with human representations. An on-line learning process will be conducted to incorporate new knowledge into the internal representations.
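The task-weighted combination of feature maps described above can be sketched as follows. This is a minimal illustration, not the speaker's actual implementation: the function name `task_weighted_saliency`, the feature names, and the weights are all hypothetical, chosen only to show how per-feature maps and task-dependent weights might combine into a single saliency map.

```python
import numpy as np

def task_weighted_saliency(feature_maps, task_weights):
    """Combine per-feature maps into one saliency map.

    feature_maps: dict of name -> 2-D array in [0, 1]; high values
                  mark regions that are interesting for that feature.
    task_weights: dict of name -> float; importance of each feature
                  for the currently executed task.
    """
    total = sum(task_weights[name] for name in feature_maps)
    # Weighted average: a feature matters both through its map value
    # and through the weight the current task assigns to it.
    return sum((task_weights[name] / total) * fmap
               for name, fmap in feature_maps.items())

# Hypothetical example: a "color" map and a "motion" map on a 2x2 grid.
color = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
motion = np.array([[0.0, 0.0],
                   [0.0, 1.0]])
# A person-tracking task would weight motion more heavily than color,
# so the moving region dominates the resulting saliency map.
saliency = task_weighted_saliency({"color": color, "motion": motion},
                                  {"color": 0.2, "motion": 0.8})
```

Under a different task (say, landmark detection), the same feature maps combined with different weights would highlight different regions, which is the point of making attention task-dependent.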
Dr. Antonio Bandera has developed his research activities in the fields of computer vision, robotics, and pattern recognition. He has published more than 45 papers in international journals (27 of them indexed in the Journal Citation Reports) and has more than 60 contributions to national and international conferences. He is currently the principal investigator of two projects funded by the Spanish Government and of an integrated action with PRIP at TU Vienna. He is also the academic coordinator of a Master's course on Electronic Technologies for Smart Environments, organized by the University of Malaga.