Programme

Speakers and tentative schedule
Date: June 2, 2023
Start: 2:00 pm
End: 6:00 pm

  • [02:00 – 02:15] Opening remarks
    • Organisers
  • [02:15 – 02:35] Invited talk 1
    • Speaker: Katerina Fragkiadaki, Assistant Professor
    • Bio: Katerina Fragkiadaki is an Assistant Professor in the Machine Learning Department at Carnegie Mellon University. She received her Ph.D. from the University of Pennsylvania and was subsequently a postdoctoral fellow at UC Berkeley and Google Research. Her work focuses on learning visual representations with little supervision and on combining spatial reasoning with deep visual learning. Her group develops algorithms for mobile computer vision, learning of physics, and common sense for agents that move around and interact with the world. Her work has been recognized with a best Ph.D. thesis award, an NSF CAREER award, an AFOSR Young Investigator award, a DARPA Young Investigator award, and Google, TRI, Amazon, UPMC, and Sony faculty research awards.
    • Tentative title: Active Vision for Robot Manipulation and Scene Re-arrangement
  • [02:35 – 02:55] Invited talk 2
    • Speaker: John Tsotsos, Professor at York University, Canada
    • Bio: John Tsotsos is a Distinguished Research Professor of vision science at York University, Canada. After a postdoctoral fellowship in cardiology at Toronto General Hospital, he joined the University of Toronto in Computer Science and Medicine. In 1980, he founded the Computer Vision Group at the University of Toronto, which he led for 20 years. He was recruited to York University in 2000 as Director of the Centre for Vision Research. His current research focuses on a comprehensive theory of visual attention in humans; as a practical outlet, elements of this theory are embedded in the vision systems of mobile robots.
    • Tentative title: Active visual sampling by aligning sensor and scene geometry
  • [02:55 – 03:15] Invited talk 3
    • Speaker: Yulia Sandamirskaya, Senior Researcher
    • Bio: Yulia Sandamirskaya leads the Applications Research team of the Neuromorphic Computing Lab at Intel. Her team in Munich develops spiking neural network-based algorithms for neuromorphic hardware to demonstrate the potential of neuromorphic computing in robotics. She has 15 years of research experience in neural dynamics, embodied cognition, and autonomous robotics. She led the research group “Neuromorphic Cognitive Robots” at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich, Switzerland, and the “Autonomous Learning” group at the Institute for Neural Computation at the Ruhr-University Bochum.
    • Tentative title: Active perception and learning in neuromorphic systems: from events to action plans and back
  • [03:15 – 03:35] Invited talk 4
    • Speaker: Davide Scaramuzza, Professor
    • Bio: Davide Scaramuzza is a Professor of Robotics and Perception at the University of Zurich, where he does research at the intersection of robotics, computer vision, and machine learning. He received his Ph.D. from ETH Zurich, did a postdoc at the University of Pennsylvania, and was a visiting professor at Stanford University. His research focuses on autonomous, agile navigation of micro drones using both standard and neuromorphic event-based cameras. He pioneered autonomous, vision-based navigation of drones, which inspired the navigation algorithm of the NASA Mars helicopter. He has served as a consultant for the United Nations on topics such as disaster response and disarmament, as well as the Fukushima Action Plan on Nuclear Safety. He has won many prestigious awards, including a European Research Council Consolidator Grant, the IEEE Robotics and Automation Society Early Career Award, an SNSF-ERC Starting Grant, a Google Research Award, a Facebook Distinguished Faculty Research Award, two NASA TechBrief Awards, and many paper awards. In 2015, he co-founded Zurich-Eye, today Facebook Zurich, which developed the world-leading virtual-reality headset Oculus Quest, which has sold over 10 million units. In 2020, he co-founded SUIND, which builds autonomous drones for precision agriculture. Many aspects of his research have been prominently featured in broader media, such as The New York Times, The Economist, Forbes, BBC News, and Discovery Channel.
    • Tentative title: Perception-aware planning and control
  • [03:35 – 03:55] Paper presenters / Coffee break
  • [03:55 – 04:15] Invited talk 5
    • Speaker: Alexandre Bernardino, Associate Professor
    • Bio: Alexandre Bernardino is a tenured Associate Professor at the Department of Electrical and Computer Engineering and Senior Researcher at the Computer and Robot Vision Laboratory of the Institute for Systems and Robotics at IST, the engineering school of the University of Lisbon. His main research interests include applying computer vision, machine learning, cognitive science, and control theory to advanced robotics and automation systems.
    • Tentative title: Active Semantic Foveal Vision
  • [04:15 – 04:35] Invited talk 6
    • Speaker: Michael Milford, Professor
    • Bio: Professor Michael Milford is a multi-award-winning educational entrepreneur who conducts interdisciplinary research at the boundary between robotics, neuroscience, and computer vision. His research models the neural mechanisms in the brain underlying tasks like navigation and perception to develop new technologies in challenging application domains such as all-weather, anytime positioning for autonomous vehicles. He has led or co-led projects collaborating with leading global organizations, including Amazon, Google, Intel, Ford, Rheinmetall, the Air Force Office of Scientific Research, NASA, Harvard, Oxford, and MIT. From 2022 to 2027 he is leading a large research team combining bio-inspired and computer science-based approaches to provide a ubiquitous alternative to GPS that does not rely on satellites. He currently holds the positions of Australian Research Council Laureate Fellow, Joint Director of the QUT Centre for Robotics, and QUT Professor of Robotics.
    • Tentative title: Closing the loop on localization: Active navigation for adversity- and adversarial-robustness
  • [04:35 – 04:55] Invited talk 7
    • Speaker: Guido de Croon, Professor
    • Bio: Guido de Croon received his M.Sc. and Ph.D. in Artificial Intelligence (AI) at Maastricht University, the Netherlands. His research interest lies in computationally efficient, bio-inspired algorithms for robot autonomy, with an emphasis on computer vision. Since 2008 he has worked on algorithms for achieving autonomous flight with small and lightweight flying robots, such as the DelFly flapping-wing MAV. From 2011 to 2012, he was a research fellow in the Advanced Concepts Team of the European Space Agency, where he studied topics such as optical-flow-based control algorithms for extraterrestrial landing scenarios. After his return to TU Delft, his work has included the fully autonomous flight of a 20-gram DelFly, a new theory on active distance perception with optical flow, and a swarm of tiny drones able to explore unknown environments. More recently, he proposed an explanation for how insects actively determine their orientation with respect to the gravity direction. He is currently a Full Professor at Delft University of Technology (TU Delft) and scientific lead of its Micro Air Vehicle Lab (MAVLab).
    • Tentative title: Weird cases of active vision in optical flow control
  • [04:55 – 05:15] Invited talk 8
    • Speaker: Kostas Alexis, Professor
    • Bio: Kostas Alexis is a Full Professor at the Department of Engineering Cybernetics of the Norwegian University of Science and Technology (NTNU). Highlights of his research include leading Team CERBERUS to victory in the DARPA Subterranean Challenge and a host of contributions to resilient robotic autonomy in perception, planning, and control, including learned navigation policies. His earlier research included setting the endurance world record for UAVs in the below-50 kg class, with AtlantikSolar flying continuously for 81.5 hours. Since becoming a professor, initially in the US and then in Norway, he has been the PI for a host of grants from NSF, DARPA, NASA, DOE, USDA, Horizon Europe, the Research Council of Norway, and other public and private sources.
    • Tentative title: Combined deep collision prediction and informative navigation for aerial robots
  • [05:15 – 05:45] Panel discussion
  • [05:45 – 06:00] Closing remarks