Keynote Speakers

 

Prof. Danwei Wang, Nanyang Technological University, Singapore

Life Fellow of IEEE

 

Biography: Danwei Wang is a Fellow of the Academy of Engineering Singapore, a Life Fellow of IEEE, an Alexander von Humboldt (AvH) Fellow (Germany), ST Engineering Distinguished Professor, and a Distinguished Lecturer of the IEEE Robotics and Automation Society. He is a professor in the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He served as an Editor of IEEE IROS (International Conference on Intelligent Robots and Systems) from 2019 to 2022. He has published 6 books, 7 book chapters, 10 patents (5 published and 5 filed), and over 500 technical papers and articles in international refereed journals and conferences.

Speech Title: Deep-Tech Enabled Environmental Services

Abstract: Deep-Tech has matured sufficiently for many applications in society and industry, producing societal and economic benefits. Environmental services, however, have long been viewed as a low-tech industry that young people tend to avoid. This talk describes applications of deep-tech in this field, covering three aspects: (1) autonomous navigation for street sweepers; (2) high-fidelity teleoperation; and (3) cybersecurity of sensors. We describe these deep-tech applications from a systems point of view, weigh their pros and cons, and present a number of implementations. Finally, we discuss the future of deep-tech applications in environmental services.


 

Prof. Youfu Li, City University of Hong Kong, China

IEEE Fellow

 

Biography: Prof. Li received the B.S. and M.S. degrees in electrical engineering from Harbin Institute of Technology, China. He obtained the PhD degree from the Robotics Research Group, Department of Engineering Science, University of Oxford, in 1993. From 1993 to 1995 he was a postdoctoral researcher in the Department of Computer Science, University of Wales, Aberystwyth, UK. He joined City University of Hong Kong in 1995 and is currently a professor in the Department of Mechanical Engineering. His research interests include robot sensing, robot vision, 3D vision, visual tracking, sensor-guided manipulation, mechatronics, and automation. In these areas, he has published over 180 papers in SCI-listed international journals. Dr Li has received many awards in robot sensing and vision, including the IEEE Sensors Journal Best Paper Award from the IEEE Sensors Council, the Second Prize of the Natural Science Research Award from the Ministry of Education, the First Prize of the Natural Science Research Award of Hubei Province, and the First Prize of the Natural Science Research Award of Zhejiang Province, China. He was listed among the top 2% of the world's most highly cited scientists by Stanford University in 2020. He has served as an Associate Editor of IEEE Transactions on Automation Science and Engineering (T-ASE), Associate Editor of IEEE Robotics and Automation Magazine (RAM), Editor of the IEEE Robotics and Automation Society's Conference Editorial Board (CEB), and Guest Editor of IEEE Robotics and Automation Magazine (RAM). He is a Fellow of IEEE.

Speech Title: Visual sensing and Calibration for Robotic Applications

Abstract: Visual sensing is important to many engineering applications, including tracking for robotics. In this talk, I will present our research in visual sensing and tracking, focusing on calibration issues. Robotic applications often require visual sensing in 3D, but calibration remains tedious and inflexible with traditional approaches. To this end, we have investigated the relevant issues for different visual sensing systems. A flexible calibration method requires that the vision system parameters be recalibrated automatically, or with little operator intervention, whenever the configuration of the system changes, but in practice this is hard to achieve. In our previous work we made various attempts to enhance flexibility in visual sensing calibration, and I will present some of them, including work on passive and active visual sensing systems. One case study is gaze tracking, where parallax errors and tedious calibration procedures are addressed by a new calibration method we developed. Instead of successively fixating on a grid of calibration points, the user looks at a single point while rotating his or her head, producing densely distributed calibration points over a wider field of view and enabling flexible calibration. Once the calculated 3-D point of gaze (POG) in the scene camera coordinates is transformed to world coordinates, it can be used in human-robot interaction for manipulation tasks.
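The final step of the pipeline above, mapping the 3-D POG from the scene-camera frame into world coordinates, is a standard rigid-body transform. The sketch below illustrates only that step; the rotation R and translation t are made-up example values, not parameters from the speaker's actual system.

```python
# Hypothetical sketch: transforming a 3-D point of gaze (POG) from the
# scene-camera frame to the world frame with a rigid-body transform.
import numpy as np

def pog_to_world(pog_cam, R, t):
    """Map a 3-D POG from scene-camera coordinates to world coordinates."""
    return R @ np.asarray(pog_cam, dtype=float) + t

# Example extrinsics (illustrative only): camera frame rotated 90 degrees
# about the z-axis, offset from the world origin by t.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.1, 0.0, 1.5])

pog_world = pog_to_world([0.2, 0.0, 0.5], R, t)
print(pog_world)  # -> [0.1 0.2 2. ]
```

The calibration method described in the talk is what supplies an accurate POG in the camera frame; once that is available, any standard camera-to-world extrinsic calibration yields the world-frame point used for manipulation.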


 

Prof. Danica Kragic, Royal Institute of Technology, Sweden

IEEE Fellow

 

Biography: Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received the MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia, in 1995 and the PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University, and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences, the Royal Swedish Academy of Engineering Sciences, and the Young Academy of Sweden. She holds an Honorary Doctorate from Lappeenranta University of Technology. Her research is in the areas of robotics, computer vision, and machine learning. She has received ERC Starting, Advanced, and Synergy Grants. Her research is supported by the EU, the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research, and the Swedish Research Council. She is an IEEE Fellow.

Speech Title: Multimodal learning for robots

Abstract: To be deployed in unstructured environments, robots need the ability to learn motor skills autonomously through continuous interaction with the environment, humans, and other robots. Although autonomous systems have classically been built on rigorous control theory and on mathematical and theoretical computer science methodologies, more recently data-driven learning methods such as Deep Learning (DL) and Reinforcement Learning (RL) have proven to be powerful technologies for developing them. Still, most practical applications exist in carefully structured settings where (i) there is enough data to train the models, (ii) the search problem can be structured efficiently, and (iii) there is a computational infrastructure and/or many robots to run large-scale experiments.
A robot perceives the world with a multitude of sensors and is expected to build and update a model of the world incrementally and continuously over time, using it to make decisions, plan actions, fulfill useful tasks, and generalize between tasks that are similar in nature. For robots to become safe and robust, it is essential to understand how learning algorithms for discrete sequential decision-making can interact with continuous physics-based dynamics. In our recent work, we build upon two major recent developments in the field, diffusion policies for visuomotor manipulation and large pre-trained multimodal foundation models, to obtain a robotic skill-learning system. The system acquires new skills via the behavioral cloning approach of visuomotor diffusion policies, given teleoperated demonstrations. Foundation models are used to perform skill selection given the user's prompt in natural language. The talk will present and summarize open challenges related to the above.
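The skill-selection step, choosing which learned diffusion-policy skill to run from a natural-language prompt, can be sketched as a simple matching problem. In the system described in the talk a large multimodal foundation model performs this step; in the toy sketch below a word-overlap score stands in for the model, and the skill names and descriptions are invented for illustration.

```python
# Hypothetical sketch: select a stored skill from a natural-language prompt.
# A trivial word-overlap score substitutes for the foundation model used in
# the actual system; each skill would map to a trained diffusion policy.

def select_skill(prompt, skills):
    """Return the name of the skill whose description best matches the prompt."""
    prompt_words = set(prompt.lower().split())
    def score(description):
        return len(prompt_words & set(description.lower().split()))
    return max(skills, key=lambda name: score(skills[name]))

# Invented example skill library (name -> short description).
skills = {
    "pick_mug": "pick up the mug from the table",
    "open_drawer": "open the drawer of the cabinet",
    "wipe_table": "wipe the table surface with a cloth",
}

print(select_skill("please open the cabinet drawer", skills))  # -> open_drawer
```

In the full system each selected skill name would dispatch to a visuomotor diffusion policy cloned from teleoperated demonstrations; only the dispatch logic is sketched here.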