Prof. Darwin G Caldwell
Fellow of IEEE
Founding Director, Italian Institute of Technology (IIT), Genoa, Italy
Director of the Dept. of Advanced Robotics (ADVR) at IIT
Darwin G. Caldwell is Founding Director of the Italian Institute of Technology (IIT) in Genoa, Italy, and Director of the Dept. of Advanced Robotics (ADVR) at IIT. He has pioneered developments in compliant and variable impedance actuation, soft and human-friendly robotics, and the creation of ‘softer’, safer robots that draw on advances in materials, mechanisms, sensing, actuation and software. These developments have been fundamental to progress in humanoids, quadrupeds, medical robotics and exoskeletons. Key robots developed by his team include: iCub, a child-sized humanoid robot; COMAN, a controllably compliant humanoid designed to safely interact with people and achieve more natural (loco)motion; WALK-MAN, a 1.85 m tall, 120 kg humanoid that competed in the DARPA Robotics Challenge; the HyQ series (HyQ, HyQ2Max, HyQ-Real) of high-performance hydraulic quadrupedal robots; and PHOLUS/Centauro, a human-robot symbiotic system capable of robust locomotion and dexterous manipulation in rough terrain and harsh environments. In addition to his research in legged robots, Prof. Caldwell works extensively on wearable and haptic systems, including whole-body exoskeletons such as XoSoft, XoTrunk, XoShoulder and XoElbow, and on surgical and rehabilitation robotics, where his team has developed systems such as CALM (Computer Aided Laser Microsurgery), the Cathbot, Cathbot-Pro and SVEI (for catheterization and tissue-type detection), and the Arbot (ankle rehabilitation robot).
Caldwell is or has been an Honorary Professor at the Universities of Manchester, Sheffield and Bangor, at King’s College London in the UK, and at Tianjin University in China. He has published over 700 papers, holds over 25 patents, and has received over 50 awards/nominations at international conferences and events. He is a Fellow of the Royal Academy of Engineering (FREng – British National Academy), a Fellow of the IEEE (FIEEE) and a Member of the Academia Europaea (MAE – Academy of Europe).
Prof. Ravinder S. Dahiya
FIEEE, FRSE
Professor, Dept. of ECE, Northeastern University, Boston, USA
Lead, Bendable Electronics & Sustainable Technologies Group
Education
- PhD, Italian Institute of Technology, 2009
Honors & Awards
- Microelectronic Engineering Young Investigator Award, 2016
- Fellow, IEEE
- Fellow, The Royal Society of Edinburgh
- Life Member, Marie Curie Fellows Association
- Fellow, The Institution of Engineers in Scotland
Prof. Lionel P. Robert Jr.
Professor, School of Information
Professor, College of Engineering Robotics Department
Affiliate Faculty, National Center for Institutional Diversity
Affiliate Faculty, IU Center for Computer-Mediated Communication
Director of MAVRIC
Robot teammates, like human teammates, make mistakes that undermine trust. Yet trust is vital to promoting human-robot collaboration. It is therefore critical to understand how human trust in robots can be repaired. To address this, two studies were conducted. In the first study, 240 participants were recruited to assess the overall effectiveness of four robot trust repair strategies: promises, explanations, denials and apologies. In this online study, the robot and participant worked together in a warehouse to pick and load 10 boxes onto a delivery truck. The robot made three mistakes over the course of the task and employed one of the four repair strategies after each mistake. Participants rated the robot’s ability, integrity and benevolence at the end of the task to determine the effectiveness of each repair strategy. In the second study, 100 participants were recruited to assess the same four trust repair strategies. This time, however, participants rated their trust in the robot before each mistake and again after the repair strategy was employed. This was done after each of the three mistakes to determine whether a particular trust repair strategy was more or less effective after the first, second or third mistake. Taken together, both studies contribute to the literature on human-robot trust repair.