HUME's research programme works towards a vision of the future in which people are taken out of harsh, extreme environments and replaced by teams of smart robots able to do the 'dirty and dangerous jobs', collaborating seamlessly with each other and with human operators on shore. In this setting, remote data collection, fusion, and interpretation become central, together with the ability to generate transparent, explainable, and safe actionable decisions from these data. This requires a framework that enables humans and machines to work seamlessly as a team by establishing and maintaining a single shared view of the world and of each other's intents through transparent interaction, robust to highly dynamic and unpredictable maritime environments.
To do so, HUME addresses fundamental research questions in the fields of machine-machine and human-machine collaboration, robot perception, explainable autonomy, and AI, through three main objectives:
Enhance teaming by researching and developing robotic collaboration algorithms for human-machine, machine-machine, and human-machine-machine interfaces, establishing situation awareness that includes a shared understanding of the task at hand, the environment, and the operator's needs.
Extend robotic perception through advanced machine learning methods for semantic tagging of the environment and real-time estimation of sensor performance, with a view to feeding the results into the enhanced human-machine-machine teaming.
Explain the reasoning behind the robotic perception and autonomy, providing greater transparency, which will in turn increase operator confidence and ultimately adoption.
These objectives are addressed through three project challenges and their associated research questions.
This challenge focuses on advancing multi-autonomy coordination and communication within the HUME programme by addressing two research questions:
Multi-Party Teaming: Investigating how multiple autonomous systems can effectively negotiate responsibilities and tasks.
Robotic Communications: Exploring efficient data and information sharing methods among robot systems to maintain a unified understanding of the operational environment and mission objectives.
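One common approach to the negotiation question is a contract-net-style auction, where each vehicle bids for a task and the best-placed bidder wins. The sketch below is purely illustrative, not a HUME component; the `Vehicle` fields, the bid formula, and all names are hypothetical.

```python
# Toy contract-net-style task allocation among autonomous vehicles.
# All names and the cost model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    position: tuple  # (x, y) in metres
    battery: float   # remaining fraction, 0..1

def bid(vehicle, task_pos):
    """Lower bid = better suited: distance to task, penalised by low battery."""
    dx = vehicle.position[0] - task_pos[0]
    dy = vehicle.position[1] - task_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return distance / max(vehicle.battery, 1e-6)

def allocate(vehicles, task_pos):
    """Auction the task: every vehicle bids, the cheapest bid wins."""
    return min(vehicles, key=lambda v: bid(v, task_pos))

fleet = [Vehicle("auv-1", (0.0, 0.0), 0.9),
         Vehicle("auv-2", (5.0, 5.0), 0.4)]
winner = allocate(fleet, (1.0, 1.0))
```

In a real deployment the bids would be exchanged over an acoustic or radio link rather than computed centrally, which is precisely where the communications question above becomes critical.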
This challenge advances underwater perception through exploration of three main questions:
Continual Learning: How can real and simulated data be exploited by learning-based algorithms to improve performance and sensor reliability?
Common Scene Understanding: How can different sensing modalities be paired and used to enable true multi-robot collaboration?
Quality Assurance: How can quality assurance metrics be employed in autonomous systems to improve perception capabilities?
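To make the common-scene-understanding question concrete, one simple pairing strategy is to fuse per-cell semantic scores from two modalities into a single shared map. The sketch below assumes made-up sonar/camera confidence weights and a toy grid; none of it is a HUME algorithm.

```python
# Toy fusion of per-cell semantic labels from two sensing modalities
# (e.g. sonar and camera) into one shared map; weights are assumptions.
def fuse_labels(sonar, camera, sonar_weight=0.4, camera_weight=0.6):
    """Combine per-class scores cell by cell and keep the best label."""
    fused = {}
    for cell in sonar:
        scores = {}
        for label, p in sonar[cell].items():
            scores[label] = scores.get(label, 0.0) + sonar_weight * p
        for label, p in camera.get(cell, {}).items():
            scores[label] = scores.get(label, 0.0) + camera_weight * p
        fused[cell] = max(scores, key=scores.get)
    return fused

sonar_map  = {(0, 0): {"pipe": 0.7, "rock": 0.3}}
camera_map = {(0, 0): {"pipe": 0.2, "rock": 0.8}}
shared = fuse_labels(sonar_map, camera_map)
```

The quality-assurance question above would then govern the weights: a real-time estimate of each sensor's reliability could replace the fixed constants.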
This challenge focuses on interactions between operators and robot systems by addressing two key questions:
Joint Optimisation of User and System Objectives: How can both the system's and the user's objectives and actions be used to maintain situation awareness? Does a personalised, human-in-the-loop approach help to manage user trust and facilitate human-robot teaming, thus increasing adoption?
Explainable Deep Machine Learning for Robot Perception: How can explanations be generated for the complex machine learning and other decision-making algorithms that perform robot perception?
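One family of techniques for the explanation question is occlusion-style attribution: perturb each input in turn and measure how much the model's output changes. The sketch below uses a toy linear "detector" as a stand-in; the model, weights, and function names are illustrative assumptions, not HUME components.

```python
# Toy occlusion-style attribution for a perception model:
# zero out each feature in turn and record how much the score drops.
def score(features):
    """Stand-in detector score: a fixed weighted sum of input features."""
    weights = [0.5, 0.1, 0.4]
    return sum(w * f for w, f in zip(weights, features))

def occlusion_attribution(features):
    """Attribute the score to each feature by occluding (zeroing) it."""
    base = score(features)
    attributions = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = 0.0  # remove one feature's contribution
        attributions.append(base - score(occluded))
    return attributions

attr = occlusion_attribution([1.0, 1.0, 1.0])
```

The resulting per-feature contributions can be presented to an operator as a ranked "why" for a detection, which is one route to the transparency and confidence goals stated above.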