Open positions

The following positions are available:

3 PostDoc positions on the project Trust in Human-Machine Partnership

THuMP: Trust in Human-Machine Partnership is a multi-disciplinary 3-year project with the ambitious goal of advancing the state of the art in trustworthy human-AI decision-support systems. THuMP will address the technical challenges involved in creating explainable AI (XAI) systems, with a focus on Explainable Planning and Computational Argumentation, so that people using the system can better understand the rationale behind, and thus trust, the suggestions made by an AI system. The project is conducted in collaboration with three project partners: Save the Children and Schlumberger, which provide use cases for the project, and the law firm Hogan Lovells, which will cooperate in considering the legal implications of enhancing machines with transparency and the ability to explain.

There are three positions available (deadline for applications: 11 February 2019):

  • 1 PostDoc who will be responsible for leading the design of new techniques for explainable planning, based on the use of planning and/or constraint programming and/or temporal logic and/or formal methods, and their applications to the use cases that will be co-created with the project partners (Apply here!).

  • 1 PostDoc who will be responsible for conducting research into models of computational argumentation that take into account trust and provenance. Tasks will involve developing and implementing formal models of argumentation and dialogue, in particular models that are grounded in data, and evaluating these models (Apply here!).

  • 1 PostDoc who will be responsible for conducting research around decision support in human-machine teams, particularly for resource allocation. Tasks will involve building software infrastructure for the project, developing a prototype interface for communicating with users, designing and conducting experiments with human subjects based on the use cases that will be co-created with the project partners, with an emphasis on resource allocation in critical domains (Apply here!).


1 PostDoc on AI Planning for Robotics

This is a 12-month position on the topic of the KCL/NASA Collaboration on Planning Technologies for In-Vehicle Robotics. The project is funded by King's College London. Its ambitious goal is to develop new advanced functionalities for the ROSPlan framework so that it can be used for robot task planning for robotic assistants operating inside human spacecraft.
ROSPlan is a framework developed at King's College London, and now used worldwide, that provides a collection of tools for Artificial Intelligence Planning in ROS systems. NASA and ESA are planning to launch new robots on board the ISS to assist the astronauts, and this project will enable ROSPlan to be used to control these robots. These robotic assistants include CIMON (developed for ESA) and Astrobee (developed for NASA), and other in-vehicle robots are expected to follow.
The candidate will join the team led by Dr. Daniele Magazzeni, working on the broader project on AI Planning for Robotics and Autonomous Systems. The candidate will also be involved in the co-supervision of PhD and master's students working on this project. The candidate is expected to have a track record in AI; expertise in Planning, ROSPlan, and Planning for Robotics would be a plus.
Deadline for applications: 18 January 2019. Apply here!


1 PhD Studentship on Explainable Planning, in collaboration with Schlumberger

This is a 4-year scholarship, co-funded by EPSRC and Schlumberger, covering a stipend as well as tuition fees. The research topics will be around Explainable Planning and Planning for Autonomous Systems. Email me if interested.