Systems
We design new algorithms and datasets that mitigate risks and promote positive social impact.
Relevant publications
- Q. Liu and M. Brandao, “Generating Environment-based Explanations of Motion Planner Failure: Evolutionary and Joint-Optimization Algorithms,” in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024.
#transparency
Motion planning algorithms are important components of autonomous robots, but they are difficult to understand and debug when they fail to find a solution to a problem. In this paper we propose a solution to the failure-explanation problem: automatically-generated environment-based explanations. These explanations reveal the objects in the environment that are responsible for the failure, and how their location in the world should change so as to make the planning problem feasible. Concretely, we propose two methods - one based on evolutionary optimization and another on joint trajectory-and-environment continuous optimization. We show that the evolutionary method is well-suited to explaining sampling-based motion planners, or even optimization-based motion planners in situations where computation speed is not a concern (e.g. post-hoc debugging). The optimization-based method, however, is 4000 times faster and thus more attractive for interactive applications, albeit at the cost of a slightly lower success rate. We demonstrate the capabilities of both methods through concrete examples and quantitative evaluation.
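To make the evolutionary variant concrete, here is a minimal sketch of the idea: perturb obstacle poses until the planning problem becomes feasible, preferring small displacements. The toy world, straight-line feasibility check, and mutation scheme below are illustrative assumptions, not the paper's implementation:

```python
import math
import random

# Illustrative world: start, goal, and circular obstacles (x, y, radius).
START, GOAL = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.0, 2.0)]  # blocks the straight start-goal segment

def feasible(obs):
    """Stand-in feasibility check: is the straight start-goal segment clear?
    A real system would invoke the motion planner here."""
    for ox, oy, r in obs:
        # distance from the obstacle center to the segment y = 0, 0 <= x <= 10
        d = abs(oy) if 0.0 <= ox <= 10.0 else min(
            math.dist((ox, oy), START), math.dist((ox, oy), GOAL))
        if d < r:
            return False
    return True

def displacement(obs, orig):
    return sum(math.dist(o[:2], g[:2]) for o, g in zip(obs, orig))

def fitness(obs, orig):
    # infeasible layouts are heavily penalized; feasible ones prefer small moves
    return displacement(obs, orig) + (0.0 if feasible(obs) else 1e6)

def explain(orig, pop_size=30, generations=200, sigma=0.5):
    """Evolve obstacle placements toward a feasible layout with minimal motion."""
    pop = [list(orig) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda obs: fitness(obs, orig))
        parents = pop[: pop_size // 2]
        children = [[(x + random.gauss(0, sigma), y + random.gauss(0, sigma), r)
                     for x, y, r in p] for p in parents]
        pop = parents + children
    best = min(pop, key=lambda obs: fitness(obs, orig))
    return best if feasible(best) else None

moved = explain(obstacles)
if moved is not None:
    for (ox, oy, _), (mx, my, _) in zip(obstacles, moved):
        print(f"move obstacle at ({ox}, {oy}) to ({mx:.2f}, {my:.2f})")
```

The returned displacements are the explanation: these objects cause the failure, and moving them like this would make the problem feasible.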
- K. Alsheeb and M. Brandao, “Towards Explainable Road Navigation Systems,” in IEEE International Conference on Intelligent Transportation Systems (ITSC), 2023.
#transparency
Road navigation systems are important tools for pedestrians, drivers, and autonomous vehicles. Routes provided by such systems can be unintuitive, and may not contribute to improving users' mental models of maps and traffic. Automatically-generated explanations have the potential to solve these problems. Towards this goal, in this paper we propose algorithms that generate explanations for routes, based on properties of the road network and traffic. We use a combination of inverse optimization and diverse shortest-path algorithms to provide optimal explanations to questions of the type "why is path A fastest, rather than path B (which the user provides)?" and "why does the fastest path not go through waypoint W (which the user provides)?". The explanations reveal properties of the map - such as speed limits, congestion and road closures - that are not compatible with users' expectations, and whose knowledge would lead users to prefer the system's path. We demonstrate the explanation algorithms on real map and traffic data, and evaluate the properties of the algorithms.
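As a toy illustration of the contrastive question "why is path A fastest, rather than path B?", the sketch below compares travel times on a hypothetical road graph and reports how much faster the expected path's edges would need to be for that path to win. This is a deliberately simplified stand-in for the paper's inverse-optimization and diverse shortest-path machinery; the network and numbers are invented:

```python
import networkx as nx

# Toy road network: edges carry length (km) and speed (km/h).
G = nx.DiGraph()
edges = [("s", "a", 2.0, 50), ("a", "t", 2.0, 50),   # system's path A
         ("s", "b", 1.5, 30), ("b", "t", 1.5, 30)]   # user's path B (shorter, slower)
for u, v, length, speed in edges:
    G.add_edge(u, v, length=length, speed=speed, time=length / speed)

def path_time(G, path):
    return sum(G[u][v]["time"] for u, v in zip(path, path[1:]))

path_a = nx.shortest_path(G, "s", "t", weight="time")
path_b = ["s", "b", "t"]                # the route the user expected

t_a, t_b = path_time(G, path_a), path_time(G, path_b)
print(f"system path {path_a}: {60*t_a:.1f} min; your path {path_b}: {60*t_b:.1f} min")

# Toy contrastive explanation: the uniform speed increase on B's edges that
# would make B as fast as A, i.e. what the user's expectation implicitly assumes.
factor = t_b / t_a
for u, v in zip(path_b, path_b[1:]):
    print(f"edge {u}->{v}: speed {G[u][v]['speed']} km/h, would need "
          f"~{G[u][v]['speed'] * factor:.0f} km/h for your path to be fastest")
```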
- Z. Zhou and M. Brandao, “Noise and Environmental Justice in Drone Fleet Delivery Paths: A Simulation-Based Audit and Algorithm for Fairer Impact Distribution,” in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023.
#fairness
#wellbeing
Despite the growing interest in the use of drone fleets for delivery of food and parcels, the negative impact of such technology is still poorly understood. In this paper we investigate the impact of delivery drone fleets in terms of noise pollution and environmental justice. We use simulation with real population data to analyze the spatial distribution of noise, and find that: 1) noise increases rapidly with fleet size; and 2) drone fleets can produce noise hotspots that extend far beyond warehouses or charging stations, at levels that lead to annoyance and interference with human activities. This, we show, raises concerns about the fairness of noise distribution. We then propose an algorithm that successfully balances the spatial distribution of noise across the city, and discuss the limitations of such purely technical approaches. We complement the work with a discussion of environmental justice, showing how careless UAV fleet development and regulation can reinforce the well-being deficits of poor and marginalized communities.
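A rough flavor of the simulation-based audit, under invented assumptions (random population densities, a single hub, and an inverse-square point-source noise model rather than a validated acoustic one):

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 50                                 # illustrative 50x50 cell city grid
pop_a = rng.random((SIZE, SIZE))          # density of social group A
pop_b = rng.random((SIZE, SIZE))          # density of social group B
warehouse = np.array([25.0, 25.0])

def noise_field(n_drones):
    """Very rough point-source model: each drone adds power that decays with
    squared distance from its position near the warehouse."""
    ys, xs = np.mgrid[0:SIZE, 0:SIZE]
    field = np.zeros((SIZE, SIZE))
    for _ in range(n_drones):
        pos = warehouse + rng.normal(0, 5, size=2)  # drones spread around the hub
        field += 1.0 / ((xs - pos[0])**2 + (ys - pos[1])**2 + 1.0)
    return 10 * np.log10(field + 1e-12)             # to a dB-like scale

for fleet in (10, 50, 200):
    f = noise_field(fleet)
    exp_a = (f * pop_a).sum() / pop_a.sum()         # population-weighted exposure
    exp_b = (f * pop_b).sum() / pop_b.sum()
    print(f"fleet={fleet:4d}  exposure A={exp_a:6.1f}  B={exp_b:6.1f}  "
          f"gap={abs(exp_a - exp_b):.2f}")
```

With real, spatially segregated population data (rather than the uniform random densities above), the exposure gap between groups is what raises the fairness concern.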
- M. E. Akintunde, M. Brandao, G. Jahangirova, H. Menendez, M. R. Mousavi, and J. Zhang, “On Testing Ethical Autonomous Decision-Making,” in Springer LNCS Festschrift dedicated to Jan Peleska’s 65th Birthday, 2023.
#fairness
- R. Eifler, M. Brandao, A. Coles, J. Frank, and J. Hoffmann, “Evaluating Plan-Property Dependencies: A Web-Based Platform and User Study,” in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2022.
#transparency
The trade-offs between different desirable plan properties - e.g. PDDL temporal plan preferences - are often difficult to understand. Recent work addresses this by iterative planning with explanations elucidating the dependencies between such plan properties. Users can ask questions of the form ’Why does the plan not satisfy property p?’, which are answered by ’Because then we would have to forego q’. It has been shown that such dependencies can be computed reasonably efficiently. But is this form of explanation actually useful for users? We run a large crowd-worker user study (N = 100 in each of 3 domains) evaluating that question. To enable such a study in the first place, we contribute a Web-based platform for iterative planning with explanations, running in standard browsers. Comparing users with vs. without access to the explanations, we find that the explanations enable users to identify better trade-offs between the plan properties, indicating an improved understanding of the planning task.
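The question-answer pattern can be illustrated with a toy dependency computation. Here each feasible plan is summarized by the set of properties it satisfies, and "why not p?" is answered with the properties that no p-satisfying plan can also deliver; the plans and properties are invented stand-ins for PDDL plan preferences:

```python
# Illustrative setup: each feasible plan is summarized by the set of plan
# properties it satisfies (stand-ins for PDDL temporal plan preferences).
plans = [
    {"p", "r"},   # plan 1
    {"q", "r"},   # plan 2
    {"q"},        # plan 3
]
properties = {"p", "q", "r"}

def excludes(prop):
    """Properties that no feasible plan satisfies together with `prop`:
    the 'because then we would have to forego q' part of the answer."""
    sats = [s for s in plans if prop in s]
    if not sats:
        return properties - {prop}      # prop is outright unachievable
    compatible = set().union(*sats)
    return {q for q in properties - {prop} if q not in compatible}

current = plans[1]                      # suppose the planner returned plan 2
for p in sorted(properties - current):
    print(f"Why does the plan not satisfy {p}? "
          f"Because then we would have to forego {sorted(excludes(p))}")
```

The paper's platform computes such dependencies over the actual space of plans of a planning task; the enumeration above merely shows the shape of the explanation.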
- M. Brandao and Y. Setiawan, “’Why Not This MAPF Plan Instead?’ Contrastive Map-based Explanations for Optimal MAPF,” in ICAPS 2022 Workshop on Explainable AI Planning (XAIP), 2022.
#transparency
Multi-Agent Path Finding (MAPF) plans can be very complex to analyze and understand. Recent user studies have shown that explanations would be a welcome tool for MAPF practitioners and developers to better understand plans, as well as to tune map layouts and cost functions. In this paper we formulate two variants of an explanation problem in MAPF that we call contrastive "map-based explanation". The problem consists of answering questions of the form "why don't agents A follow paths P' instead?" by finding regions of the map that would have to be obstacles in order for the expected plan to be optimal. We propose three different methods to compute these explanations, and evaluate them quantitatively on a set of benchmark problems that we make publicly available. We also discuss motivations for generating this type of explanation, which include both user understanding of MAPF problems and designer aids for improving map layouts.
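A brute-force, single-agent sketch of the idea (a stand-in for the paper's methods, which reason over multiple agents): test each free cell and report those whose blocking makes the user's expected path optimal. The grid, start/goal, and expected path are invented:

```python
import networkx as nx

# Toy grid and a single agent; the paper's methods handle full MAPF instances.
ROWS, COLS = 4, 5
G = nx.grid_2d_graph(ROWS, COLS)
start, goal = (0, 0), (0, 4)
expected = [(0, 0), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (0, 4)]  # user's path

def optimal_cost(blocked=()):
    H = G.copy()
    H.remove_nodes_from(blocked)
    try:
        return nx.shortest_path_length(H, start, goal)
    except nx.NetworkXNoPath:
        return None

# Brute-force single-cell explanations: which cell, if made an obstacle,
# would make the user's expected path optimal?
expected_cost = len(expected) - 1
for cell in sorted(G.nodes):
    if cell in expected:
        continue                        # cannot block the expected path itself
    if optimal_cost(blocked=[cell]) == expected_cost:
        print(f"making {cell} an obstacle would make the expected path optimal")
```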
- M. Brandao, A. Coles, and D. Magazzeni, “Explaining Path Plan Optimality: Fast Explanation Methods for Navigation Meshes Using Full and Incremental Inverse Optimization,” in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2021, pp. 56–64.
#transparency
Path planners are important components of various products from video games to robotics, but their output can be counter-intuitive due to problem complexity. As a step towards improving the understanding of path plans by various users, here we propose methods that generate explanations for the optimality of paths. Given the question "why is path A optimal, rather than B which I expected?", our methods generate an explanation based on the changes to the graph that make B the optimal path. We focus on the case of path planning on navigation meshes, which are heavily used in the computer game industry and in robotics. We propose two methods - one based on solving a single inverse-shortest-paths optimization problem, the other on incrementally solving complex optimization problems. We show that these methods offer computation-time improvements of up to 3 orders of magnitude relative to domain-independent search-based methods, as well as better scaling with the length of explanations. Finally, we show through a user study that, compared to baseline cost-based explanations, our explanations are more satisfactory and more effective at increasing users' understanding of problems.
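The single-optimization variant can be pictured as an inverse-shortest-paths linear program: minimally change edge weights so that the expected path becomes optimal. The toy version below enumerates all simple paths to build the constraints, which only works on tiny graphs; the graph and weights are invented, and the paper's formulations on navigation meshes are considerably more scalable:

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

# Tiny weighted graph; the planner returned A = s-a-t but the user expected B.
edges = {("s", "a"): 1.0, ("a", "t"): 1.0, ("s", "b"): 1.5, ("b", "t"): 1.5}
G = nx.DiGraph()
for (u, v), w in edges.items():
    G.add_edge(u, v, weight=w)
B = ["s", "b", "t"]                     # the path the user expected
idx = {e: i for i, e in enumerate(edges)}
w0 = np.array(list(edges.values()))
n = len(edges)

def edge_vec(path):
    x = np.zeros(n)
    for e in zip(path, path[1:]):
        x[idx[e]] = 1.0
    return x

# Minimize the total absolute weight change (L1 norm, split into +/- parts)
# subject to B being no costlier than any other simple s-t path.
b_vec = edge_vec(B)
A_ub, b_ub = [], []
for p in nx.all_simple_paths(G, "s", "t"):
    if p == B:
        continue
    diff = b_vec - edge_vec(p)          # want cost(B) - cost(P) <= 0 after change
    A_ub.append(np.concatenate([diff, -diff]))   # new w = w0 + d_plus - d_minus
    b_ub.append(-diff @ w0)
res = linprog(np.ones(2 * n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (2 * n))
delta = res.x[:n] - res.x[n:]
for e, i in idx.items():
    if abs(delta[i]) > 1e-9:
        print(f"change weight of {e} by {delta[i]:+.2f} so the expected path wins")
```

The nonzero weight changes are the explanation: they tell the user exactly which costs would have to differ for their expectation to hold.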
- M. Brandao, G. Canal, S. Krivic, and D. Magazzeni, “Towards providing explanations for robot motion planning,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 3927–3933.
#transparency
Recent research in AI ethics has put forth explainability as an essential principle for AI algorithms. However, it is still unclear how this is to be implemented in practice for specific classes of algorithms - such as motion planners. In this paper we unpack the concept of explanation in the context of motion planning, introducing a new taxonomy of the kinds and purposes of explanations in this context. We focus not only on explanations of failure (previously addressed in the motion planning literature) but also on contrastive explanations - which explain why a trajectory A was returned by a planner instead of a different trajectory B expected by the user. We develop two explainable motion planners, one based on optimization and the other on sampling, which are capable of answering failure and contrastive questions. We use simulation experiments and a user study to motivate a technical and social research agenda.
- M. Brandao, “Socially Fair Coverage: The Fairness Problem in Coverage Planning and a New Anytime-Fair Method,” in 2021 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), 2021, pp. 227–233.
#fairness
In this paper we investigate and characterize social fairness in the context of coverage path planning. Inspired by recent work on the fairness of goal-directed planning, and by work characterizing the disparate impact of various AI algorithms, here we simulate the deployment of coverage robots to anticipate issues of fairness. We show that classical coverage algorithms, especially those that try to minimize average waiting times, will have biases related to the spatial segregation of social groups. We discuss the implications in the context of disaster response, and provide a new coverage planning algorithm that minimizes cumulative unfairness at all points in time. We show that our algorithm is 200 times faster than existing evolutionary algorithms, while obtaining overall faster coverage and a fair response in terms of waiting-time and coverage-pace differences across multiple social groups.
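To see why coverage speed and fairness can pull in different directions, here is a toy greedy coverage sweep (not the paper's algorithm): the robot always heads for a nearby uncovered cell, optionally penalizing choices that widen the gap in covered fraction between two social groups. The grid, group labels, and penalty weight are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10                                    # toy 10x10 grid to cover
group = rng.integers(0, 2, size=(N, N))   # each cell's majority social group (0/1)
totals = [(group == g).sum() for g in (0, 1)]

def coverage_order(fair):
    """Greedy sweep: move to a cheap uncovered cell; when `fair` is on,
    penalize widening the gap in covered fraction between the two groups."""
    covered = np.zeros((N, N), bool)
    pos, order, done = (0, 0), [], [0, 0]
    while not covered.all():
        best, best_score = None, None
        for r in range(N):
            for c in range(N):
                if covered[r, c]:
                    continue
                score = abs(r - pos[0]) + abs(c - pos[1])   # travel distance
                if fair:
                    frac = [done[0] / totals[0], done[1] / totals[1]]
                    frac[group[r, c]] += 1 / totals[group[r, c]]
                    score += 50 * abs(frac[0] - frac[1])    # unfairness penalty
                if best_score is None or score < best_score:
                    best, best_score = (r, c), score
        covered[best] = True
        done[group[best]] += 1
        order.append(best)
        pos = best
    return order

for fair in (False, True):
    order = coverage_order(fair)
    # waiting time of a cell = its index in the coverage order
    waits = [np.mean([i for i, cell in enumerate(order) if group[cell] == g])
             for g in (0, 1)]
    print(f"fair={fair}: mean wait group0={waits[0]:.1f}, group1={waits[1]:.1f}")
```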
- M. Brandao and D. Magazzeni, “Explaining plans at scale: scalable path planning explanations in navigation meshes using inverse optimization,” in IJCAI 2020 Workshop on Explainable Artificial Intelligence (XAI), 2020.
#transparency
In this paper we propose methods that provide explanations for path plans, in particular those that answer questions of the type "why is path A optimal, rather than path B which I expected?". In line with other work in eXplainable AI Planning (XAIP), such explanations could help users better understand the outputs of path planning methods, as well as help debug or iterate the design of planners and maps. By specializing the explanation methods to path planning, using optimization-based inverse-shortest-paths formulations, we obtain drastic computation time improvements relative to general XAIP methods, especially as the length of the explanations increases. One of the claims of this paper is that such specialization might be required for explanation methods to scale and therefore come closer to real-world usability. We propose and evaluate the methods on large-scale navigation meshes, which are representations for path planning heavily used in the computer game industry and robotics.
- M. Brandao, “Fair navigation planning: a humanitarian robot use case,” in KDD 2020 Workshop on Humanitarian Mapping, 2020.
#fairness
In this paper we investigate potential issues of fairness related to the motion of mobile robots. We focus on the particular use case of humanitarian mapping and disaster response. We start by showing that there is a fairness dimension to robot navigation, and use a walkthrough example to bring out design choices and issues that arise during the development of a fair system. We discuss indirect discrimination, fairness-efficiency trade-offs, the existence of counter-productive fairness definitions, privacy and other issues. Finally, we conclude with a discussion of the potential of our methodology as a concrete responsible innovation tool for eliciting ethical issues in the design of autonomous systems.
- M. Brandao, M. Jirotka, H. Webb, and P. Luff, “Fair navigation planning: a resource for characterizing and designing fairness in mobile robots,” Artificial Intelligence (AIJ), vol. 282, 2020.
#fairness
In recent years, the development and deployment of autonomous systems such as mobile robots have become increasingly common. Investigating and implementing ethical considerations such as fairness in autonomous systems is an important problem that is receiving increased attention, both because of recent findings of such systems' potential undesired impacts and because of a related surge in ethical principles and guidelines. In this paper we take a new approach to considering fairness in the design of autonomous systems: we examine fairness by obtaining formal definitions, applying them to a system, and simulating system deployment in order to anticipate challenges. We undertake this analysis in the context of the particular technical problem of robot navigation. We start by showing that there is a fairness dimension to robot navigation, and we then collect and translate several formal definitions of distributive justice into the navigation-planning domain. We use a walkthrough example of a rescue robot to bring out design choices and issues that arise during the development of a fair system. We discuss indirect discrimination, fairness-efficiency trade-offs, the existence of counter-productive fairness definitions, privacy, and other issues. Finally, we elaborate on important aspects of a research agenda and reflect on the adequacy of our methodology as a general approach to responsible innovation in autonomous systems.
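A tiny worked example of how two such formal definitions can disagree. Under invented route names and per-group travel times, a utilitarian objective and a Rawlsian maximin objective select different routes:

```python
# Candidate robot routes and the travel time (minutes) each implies for the
# residents of two neighbourhoods; names and numbers are illustrative only.
routes = {
    "via_highway": {"group_a": 4.0, "group_b": 18.0},
    "via_center":  {"group_a": 10.0, "group_b": 14.0},
}

def utilitarian(times):   # minimize total time across groups
    return sum(times.values())

def rawlsian(times):      # maximin: minimize the worst-off group's time
    return max(times.values())

for name, obj in (("utilitarian", utilitarian), ("rawlsian maximin", rawlsian)):
    best = min(routes, key=lambda r: obj(routes[r]))
    print(f"{name}: choose {best}")
```

The utilitarian objective picks via_highway (total 22 vs 24 minutes) while maximin picks via_center (worst-off time 14 vs 18), which is the kind of trade-off the walkthrough in the paper surfaces.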
- M. Brandao, “Moral Autonomy and Equality of Opportunity for Algorithms in Autonomous Vehicles,” in Envisioning Robots in Society: Power, Politics, and Public Space—Proceedings of Robophilosophy 2018, 2018, vol. 311, pp. 302–310.
#fairness
This paper addresses two issues with the development of ethical algorithms for autonomous vehicles. The first is uncertainty in the choice of ethical theories and utility functions. Using notions of moral diversity, normative uncertainty, and autonomy, we argue that each vehicle user should be allowed to choose the ethical views by which the vehicle should act. We then deal with the issue of indirect discrimination in ethical algorithms. Here we argue that equality of opportunity is a helpful concept, which could be applied as an algorithmic constraint to avoid discrimination on the basis of protected characteristics.