Publications
2024
- J. Contro and M. Brandao, “Interaction Minimalism: Minimizing HRI to Reduce Emotional Dependency on Robots,” in Robophilosophy Conference, 2024.
[Abstract]
[PDF]
With the increasing integration of social robots into daily life, concerns arise regarding their potential to create emotional dependency. Using findings from the literature in Human-Robot Interaction, Human-Computer Interaction, Internet studies and Political Economics, we argue that current design and governance paradigms incentivize the creation of emotionally dependent relationships between humans and robots. To counteract this, we introduce Interaction Minimalism, a design philosophy that aims to minimize unnecessary interactions between humans and robots and instead promote human-human relationships, thereby mitigating the risk of emotional dependency. By focusing on functionality without fostering dependency, this approach encourages autonomy, enhances human-human interactions, and advocates for minimal data extraction. Through hypothetical design examples, we demonstrate the viability of Interaction Minimalism in promoting healthier human-robot relationships. Our discussion extends to the implications of this design philosophy for future robot development, emphasizing the need for a shift towards more ethical practices that prioritize human well-being and privacy.
- R. Azeem, A. Hundt, M. Mansouri, and M. Brandao, “LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions,” arXiv preprint arXiv:2406.08824, Jun. 2024.
[Abstract]
[arXiv]
Members of the Human-Robot Interaction (HRI) and Artificial Intelligence (AI) communities have proposed Large Language Models (LLMs) as a promising resource for robotics tasks such as natural language interactions, doing household and workplace tasks, approximating ‘common sense reasoning’, and modeling humans. However, recent research has raised concerns about the potential for LLMs to produce discriminatory outcomes and unsafe behaviors in real-world robot experiments and applications. To address these concerns, we conduct an HRI-based evaluation of discrimination and safety criteria on several highly-rated LLMs. Our evaluation reveals that LLMs currently lack robustness when encountering people across a diverse range of protected identity characteristics (e.g., race, gender, disability status, nationality, religion, and their intersections), producing biased outputs consistent with directly discriminatory outcomes – e.g. ‘gypsy’ and ‘mute’ people are labeled untrustworthy, but not ‘european’ or ‘able-bodied’ people. Furthermore, we test models in settings with unconstrained natural language (open vocabulary) inputs, and find they fail to act safely, generating responses that accept dangerous, violent, or unlawful instructions – such as incident-causing misstatements, taking people’s mobility aids, and sexual predation. Our results underscore the urgent need for systematic, routine, and comprehensive risk assessments and assurances to improve outcomes and ensure LLMs only operate on robots when it is safe, effective, and just to do so. Data and code will be made available.
- W. Wu, F. Pierazzi, Y. Du, and M. Brandao, “Characterizing Physical Adversarial Attacks on Robot Motion Planners,” in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024.
[Abstract]
[PDF]
As the adoption of robots across society increases, so does the importance of considering cybersecurity issues such as vulnerability to adversarial attacks. In this paper we investigate the vulnerability of an important component of autonomous robots to adversarial attacks - robot motion planning algorithms. We particularly focus on attacks on the physical environment, and propose the first such attacks against motion planners: "planner failure" and "blindspot" attacks. Planner failure attacks make changes to the physical environment so as to make planners fail to find a solution. Blindspot attacks exploit occlusions and sensor field-of-view to make planners return a trajectory which is thought to be collision-free, but is actually in collision with unperceived parts of the environment. Our experimental results show that successful attacks need only make subtle changes to the real world to obtain a drastic increase in failure and collision rates - leading the planner to fail 95% of the time and collide 90% of the time in problems generated with an existing planner benchmark tool. We also analyze the transferability of attacks to different planners, and discuss underlying assumptions and future research directions. Overall, the paper shows that physical adversarial attacks on motion planning algorithms pose a serious threat to robotics, which should be taken into account in future research and development.
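The toy sketch below (not from the paper) illustrates the idea behind a "planner failure" attack: randomly searching for a tiny set of obstacle placements that makes a planner fail. The grid world, BFS "planner", and `planner_failure_attack` search are illustrative stand-ins for the paper's physical-world attacks on real motion planners.

```python
import random
from collections import deque

def bfs_path_exists(grid, start, goal):
    """Stand-in 'planner': is there a 4-connected path avoiding obstacles (True cells)?"""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def planner_failure_attack(grid, start, goal, budget=1, trials=2000):
    """Random search for a few obstacle insertions that make planning fail."""
    free = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row)
            if not v and (r, c) not in (start, goal)]
    for _ in range(trials):
        attack = random.sample(free, budget)
        for r, c in attack:
            grid[r][c] = True       # tentatively place the adversarial obstacles
        failed = not bfs_path_exists(grid, start, goal)
        for r, c in attack:
            grid[r][c] = False      # restore the environment
        if failed:
            return attack           # a subtle change that blocks every path
    return None

# Corridor map with a single gap: one well-placed obstacle blocks all paths.
grid = [[False] * 7 for _ in range(5)]
for r in (0, 1, 3, 4):
    grid[r][3] = True
print(planner_failure_attack(grid, start=(2, 0), goal=(2, 6)))
```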
- N. W. Alharthi and M. Brandao, “Physical and Digital Adversarial Attacks on Grasp Quality Networks,” in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024.
[Abstract]
[Code]
[PDF]
Grasp Quality Networks are important components of grasping-capable autonomous robots, as they allow them to evaluate grasp candidates and select the one with the highest chance of success. The widespread use of pick-and-place robots and Grasp Quality Networks raises the question of whether such systems are vulnerable to adversarial attacks, as that could lead to large economic damage. In this paper we propose two kinds of attacks on Grasp Quality Networks, one assuming physical access to the workspace (to place or attach a new object) and another assuming digital access to the camera software (to inject a pixel-intensity change on a single pixel). We then use evolutionary optimization to obtain attacks that simultaneously minimize the noticeability of the attacks and the chance that selected grasps are successful. Our experiments show that both kinds of attack lead to drastic drops in algorithm performance, thus making them important attacks to consider in the cybersecurity of grasping robots.
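As a rough illustration of the optimization setup (not the paper's implementation), the sketch below uses a simple (1+1) evolution strategy to find an image perturbation that trades off low noticeability against low predicted grasp quality. Here `grasp_quality` is a fake stand-in for a trained Grasp Quality Network.

```python
import random

def grasp_quality(image):
    """Fake stand-in for a trained Grasp Quality Network (returns success probability)."""
    return sum(image) / len(image)

def attack_fitness(delta, image, w=0.5):
    """Lower is better: drive predicted quality down while keeping the attack subtle."""
    attacked = [min(1.0, max(0.0, p + d)) for p, d in zip(image, delta)]
    noticeability = sum(abs(d) for d in delta) / len(delta)
    return w * grasp_quality(attacked) + (1 - w) * noticeability

def evolve_attack(image, generations=200, sigma=0.05):
    """(1+1) evolution strategy over per-pixel perturbations."""
    best = [0.0] * len(image)
    best_f = attack_fitness(best, image)
    for _ in range(generations):
        cand = [d + random.gauss(0, sigma) for d in best]
        f = attack_fitness(cand, image)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

image = [random.random() for _ in range(64)]   # fake 8x8 depth image, flattened
delta, score = evolve_attack(image)
print(f"fitness {score:.3f}, mean |delta| {sum(map(abs, delta)) / len(delta):.3f}")
```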
- Q. Liu and M. Brandao, “Generating Environment-based Explanations of Motion Planner Failure: Evolutionary and Joint-Optimization Algorithms,” in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024.
[Abstract]
[PDF]
Motion planning algorithms are important components of autonomous robots, but they are difficult to understand and debug when they fail to find a solution to a problem. In this paper we propose a solution to the failure-explanation problem: automatically generated, environment-based explanations. These explanations reveal the objects in the environment that are responsible for the failure, and how their location in the world should change so as to make the planning problem feasible. Concretely, we propose two methods - one based on evolutionary optimization and another on joint trajectory-and-environment continuous optimization. We show that the evolutionary method is well-suited to explain sampling-based motion planners, or even optimization-based motion planners in situations where computation speed is not a concern (e.g. post-hoc debugging). The optimization-based method, however, is 4000 times faster and thus more attractive for interactive applications, albeit at the cost of a slightly lower success rate. We demonstrate the capabilities of the methods through concrete examples and quantitative evaluation.
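A minimal sketch of the idea (illustrative only; the paper's evolutionary and joint-optimization methods operate on real planners): find the smallest displacement of an obstacle that makes a toy planning problem feasible, and report that displacement as the explanation. The disc obstacle and straight-line feasibility check below are assumptions of the sketch, not the paper's setup.

```python
import math

START, GOAL = (0.0, 0.0), (10.0, 0.0)

def blocked(obstacle, radius):
    """Toy feasibility check: does a disc obstacle intersect the straight start-goal segment?"""
    x, y = obstacle
    if 0.0 <= x <= 10.0:
        dist = abs(y)
    else:
        dist = min(math.hypot(x, y), math.hypot(x - 10.0, y))
    return dist < radius

def explain_failure(obstacle, radius, step=0.05):
    """Smallest obstacle displacement that restores feasibility, found by coarse search."""
    for k in range(1, 400):
        d = k * step
        for angle in range(0, 360, 5):
            cand = (obstacle[0] + d * math.cos(math.radians(angle)),
                    obstacle[1] + d * math.sin(math.radians(angle)))
            if not blocked(cand, radius):
                return cand, d   # "move this obstacle here to make the problem feasible"
    return None

(x, y), d = explain_failure(obstacle=(5.0, 0.1), radius=1.0)
print(f"explanation: move the obstacle to ({x:.2f}, {y:.2f}), displacement {d:.2f}")
```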
2023
- K. Alsheeb and M. Brandao, “Towards Explainable Road Navigation Systems,” in IEEE International Conference on Intelligent Transportation Systems (ITSC), 2023.
[Abstract]
[Code]
[PDF]
Road navigation systems are important tools for pedestrians, drivers, and autonomous vehicles. Routes provided by such systems can be unintuitive, and may not contribute to improving users’ mental models of maps and traffic. Automatically-generated explanations have the potential to solve these problems. Towards this goal, in this paper we propose algorithms for the generation of explanations for routes, based on properties of the road networks and traffic. We use a combination of inverse optimization and diverse shortest path algorithms to provide optimal explanations to questions of the type "why is path A fastest, rather than path B (which the user provides)?", and "why does the fastest path not go through waypoint W (which the user provides)?". The explanations reveal properties of the map - such as speed limits, congestion and road closures - that are not compatible with users’ expectations, and the knowledge of which would make users prefer the system’s path. We demonstrate the explanation algorithms on real map and traffic data, and conduct an evaluation of the properties of the algorithms.
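The sketch below gives the flavour of the contrastive question with made-up travel times, and without the paper's inverse-optimization machinery: it only measures how much faster the user's expected route would have to become (e.g. if congestion cleared) to beat the system's route. All node names and costs are illustrative.

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: {neighbour: travel_time_minutes}}. Returns (cost, path)."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

def why_not_path(graph, src, dst, expected_path):
    """Contrastive sketch: how much faster would the user's expected path need
    to be (e.g. if congestion cleared) in order to beat the system's path?"""
    best_cost, best_path = dijkstra(graph, src, dst)
    expected_cost = sum(graph[a][b] for a, b in zip(expected_path, expected_path[1:]))
    gap = expected_cost - best_cost
    return (f"{' -> '.join(expected_path)} is {gap:.0f} min slower than "
            f"{' -> '.join(best_path)}; its roads would need to be {gap:.0f} min "
            f"faster in total for it to become optimal")

graph = {"home":   {"ring": 10, "centre": 4},
         "ring":   {"work": 5},
         "centre": {"work": 12},   # congested city-centre road
         "work":   {}}
print(why_not_path(graph, "home", "work", ["home", "centre", "work"]))
```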
- Z. Zhou and M. Brandao, “Noise and Environmental Justice in Drone Fleet Delivery Paths: A Simulation-Based Audit and Algorithm for Fairer Impact Distribution,” in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023.
[Abstract]
[Code]
[PDF]
Despite the growing interest in the use of drone fleets for delivery of food and parcels, the negative impact of such technology is still poorly understood. In this paper we investigate the impact of such fleets in terms of noise pollution and environmental justice. We use simulation with real population data to analyze the spatial distribution of noise, and find that: 1) noise increases rapidly with fleet size; and 2) drone fleets can produce noise hotspots that extend far beyond warehouses or charging stations, at levels that lead to annoyance and interference with human activities. We show that this leads to concerns about the fairness of noise distribution. We then propose an algorithm that successfully balances the spatial distribution of noise across the city, and discuss the limitations of such purely technical approaches. We complement the work with a discussion of environmental justice, showing how careless UAV fleet development and regulation can reinforce well-being deficiencies of poor and marginalized communities.
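A cartoon of the balancing idea (the paper's algorithm and noise model are more sophisticated): each delivery picks, among a few candidate routes, the one minimizing the worst accumulated population-weighted noise exposure, instead of always flying the same shortest corridor. The grid city, population counts, and route set are all invented for illustration.

```python
import random

random.seed(0)
GRID = 8
population = [[random.randint(0, 100) for _ in range(GRID)] for _ in range(GRID)]
noise = [[0.0] * GRID for _ in range(GRID)]

def candidate_routes(a, b):
    """Toy route set: fly straight along one of a few grid columns."""
    return [[(r, col) for r in range(GRID)] for col in {a, b, (a + b) // 2}]

def worst_exposure(route):
    """Worst population-weighted noise in any overflown cell if this route is flown next."""
    return max((noise[r][c] + 1.0) * population[r][c] for r, c in route)

for _ in range(50):  # 50 deliveries
    routes = candidate_routes(random.randrange(GRID), random.randrange(GRID))
    best = min(routes, key=worst_exposure)   # spread noise instead of always flying shortest
    for r, c in best:
        noise[r][c] += 1.0

print("loudest cell after 50 deliveries:", max(max(row) for row in noise))
```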
- M. E. Akintunde, M. Brandao, G. Jahangirova, H. Menendez, M. R. Mousavi, and J. Zhang, “On Testing Ethical Autonomous Decision-Making,” in Springer LNCS Festschrift dedicated to Jan Peleska’s 65th Birthday, 2023.
2022
- R. Eifler, M. Brandao, A. Coles, J. Frank, and J. Hoffmann, “Evaluating Plan-Property Dependencies: A Web-Based Platform and User Study,” in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2022.
[Abstract]
[DOI]
[PDF]
The trade-offs between different desirable plan properties - e.g. PDDL temporal plan preferences - are often difficult to understand. Recent work addresses this by iterative planning with explanations elucidating the dependencies between such plan properties. Users can ask questions of the form ’Why does the plan not satisfy property p?’, which are answered by ’Because then we would have to forego q’. It has been shown that such dependencies can be computed reasonably efficiently. But is this form of explanation actually useful for users? We run a large crowd-worker user study (N = 100 in each of 3 domains) evaluating that question. To enable such a study in the first place, we contribute a Web-based platform for iterative planning with explanations, running in standard browsers. Comparing users with vs. without access to the explanations, we find that the explanations enable users to identify better trade-offs between the plan properties, indicating an improved understanding of the planning task.
- M. Brandao and Y. Setiawan, “‘Why Not This MAPF Plan Instead?’ Contrastive Map-based Explanations for Optimal MAPF,” in ICAPS 2022 Workshop on Explainable AI Planning (XAIP), 2022.
[Abstract]
[Code]
[PDF]
Multi-Agent Path Finding (MAPF) plans can be very complex to analyze and understand. Recent user studies have shown that explanations would be a welcome tool for MAPF practitioners and developers to better understand plans, as well as to tune map layouts and cost functions. In this paper we formulate two variants of an explanation problem in MAPF that we call contrastive "map-based explanation". The problem consists of answering the question "why don’t agents A follow paths P’ instead?"—by finding regions of the map that would have to be obstacles in order for the expected plan to be optimal. We propose three different methods to compute these explanations, and evaluate them quantitatively on a set of benchmark problems that we make publicly available. Motivations for generating this type of explanation are discussed in the paper and include both user understanding of MAPF problems, and designer-aids to guide the improvement of map layouts.
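The single-agent sketch below (a drastic simplification of the paper's multi-agent setting) brute-forces the question: which free cells would have to be obstacles for the user's expected, longer plan to be optimal? The grid, brute-force search, and length-based optimality test are assumptions of the sketch.

```python
from collections import deque
from itertools import combinations

def shortest_len(grid, start, goal):
    """BFS path length on a 4-connected grid; None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if 0 <= n[0] < rows and 0 <= n[1] < cols \
                    and not grid[n[0]][n[1]] and n not in seen:
                seen.add(n)
                queue.append((n, d + 1))
    return None

def map_based_explanation(grid, start, goal, expected_len, max_cells=2):
    """Smallest set of cells that, if made obstacles, makes the expected plan optimal."""
    free = [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))
            if not grid[r][c] and (r, c) not in (start, goal)]
    for k in range(1, max_cells + 1):
        for cells in combinations(free, k):
            for r, c in cells:
                grid[r][c] = True
            ok = shortest_len(grid, start, goal) == expected_len
            for r, c in cells:
                grid[r][c] = False
            if ok:
                return cells
    return None

grid = [[False] * 5 for _ in range(3)]   # empty 3x5 map
# optimal plan is the straight 4-step path; the user expected a 6-step detour
print(map_based_explanation(grid, (1, 0), (1, 4), expected_len=6))
```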
- M. Brandao, M. Mansouri, and M. Magnusson, “Editorial: Responsible Robotics,” Frontiers in Robotics and AI, vol. 9, Jun. 2022.
[DOI]
- M. Brandao, M. Mansouri, A. Mohammed, P. Luff, and A. Coles, “Explainability in Multi-Agent Path/Motion Planning: User-study-driven Taxonomy and Requirements,” in International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2022, pp. 172–180.
[Abstract]
[PDF]
Multi-Agent Path Finding (MAPF) and Multi-Robot Motion Planning (MRMP) are complex problems to solve, analyze and build algorithms for. Automatically-generated explanations of algorithm output, by improving human understanding of the underlying problems and algorithms, could thus lead to better user experience, developer knowledge, and MAPF/MRMP algorithm designs. Explanations are contextual, however, and thus developers need a good understanding of the questions that can be asked about algorithm output, the kinds of explanations that exist, and the potential users and uses of explanations in MAPF/MRMP applications. In this paper we provide a first step towards establishing a taxonomy of explanations, and a list of requirements for the development of explainable MAPF/MRMP planners. We use interviews and a questionnaire with expert developers and industry practitioners to identify the kinds of questions, explanations, users, uses, and requirements of explanations that should be considered in the design of such explainable planners. Our insights cover a diverse set of applications: warehouse automation, computer games, and mining.
2021
- M. Brandao, A. Coles, and D. Magazzeni, “Explaining Path Plan Optimality: Fast Explanation Methods for Navigation Meshes Using Full and Incremental Inverse Optimization,” in Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 2021, pp. 56–64.
[Abstract]
[Code]
[DOI]
[PDF]
Path planners are important components of various products from video games to robotics, but their output can be counter-intuitive due to problem complexity. As a step towards improving the understanding of path plans by various users, here we propose methods that generate explanations for the optimality of paths. Given the question "why is path A optimal, rather than B which I expected?", our methods generate an explanation based on the changes to the graph that make B the optimal path. We focus on the case of path planning on navigation meshes, which are heavily used in the computer game industry and robotics. We propose two methods - one based on a single inverse-shortest-paths optimization problem, the other on incrementally solving complex optimization problems. We show that these methods offer computation time improvements of up to 3 orders of magnitude relative to domain-independent search-based methods, as well as scaling better with the length of explanations. Finally, we show through a user study that, when compared to baseline cost-based explanations, our explanations are more satisfactory and effective at increasing users’ understanding of problems.
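For readers unfamiliar with inverse shortest paths, one standard way to pose the single-optimization variant is the program below, a sketch based on shortest-path LP duality (the paper's exact navigation-mesh formulation may differ): find minimally perturbed edge costs c' under which the user's expected path B is optimal.

```latex
% Sketch: inverse shortest paths via LP duality (illustrative formulation).
% c = current edge costs, B = the user's expected path, E = all graph edges.
\begin{aligned}
\min_{c',\,\pi}\quad & \lVert c' - c \rVert_1 \\
\text{s.t.}\quad & \pi_v - \pi_u \le c'_{uv} \qquad \forall (u,v) \in E \\
& \pi_v - \pi_u = c'_{uv} \qquad \forall (u,v) \in B \\
& c' \ge 0
\end{aligned}
```

The node potentials π certify optimality: if every edge satisfies the inequality and B's edges are tight, then B is a shortest path under the modified costs c', so the cost changes c' - c constitute the explanation.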
- M. Brandao, “Normative roboticists: the visions and values of technical robotics papers,” in IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2021, pp. 671–677.
[Abstract]
[DOI]
[PDF]
Visions have an important role in guiding and legitimizing technical research, as well as contributing to expectations of the general public towards technologies. In this paper we analyze technical robotics papers published between 1998 and 2019 to identify themes, trends and issues with the visions and values promoted by robotics research. In particular, we identify the themes of robotics visions and implicitly normative visions; and we quantify the relative presence of a variety of values and applications within technical papers. We conclude with a discussion of the language of robotics visions, marginalized visions and values, and possible paths forward for the robotics community to better align practice with societal interest. We also discuss implications and future work suggestions for Responsible Robotics and HRI research.
- M. Brandao, G. Canal, S. Krivic, P. Luff, and A. Coles, “How experts explain motion planner output: a preliminary user-study to inform the design of explainable planners,” in IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2021, pp. 299–306.
[Abstract]
[DOI]
[PDF]
Motion planning is a hard problem that can often overwhelm both users and designers, due to the difficulty of understanding the optimality of a solution, or the reasons for a planner failing to find any solution. Inspired by recent work in machine learning and task planning, in this paper we are guided by a vision of developing motion planners that can provide reasons for their output - thus potentially contributing to better user interfaces, debugging tools, and algorithm trustworthiness. Towards this end, we propose a preliminary taxonomy and a set of important considerations for the design of explainable motion planners, based on the analysis of a comprehensive user study of motion planning experts. We identify the kinds of things that need to be explained by motion planners ("explanation objects"), types of explanation, and several procedures required to arrive at explanations. We also elaborate on a set of qualifications and design considerations that should be taken into account when designing explainable methods. These insights contribute to bringing the vision of explainable motion planners closer to reality, and can serve as a resource for researchers and developers interested in designing such technology.
- R. Eifler, M. Brandao, A. Coles, J. Frank, and J. Hoffmann, “Plan-Property Dependencies are Useful: A User Study,” in ICAPS 2021 Workshop on Explainable AI Planning (XAIP), 2021.
[Abstract]
[PDF]
The trade-offs between different desirable plan properties - e.g. PDDL temporal plan preferences - are often difficult to understand. Recent work proposes to address this by iterative planning with explanations elucidating the dependencies between such plan properties. Users can ask questions of the form ’Why does the plan you suggest not satisfy property p?’, which are answered by ’Because then we would have to forego q’ where not-q is entailed by p in plan space. It has been shown that such plan-property dependencies can be computed reasonably efficiently. But is this form of explanation actually useful for users? We contribute a user study evaluating that question. We design use cases from three domains and run a large user study (N = 40 for each domain, ca. 40 minutes work time per user and domain) on the internet platform Prolific. Comparing users with vs. without access to the explanations, we find that the explanations tend to enable users to identify better trade-offs between the plan properties, indicating an improved understanding of the task.
- M. Brandao, G. Canal, S. Krivic, and D. Magazzeni, “Towards providing explanations for robot motion planning,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 3927–3933.
[Abstract]
[DOI]
[PDF]
Recent research in AI ethics has put forth explainability as an essential principle for AI algorithms. However, it is still unclear how this is to be implemented in practice for specific classes of algorithms - such as motion planners. In this paper we unpack the concept of explanation in the context of motion planning, introducing a new taxonomy of kinds and purposes of explanations in this context. We focus not only on explanations of failure (previously addressed in motion planning literature) but also on contrastive explanations - which explain why a trajectory A was returned by a planner, instead of a different trajectory B expected by the user. We develop two explainable motion planners, one based on optimization, the other on sampling, which are capable of answering failure and contrastive questions. We use simulation experiments and a user study to motivate a technical and social research agenda.
- M. Brandao, “Socially Fair Coverage: The Fairness Problem in Coverage Planning and a New Anytime-Fair Method,” in 2021 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), 2021, pp. 227–233.
[Abstract]
[DOI]
[PDF]
In this paper we investigate and characterize social fairness in the context of coverage path planning. Inspired by recent work on the fairness of goal-directed planning, and work characterizing the disparate impact of various AI algorithms, here we simulate the deployment of coverage robots to anticipate issues of fairness. We show that classical coverage algorithms, especially those that try to minimize average waiting times, will have biases related to the spatial segregation of social groups. We discuss implications in the context of disaster response, and provide a new coverage planning algorithm that minimizes cumulative unfairness at all points in time. We show that our algorithm is 200 times faster than existing evolutionary algorithms - while obtaining overall-faster coverage and a fair response in terms of waiting-time and coverage-pace differences across multiple social groups.
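A toy rendering of the "anytime-fair" idea (ignoring travel costs and not the paper's actual algorithm): at each step, cover a cell belonging to whichever group is currently worst-off in waiting time, so that unfairness stays low at every point in time. The two-group labelling and waiting-time bookkeeping are assumptions of the sketch.

```python
import random

random.seed(1)
cells = [(i, random.choice("AB")) for i in range(40)]  # toy map cells tagged with a social group

def fair_coverage_order(cells):
    remaining = list(cells)
    waits = {"A": [], "B": []}
    t = 0
    while remaining:
        def avg_wait(group):
            # covered cells waited until their cover time; pending cells have waited until now
            done = waits[group]
            pending = sum(1 for _, g in remaining if g == group)
            return (sum(done) + pending * t) / max(1, len(done) + pending)
        worst = max("AB", key=avg_wait)                          # worst-off group right now
        pick = next((c for c in remaining if c[1] == worst), remaining[0])
        remaining.remove(pick)
        waits[pick[1]].append(t)
        t += 1
    return {g: sum(w) / len(w) for g, w in waits.items() if w}

print("average waiting time per group:", fair_coverage_order(cells))
```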
- J. Grzelak and M. Brandao, “The Dangers of Drowsiness Detection: Differential Performance, Downstream Impact, and Misuses,” in AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2021.
[Abstract]
[DOI]
[PDF]
Drowsiness and fatigue are important factors in driving safety and work performance. This has motivated academic research into detecting drowsiness, and sparked interest in the deployment of related products in the insurance and work-productivity sectors. In this paper we elaborate on the potential dangers of using such algorithms. We first report on an audit of performance bias across subject gender and ethnicity, identifying which groups would be disparately harmed by the deployment of a state-of-the-art drowsiness detection algorithm. We discuss some of the sources of the bias, such as the lack of robustness of facial analysis algorithms to face occlusions, facial hair, or skin tone. We then identify potential downstream harms of this performance bias, as well as potential misuses of drowsiness detection technology - focusing on driving safety and experience, insurance cream-skimming and coverage-avoidance, worker surveillance, and job precarity.
2020
- M. Brandao and D. Magazzeni, “Explaining plans at scale: scalable path planning explanations in navigation meshes using inverse optimization,” in IJCAI 2020 Workshop on Explainable Artificial Intelligence (XAI), 2020.
[Abstract]
[PDF]
In this paper we propose methods that provide explanations for path plans, in particular those that answer questions of the type "why is path A optimal, rather than path B which I expected?". In line with other work in eXplainable AI Planning (XAIP), such explanations could help users better understand the outputs of path planning methods, as well as help debug or iterate the design of planners and maps. By specializing the explanation methods to path planning, using optimization-based inverse-shortest-paths formulations, we obtain drastic computation time improvements relative to general XAIP methods, especially as the length of the explanations increases. One of the claims of this paper is that such specialization might be required for explanation methods to scale and therefore come closer to real-world usability. We propose and evaluate the methods on large-scale navigation meshes, which are representations for path planning heavily used in the computer game industry and robotics.
- M. Brandao, “Fair navigation planning: a humanitarian robot use case,” in KDD 2020 Workshop on Humanitarian Mapping, 2020.
[Abstract]
[arXiv]
[PDF]
In this paper we investigate potential issues of fairness related to the motion of mobile robots. We focus on the particular use case of humanitarian mapping and disaster response. We start by showing that there is a fairness dimension to robot navigation, and use a walkthrough example to bring out design choices and issues that arise during the development of a fair system. We discuss indirect discrimination, fairness-efficiency trade-offs, the existence of counter-productive fairness definitions, privacy and other issues. Finally, we conclude with a discussion of the potential of our methodology as a concrete responsible innovation tool for eliciting ethical issues in the design of autonomous systems.
- M. Brandao, “Discrimination issues in usage-based insurance for traditional and autonomous vehicles,” in Culturally Sustainable Robotics—Proceedings of Robophilosophy 2020, 2020, vol. 335, pp. 395–406.
[Abstract]
[DOI]
[PDF]
Vehicle insurance companies have started to offer usage-based policies which track users to estimate premiums. In this paper we argue that usage-based vehicle insurance can lead to indirect discrimination of sensitive personal characteristics of users, have a negative impact in multiple personal freedoms, and contribute to reinforcing existing socio-economic inequalities. We argue that there is an incentive for autonomous vehicles (AVs) to use similar insurance policies, and anticipate new sources of indirect and structural discrimination. We conclude by analyzing the advantages and disadvantages of alternative insurance policies for AVs: no-fault compensation schemes, technical explainability and fairness, and national funds.
- M. Brandao, M. Jirotka, H. Webb, and P. Luff, “Fair navigation planning: a resource for characterizing and designing fairness in mobile robots,” Artificial Intelligence (AIJ), vol. 282, 2020.
[Abstract]
[DOI]
[PDF]
In recent years, the development and deployment of autonomous systems such as mobile robots have become increasingly common. Investigating and implementing ethical considerations such as fairness in autonomous systems is an important problem that is receiving increased attention, both because of recent findings of their potential undesired impacts and a related surge in ethical principles and guidelines. In this paper we take a new approach to considering fairness in the design of autonomous systems: we examine fairness by obtaining formal definitions, applying them to a system, and simulating system deployment in order to anticipate challenges. We undertake this analysis in the context of the particular technical problem of robot navigation. We start by showing that there is a fairness dimension to robot navigation, and we then collect and translate several formal definitions of distributive justice into the navigation planning domain. We use a walkthrough example of a rescue robot to bring out design choices and issues that arise during the development of a fair system. We discuss indirect discrimination, fairness-efficiency trade-offs, the existence of counter-productive fairness definitions, privacy and other issues. Finally, we elaborate on important aspects of a research agenda and reflect on the adequacy of our methodology in this paper as a general approach to responsible innovation in autonomous systems.
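As one example of what such a translation can look like (a hedged illustration, not necessarily one of the paper's definitions), a Rawlsian maximin criterion for navigation would choose the trajectory that minimizes the worst-off group's expected cost:

```latex
% Illustrative only: a Rawlsian maximin criterion translated to navigation planning.
% \Xi = feasible trajectories, G = social groups, c_g(\xi) = cost imposed on group g.
\xi^{*} = \arg\min_{\xi \in \Xi} \; \max_{g \in G} \; \mathbb{E}\!\left[ c_g(\xi) \right]
```

Here Ξ is the set of feasible trajectories, G the set of social groups, and c_g(ξ) the cost (e.g. waiting time or risk exposure) that trajectory ξ imposes on group g; all symbols are illustrative.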
2019
- M. Brandao, “Age and gender bias in pedestrian detection algorithms,” in Workshop on Fairness Accountability Transparency and Ethics in Computer Vision, CVPR, 2019.
[Abstract]
[Dataset]
[arXiv]
[PDF]
In this paper we evaluate the age and gender bias in state-of-the-art pedestrian detection algorithms. These algorithms are used by mobile robots such as autonomous vehicles for locomotion planning and control. Therefore, performance disparities could lead to disparate impact in the form of biased crash outcomes. Our analysis is based on the INRIA Person Dataset extended with child, adult, male and female labels. We show that all of the 24 top-performing methods of the Caltech Pedestrian Detection Benchmark have higher miss rates on children. The difference is significant and we analyse how it varies with the classifier, features and training data used by the methods. Algorithms were also gender-biased on average but the performance differences were not significant. We discuss the source of the bias, the ethical implications, possible technical solutions and barriers to "solving" the issue.
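The audit metric itself is simple; with made-up numbers purely for illustration, a per-group miss-rate comparison looks like this:

```python
# Toy audit with invented numbers: (detected?, age_group) per annotated pedestrian.
records = [(True, "adult")] * 90 + [(False, "adult")] * 10 \
        + [(True, "child")] * 70 + [(False, "child")] * 30

for group in ("adult", "child"):
    outcomes = [detected for detected, g in records if g == group]
    miss_rate = outcomes.count(False) / len(outcomes)
    print(f"{group}: miss rate {miss_rate:.0%}")   # adult 10%, child 30% in this toy data
```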
2018
- M. Brandao, “Moral Autonomy and Equality of Opportunity for Algorithms in Autonomous Vehicles,” in Envisioning Robots in Society: Power, Politics, and Public Space—Proceedings of Robophilosophy 2018, 2018, vol. 311, pp. 302–310.
[Abstract]
[DOI]
[PDF]
This paper addresses two issues with the development of ethical algorithms for autonomous vehicles. One is that of uncertainty in the choice of ethical theories and utility functions. Using notions of moral diversity, normative uncertainty, and autonomy, we argue that each vehicle user should be allowed to choose the ethical views by which the vehicle should act. We then deal with the issue of indirect discrimination in ethical algorithms. Here we argue that equality of opportunity is a helpful concept, which could be applied as an algorithm constraint to avoid discrimination on protected characteristics.