
Distributed Artificial Intelligence Group Seminars

View the DAI group research webpage https://www.kcl.ac.uk/research/dai

If you would like to receive regular emails about our seminars, or for any questions or feedback, please contact Stephen Asiedu.

The Faculty of Natural & Mathematical Sciences at King’s College London has a Code of Conduct which we expect participants at our events to abide by. This is intended to ensure an inclusive and productive environment and can be read here.

Past Seminars

Speaker Sriram Bharadwaj Rangarajan (King's College London)
Topic Financial Market Mechanism Design using Incentive Aware Agent Based Models
Date, Time 07.04.2025, 14:00 - 15:00

Financial markets are among the most complex real-world systems to model, owing to the intricate strategic interactions between participants and the surrounding market environment, coupled with limited transparency into individual agent behaviours. A central challenge in this domain is to model and analyse how market participants adapt their strategies in response to different trading mechanisms and regulatory interventions. The overarching objective is to design financial markets that are resistant to malicious behaviour. Agent-based models (ABMs) offer a powerful framework for simulating these heterogeneous, strategic interactions. In this talk, we present an incentive-aware ABM framework that integrates empirical game-theoretic analysis (EGTA) to investigate the strategic responses of agents to market design choices and regulatory policies. We will discuss two key applications of this framework that we have been working on. First, we introduce a flash crash circuit breaker design using our proposed framework that leverages adversarial conditions to evaluate and mitigate systemic risks. Second, we present a hybrid Frequent Call Market (FCM)–Continuous Double Auction (CDA) mechanism that can potentially transition between periodic and continuous trading based on market conditions, thereby improving liquidity and price efficiency. These applications demonstrate the potential of ABM + EGTA as a practical instrument for designing more efficient and manipulation-resistant financial markets.


Speaker Zhuohan Wang
Topic A Financial Time Series Denoiser Based on Diffusion Models
Date, Time 24.03.2025, 14:00 - 15:00

Financial time series often exhibit a low signal-to-noise ratio, posing significant challenges for accurate data interpretation, prediction, and, ultimately, decision-making. Generative models have gained attention as powerful tools for simulating and predicting intricate data patterns, with diffusion models emerging as particularly effective methods. This paper introduces a novel approach utilizing a diffusion model as a denoiser for financial time series in order to improve data predictability and trading performance. By leveraging the forward and reverse processes of a conditional diffusion model to add and remove noise progressively, we reconstruct original data from noisy inputs. Our extensive experiments demonstrate that diffusion model-based denoised time series significantly enhance the performance on downstream future return classification tasks. Moreover, trading signals derived from the denoised data yield more profitable trades with fewer transactions, thereby minimizing transaction costs and increasing overall trading efficiency. Finally, we show that by using classifiers trained on denoised time series, we can recognize how noisy the market is and obtain excess returns.


Speaker Ziyan Wang
Topic Policy Learning from Tutorial Books via Understanding, Rehearsing and Introspecting
Date, Time 10.02.2025, 14:00 - 15:00

When humans need to learn a new skill, we can acquire knowledge through written books, including textbooks, tutorials, etc. However, current research on decision-making, like reinforcement learning (RL), has primarily required numerous real interactions with the target environment to learn a skill, while failing to utilize the existing knowledge already summarized in text. The success of Large Language Models (LLMs) sheds light on utilizing such knowledge behind the books. In this paper, we discuss a new policy learning problem called Policy Learning from tutorial Books (PLfB), built on the shoulders of LLM systems, which aims to leverage rich resources such as tutorial books to derive a policy network. Inspired by how humans learn from books, we solve the problem via a three-stage framework: Understanding, Rehearsing, and Introspecting (URI). In particular, it first rehearses decision-making trajectories based on the knowledge derived from understanding the books, then introspects on the imaginary dataset to distill a policy network. We build two benchmarks for PLfB based on Tic-Tac-Toe and Football games. In the experiments, URI’s policy achieves a minimum of 44% net winning rate against GPT-based agents without any real data. In the much more complex football game, URI’s policy beats the built-in AIs with a 37% winning rate, while GPT-based agents only achieve a 6% winning rate. The project page: plfb-football.github.io.


Speaker Colin Cleveland (King's College London)
Topic Optimal Candidate Positioning in Multi-Issue Elections
Date, Time 27.01.2025, 14:00 - 15:00

We investigate voting scenarios (elections) in a multidimensional issue space, represented as R^d, where voters’ preferences are modelled by points in this space. Voters cast their vote for the candidate whose position is closest to their preferred point, according to a distance measure induced by a norm on R^d. Candidates strategically position themselves within this space to maximise an objective function, typically aiming to maximise the number of votes received or their ranking among the candidates.

We demonstrate that, for an arbitrary number of issues (the dimensionality of the issue space), determining the optimal position for a candidate is NP-hard even when competing against only one candidate. However, when the number of issues is bounded, we prove that this problem becomes polynomial-time solvable for several of its variants.
Furthermore, we provide initial approximation algorithms for both single-candidate and multi-candidate versions of our problem and analyse the applicability of our findings to other electoral models. We find that our model results extend naturally to general scoring systems, including k-approval and Borda voting.
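To make the voting rule above concrete, here is a small sketch (using NumPy, the Euclidean norm, and hypothetical positions) of how votes are tallied when each voter supports the nearest candidate in the issue space:

```python
import numpy as np

def tally_votes(voters, candidates):
    """Each voter (a row of `voters`) votes for the nearest candidate under
    the Euclidean norm; ties go to the lower-indexed candidate."""
    # Pairwise distances, shape (n_voters, n_candidates)
    dists = np.linalg.norm(voters[:, None, :] - candidates[None, :, :], axis=2)
    choices = dists.argmin(axis=1)
    return np.bincount(choices, minlength=len(candidates))

# Three voters and two candidates in a 2-dimensional issue space
voters = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 0.2]])
candidates = np.array([[0.0, 0.1], [1.0, 0.9]])
print(tally_votes(voters, candidates))  # → [1 2]
```

A strategically positioning candidate would then search over their own row of `candidates` to maximise their entry of this tally; the hardness results above concern exactly that optimisation.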


Speaker Sebastian Homrighausen (Aarhus University)
Topic Estimating the Social Welfare and Cost of Random Serial Dictatorship
Date, Time 02.12.2024, 14:00 - 15:00

In the widely known assignment problem, n agents are to be matched to n items. The algorithm receives a preference ranking of each agent over all items. We concern ourselves with two settings: in the first, we assume agents have an underlying (positive) cardinal valuation for the items; in the second, agents have a metric cost function underlying their preferences. Our focus will be on the Random Serial Dictatorship mechanism: after choosing a random permutation of the agent indices, we allocate to each agent, in the order of this permutation, their highest-ranked remaining item. The goal is to estimate the social welfare and social cost (in their respective settings) using a (relatively) small sample size, after having shown a hardness result that leaves us without the option of an exact result.
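A minimal sketch of the Random Serial Dictatorship mechanism as described above, with illustrative agent and item names:

```python
import random

def random_serial_dictatorship(preferences, rng=random):
    """preferences[i] is agent i's ranking of items, best first.
    Returns {agent: item} after agents pick in a random order."""
    order = list(preferences)
    rng.shuffle(order)                      # random permutation of the agents
    remaining = {item for ranking in preferences.values() for item in ranking}
    allocation = {}
    for agent in order:                     # serial dictatorship along the permutation
        pick = next(item for item in preferences[agent] if item in remaining)
        allocation[agent] = pick
        remaining.remove(pick)
    return allocation

prefs = {"u": ["a", "b", "c"], "v": ["a", "c", "b"], "w": ["b", "a", "c"]}
print(random_serial_dictatorship(prefs, random.Random(0)))
```

Estimating the expected social welfare of this mechanism then amounts to averaging the welfare of allocations produced by a (relatively small) sample of random permutations, which is what the talk's sample-complexity results are about.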


Speaker Lingxiao Zhao (King's College London)
Topic Equilibria of Carbon Allowance Auctions: Emissions and Productivity
Date, Time 11.11.2024, 14:00 - 15:00

The Emissions Trading System (ETS) is a market-oriented policy aimed at regulating and diminishing greenhouse gas emissions by allocating and trading carbon allowances. Previous studies have mainly focused on dynamic model simulations, while the overall equilibrium state of ETS systems has yet to be explored. To this end, this paper proposes an empirical agent-based model to analyse European carbon allowance auctions: within the ETS framework, energy companies adopt different strategies to interact in the primary carbon auction market. We use two different methods, partial equilibrium analysis and role-symmetric game analysis, to simplify the model's strategy space. We then apply the alpha-rank algorithm to determine the model's equilibrium strategy and conduct an in-depth analysis of the combination of these strategies. We examine carbon output levels under these conditions and find that the ETS framework effectively reduces carbon emissions across the system. We also explore the impact of different simplification methods and auction formats on the ETS market: our results indicate that role-symmetric game analysis has better payoff performance; in addition, uniform auctions improve production efficiency, while discriminatory auctions successfully allocate resources, leading to fairer market competition.


Speaker Glen Berman (Australian National University (ANU))
Topic Constructing the AI research field: studies of AI researchers and infrastructures
Date, Time 05.11.2024, 13:00 - 14:00

Within the resource-constrained science system, the AI research field stands out as a site of significant national funding, university and industry investment, and media interest. As such, legitimisation as an AI researcher brings material and symbolic rewards, and demarcation of the AI research field from other fields of scientific inquiry is an ongoing and highly contested project. In this talk I will draw on two studies, one recently completed and one just beginning, to reflect on the dynamics informing the AI field’s development and its underlying normative commitments. The first is a recently completed interview study (n = 90) of academics affiliated with AI-branded research organisations in the UK, US, and Australia, which Kate Williams (University of Melbourne), Eliel Cohen (KCL’s Policy Institute), and I undertook. The study draws on the sociology of expertise and studies of research infrastructures to develop three conceptual frames—vertical scaling, horizontal scaling, and dimensionality—to explain the processes through which a seemingly coherent AI research field is emerging (paper under review, but can be shared). The second study is a new project, which James Smithies (formerly King’s Digital Lab, now ANU) and I are undertaking, focused on reflexively prototyping the adoption of AI-based technologies in the humanities. I will draw on observations from the initial phases of this project to further substantiate and develop my conceptualisation of the AI field as a fluid space that—through vertical and horizontal scaling, and dimensionality—leverages the boundary zone between several overlapping field arrangements.

Bio:

Glen Berman is a final-year PhD student at the College of Engineering and Computer Science at Australian National University (ANU). Glen works at the intersection of Infrastructure Studies and Responsible Artificial Intelligence (AI), focusing on the construction of AI as a research field and practice. Through active collaborations with sociologists of expertise, computer scientists, and human-computer interaction researchers within and outside of academia, Glen contributes a sociological lens to the development of Responsible AI interventions. Glen’s research has been published in Big Data & Society and presented at the CHI Conference on Human Factors in Computing Systems and the AAAI/ACM Conference on AI, Ethics, and Society. Alongside his PhD research, Glen is also a Senior Research Officer on the AI as Infrastructure project at the HASS Digital Research Hub at ANU, and was previously a Student Researcher at Google Research’s Responsible AI and Human-Centered Technology research group. Prior to commencing his PhD, Glen completed a Masters in Applied Cybernetics at ANU. Before that, Glen helped lead technology-driven social change organisations in Australia and the United States.


Speaker Jordan Penn (King's College London)
Topic Optimal Partial Identification of Causal Effects with Mostly Invalid Instruments.
Date, Time 28.10.2024, 14:00 - 15:00

Instrumental variables (IVs) are widely used to estimate causal effects in the presence of unobserved confounding between an "exposure" and an "outcome". An IV must affect the outcome exclusively through the exposure and be unconfounded with the outcome. I will present a framework for relaxing either or both of these strong assumptions with tuneable and interpretable budget constraints. I will present BudgetIV, an algorithm that returns a feasible set of causal effects that can be identified exactly given relevant covariance parameters. The feasible set may be disconnected but is a finite union of convex subsets. I will discuss conditions under which this set is sharp, i.e., contains all and only effects consistent with the background assumptions and the joint distribution of observable variables. The method applies to a wide class of semiparametric models, and simulated experiments demonstrate how its ability to select specific subsets of instruments confers an advantage over convex relaxations in both linear and nonlinear settings. For uncertainty quantification, this algorithm is adapted to form confidence sets that are asymptotically valid under a common statistical assumption from the population genetics and epidemiology literature.

These results are joint work with Dr David Watson at KCL and Dr Lee Gunderson, Dr Gecia Bravo-Hersdorff, and Prof. Ricardo Silva at UCL.


Speaker Desmond Chan (King's College London)
Topic Asymptotic Extinction in Large Coordination Games
Date, Time 07.10.2024, 14:00 - 15:00

We study the exploration-exploitation tradeoff in multiplayer, normal form games under Q-Learning, a common learning framework for multi-agent reinforcement learning. Q-Learning is known to have two potential shortcomings: (1) non-convergence and (2) the equilibrium selection problem, where multiple equilibria exist and the equilibrium that learning agents reach depends on the initial conditions.
In this talk, we will study the typical behaviours that arise from Q-Learning over normal form games in the large-action and many-player limit. Payoff matrices are randomly generated, players are initialised with random strategies, and the emerging learning dynamics are studied. In this limit, we show that the critical exploration rate required for convergence to a unique fixed point in a zero-sum game tends to half that of an identical-payoff potential game. Alongside this, we provide a structural result: the unique fixed point of Q-Learning tends to the boundary of the simplex of the action space in coordination games as the number of actions increases, a phenomenon we term asymptotic extinction, in which a constant fraction of the actions are played with zero probability at a rate o(1/N) for an N-action game.
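To make the setting concrete, here is a hypothetical sketch of smoothed (Boltzmann) Q-learning dynamics in a 2-action coordination game; it is not the speaker's model, but it illustrates both exploration (the temperature) and equilibrium selection (which pure equilibrium is reached depends on the random initialisation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])             # row payoffs in a 2-action coordination game

def softmax(q, temp):
    z = np.exp(q / temp)
    return z / z.sum()

Qx, Qy = rng.random(2), rng.random(2)  # random initial Q-values for both players
alpha, temp = 0.1, 0.5                 # learning rate and exploration temperature
for _ in range(5000):
    x, y = softmax(Qx, temp), softmax(Qy, temp)
    # Each Q-value relaxes towards the expected payoff of its action
    # against the opponent's current softmax (Boltzmann) strategy.
    Qx += alpha * (A @ y - Qx)
    Qy += alpha * (A.T @ x - Qy)

print(softmax(Qx, temp).round(3))      # settles near one of the two pure equilibria
```

At this low temperature the dynamics lock onto a near-pure strategy; raising the temperature past a critical value instead yields a unique interior fixed point, which is the regime the talk's critical-exploration-rate result describes.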


Speaker Liang Zheng (Australian National University (ANU))
Topic The many meanings with image pairs
Date, Time 19.09.2024, 14:00 - 17:00

Training AI models with image pairs has been studied for a long time and proven very useful. In this talk, I will first revisit popular practices of using data pairs in various computer vision tasks: from face recognition, person re-identification, to contrastive learning in foundation models. I will then discuss human preference data: between a pair of images, people may generally prefer one over the other. This type of data pair can be used to align diffusion models with human preference, so that diffusion models are more likely to generate images that people like. I will describe how we address this problem by aligning human preference at different denoising steps. This method effectively improves stable diffusion (SD) and SDXL models while accelerating the fine-tuning process by 10 times compared with existing methods.


Speaker Jinyun Tong (King's College London)
Topic Application of Empirical Game-Theoretic Analysis in Systemic Risk
Date, Time 10.06.2024, 12:00 - 13:00

Empirical Game-Theoretic Analysis (EGTA) is an approach that induces empirical games using simulation results augmented by expert modelling, allowing game theory to be employed in complicated scenarios. In this talk, I will first introduce the application of EGTA in systemic risk within financial networks, especially interbank rescues. Then, I will introduce the basic methodology of EGTA and justify our choice of the equilibrium solver applied in our research, the α-rank algorithm, based on the discussion of its theoretical support, the stochastic evolutionary process.


Speaker Georgios Piliouras (Google DeepMind, Singapore University of Technology and Design)
Topic Chaos and Learning in Games
Date, Time 05.06.2024, 12:00 - 13:00

Multi-agent learning in games is a fundamentally challenging domain that is typically studied in a two-step process. First, we prove convergence to a particular class of game-theoretic equilibria, such as Nash equilibria, correlated equilibria, or variations thereof, and then we use the properties of these equilibria to understand system performance. In this talk, we will examine rich multi-agent learning scenarios where such approaches do not work and whose behaviour is formally chaotic, but for which a precise understanding can nevertheless be established.


Speaker Kousha Etessami (University of Edinburgh)
Topic The complexity of computing a Tarski fixed point of a monotone function, with applications to games and equilibria
Date, Time 24.05.2024, 13:00 - 14:00

The task of computing a fixed point of a monotone function arises in a large variety of applications. In this talk we shall study the computational complexity of computing a (any) fixed point of a given discrete function, f:[N]^d --> [N]^d, mapping the finite d-dimensional Euclidean grid lattice with sides of length N=2^n to itself, such that the function f is monotone with respect to the standard coordinate-wise
partial order on vectors in [N]^d. By Tarski's Theorem, such a monotone function always has a fixed point, and indeed has a non-empty lattice of fixed points.

In the "black box" model, the monotone function is assumed to be given by an oracle that we can query at any point in the domain [N]^d, and the aim is to find a (any) fixed point with a minimum number of queries. In the "white box" model, the function is given succinctly by a Boolean circuit with d*n input gates and d*n output gates, and we call the total search problem of either computing a (any) fixed point for such a succinctly presented function, or else computing a pair of points that witness its non-monotonicity, the TARSKI problem.

It turns out that the TARSKI problem subsumes a variety of important computational problems, including prominent equilibrium computation problems in game theory and economics whose complexity status is not yet fully understood. These include computing/approximating the
value of Condon's or Shapley's stochastic games, and computing a pure Nash equilibrium for (succinctly presented) super-modular games and games with strategic complementarities.

We show that TARSKI is contained in both the total search complexity classes PLS and PPAD. In the black box model, it was known that finding a
fixed point can be done in (log N)^d queries, and we show that it requires at least (log N)^2 queries already on the 2-dimensional grid, even for randomized algorithms.
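By contrast with these polylogarithmic query bounds, the naive ascending iteration from the bottom of the lattice, sketched below on a toy monotone function, shows why a fixed point must exist but may need on the order of d*N queries:

```python
def tarski_fixed_point(f, d):
    """Ascending (Kleene-style) iteration from the bottom element of the
    grid lattice [1..N]^d: since bottom <= f(bottom), monotonicity makes
    the orbit an increasing chain, which must stabilise at a fixed point."""
    x = (1,) * d                      # bottom element of the lattice
    while True:
        y = f(x)
        if y == x:
            return x
        x = y                         # monotonicity guarantees y >= x

# Toy monotone function on [1..8]^2: step each coordinate up towards 5.
fp = tarski_fixed_point(lambda v: tuple(min(c + 1, 5) for c in v), 2)
print(fp)  # → (5, 5)
```

The algorithms discussed in the talk exploit the lattice structure far more aggressively than this chain-following sketch, which is what brings the black-box query count down to (log N)^d.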

We conclude by discussing the current complexity status of the TARSKI problem, based on some more recent results, both in the white-box model
and black-box oracle model of computation.

(This talk describes joint work with C. Papadimitriou, A. Rubinstein, and
M. Yannakakis, that appeared in ITCS'2020.)


Speaker Dylan Cope (KCL)
Topic Learning Translations: Emergent Communication Pretraining for Cooperative Language Acquisition
Date, Time 13.05.2024, 12:00 - 13:00

In the field of Emergent Communication (EC), agents are trained to communicate with one another in order to cooperate on a shared task. The typical outcome of this approach is that the learned communicative conventions are highly specialised to the training community of agents, and thereby they are brittle to changes in the composition of the community. This observation led to research into Zero-Shot Coordination (ZSC) for learning communication strategies that are robust to agents not encountered during training. However, ZSC typically assumes that no prior information is available about the agents that will be encountered in the zero-shot setting. In many cases, this presents an unnecessarily hard problem and rules out communication via pre-established conventions. For instance, humans rely heavily on the use of natural language to cooperate with newly encountered people.
In this work, we proposed a novel class of challenges called Cooperative Language Acquisition Problems (CLAPs), in which the ZSC assumptions are relaxed by allowing a ‘joiner’ agent to learn from a dataset of interactions between agents in a target community. We introduced and compared two methods for solving CLAPs: Behaviour Cloning (BC), and Emergent Communication pretraining and Translation Learning (ECTL), in which an agent is trained in self-play with EC and then learns to translate between an emergent protocol and the target community's protocol.


Speaker Stefano Albrecht (Autonomous Agents Research Group - University of Edinburgh)
Topic From Deep Reinforcement Learning to LLM-based Agents: Perspectives on Current Research
Date, Time 30.04.2024, 11:00 - 12:00

Since the recent successes of large language models (LLMs), we are beginning to see a shift of attention from deep reinforcement learning to LLM-based agents. While deep RL policies are typically learned from scratch to maximise some defined return objective, LLM-agents use an existing LLM at their core and focus on clever prompt engineering and downstream specialisation of the LLM via supervised and reinforcement learning techniques. In this talk, I will first provide a broad overview of my group’s research in deep RL, which focuses among other topics on developing sample-efficient and robust RL algorithms for both single- and multi-agent control tasks, including industry applications in autonomous driving and multi-robot warehouses. I will then present our recent research into LLM-agents, where we propose an approach for household robotics that takes into account user preferences to achieve more robust and effective planning. I will conclude with some personal observations about the state of LLM-agent research: (a) many papers in this field follow essentially the same recipe by focussing on prompt engineering and downstream specialisation; (b) this recipe makes their scientific claims brittle, as they depend crucially on the specific LLM engine; and (c) LLMs are not natively designed to maximise objectives for optimal control and decision making. Based on these observations, I believe some fruitful research avenues can be identified.


Speaker Alexander Skopalik (University of Twente)
Topic On Two-Stage Facility Location Games
Date, Time 29.04.2024, 12:00 - 13:00

In this talk, we discuss several variants of two-stage facility location games with two types of strategic agents: facilities and clients. In the first stage, each facility agent chooses a location at which to offer services. In the second stage, client agents choose which facility to patronise, where their utility depends not only on the facilities' locations but on the choices of other clients as well, e.g. due to congestion effects.

We study questions such as the existence, uniqueness, computational complexity, and efficiency of equilibria in these games. We give an overview and compare three natural models from three recent works, which differ in the behaviour of the client agents.

This is based on joint work with Simon Krogmann, Louise Molitor, Pascal Lenzner, Marc Uetz, and Marnix Vos (IJCAI 2021, AAAI 2023, IJCAI 2024)


Speaker Davide Ferrari (King's College London - Population Health and Environmental Sciences)
Topic Genetic Programming and Symbolic Regression for Human-in-the-loop Machine Learning in high-stakes domains
Date, Time 15.04.2024, 12:00 - 13:00

Recent advancements in large, one-size-fits-all deep learning models offer promise, but they cannot replace the need for tailored solutions in high-stakes domains. This presentation advocates for a more nuanced approach: human-in-the-loop methodologies, in which clinical expertise is integrated from the design stage. We propose using appropriate algorithms that go beyond simply defining the prediction task, allowing for continuous human input throughout model development. This approach hinges on a novel implementation of Multi-objective Symbolic Regression, an evolutionary machine learning technique. It empowers users to intricately guide how the model evolves, fostering highly customized solutions. We will showcase real-world healthcare applications to demonstrate this approach's effectiveness.


Speaker Evangelos Pournaras (School of Computing, University of Leeds, UK)
Topic Collective privacy recovery: Data-sharing coordination via decentralized artificial intelligence
Date, Time 18.03.2024, 12:00 - 13:00

Collective privacy loss becomes a colossal problem, an emergency for personal freedoms and democracy. But are we prepared to handle personal data as a scarce resource and collectively share data under the doctrine: as little as possible, as much as necessary? We hypothesize a significant privacy recovery if a population of individuals, the data collective, coordinates to share minimum data for running online services with the required quality. Here we show how to automate and scale up complex collective arrangements for privacy recovery using decentralized artificial intelligence. For this, we compare, for the first time, attitudinal, intrinsic, rewarded, and coordinated data sharing in a rigorous living-lab experiment of high realism involving real data disclosures. Using causal inference and cluster analysis, we differentiate criteria predicting privacy and five key data-sharing behaviors. Strikingly, data-sharing coordination proves to be a win–win for all: remarkable privacy recovery for people, with evident cost reductions for service providers.

Link: https://doi.org/10.1093/pnasnexus/pgae029


Speaker Belal Asad (University of Southampton)
Topic Charting the Course Through IoT Security: Addressing DDoS Attacks with the Aid of AI
Date, Time 04.03.2024, 12:00 - 13:00

In this presentation, we shall explore the intricate landscape of Internet of Things (IoT) security, focusing particularly on the increase and consequences of Distributed Denial of Service (DDoS) attacks within IoT infrastructures. Our objective is to elucidate the complex challenges these threats pose to IoT ecosystems and highlight the critical role artificial intelligence (AI) plays in detecting, mitigating, and preventing such security risks.


Speaker Drew Springham (King's College London - UKRI Safe and Trusted AI CDT)
Topic Fair social choice with incomplete information
Date, Time 19.02.2024, 12:00 - 13:00

Multiwinner elections are a form of election in which the outcome consists of multiple candidates rather than a single one. Such elections are relevant when electing a committee or a parliament. We study multiwinner voting in the scenario where we do not have full information about all voters’ preferences. This could be useful, for example, when eliciting the preferences of a whole electorate is infeasible.

We define the notion of approximate representation for forms of justified representation and provide bounds on the sample size of a population required to provide approximate representation.

Finally, we simulate such a sampling process on a real-world election instance and provide evidence for the practicality of such a process.


Speaker Richard Willis (King's College London - UKRI Safe and Trusted AI CDT)
Topic Aligning incentives using reward transfers
Date, Time 05.02.2024, 12:00 - 13:00

Multi-agent cooperation is particularly challenging in mixed-motive situations, where it does not pay to be nice to others. Consequently, self-interested agents often avoid collective behaviour, resulting in suboptimal outcomes for the group. My work introduces a metric to quantify the disparity between what is rational for individual agents and what is rational for the collective, by measuring the maximum amount of self-interest that individuals can maintain while engaging in prosocial behaviours. This is achieved by committing to transfer a proportion of future rewards to co-players, which helps to align the individual and group incentives. I illustrate this method on several novel games representing social dilemmas with arbitrary numbers of players.
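As a toy illustration of the reward-transfer idea (a one-shot prisoner's dilemma with standard textbook payoffs; the work itself concerns transfers of future rewards in richer settings):

```python
import numpy as np

# Row player's payoffs in a prisoner's dilemma (action 0 = cooperate,
# action 1 = defect); the game is symmetric, so column payoffs are R.T.
R = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def transferred_payoffs(R, p):
    """Each player commits to transferring a proportion p of their reward
    to the co-player, so the row player's effective payoff is a mix of
    their own payoff and the opponent's."""
    return (1 - p) * R + p * R.T

# Without transfers defection dominates; with p = 0.5 (full sharing),
# cooperation becomes the dominant action in this matrix.
print(transferred_payoffs(R, 0.5))
```

The smallest transfer proportion p at which prosocial play becomes individually rational is exactly the kind of quantity the metric above measures: the smaller it is, the more self-interest the players can retain.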


Speaker Martin Hoefer (Goethe-Universität Frankfurt am Main)
Topic Approximation Algorithms for Nash Social Welfare
Date, Time 13.12.2018, 16:00 - 17:00

When allocating a set of items to agents, an interesting objective is
Nash social welfare (NSW). Here one aims to maximize the product of the
individual valuations. NSW can be seen as a trade-off between
maximization of the sum of valuations (social welfare) and the minimum
valuation (egalitarian welfare). Moreover, NSW satisfies an invariance
to scaling of each valuation function by a constant factor.
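A tiny sketch of the objective and its scale invariance (the valuations are
illustrative):

```python
import math

def nash_social_welfare(valuations):
    """Nash social welfare of an allocation: the product of the agents'
    valuations for their allocated bundles."""
    return math.prod(valuations)

vals = [4.0, 1.0, 16.0]
print(nash_social_welfare(vals))                   # → 64.0
# Rescaling one agent's valuation function by a constant factor rescales
# the NSW of *every* allocation by the same factor, so the NSW-optimal
# allocation is unchanged.
print(nash_social_welfare([4.0 * 10, 1.0, 16.0]))  # → 640.0
```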

If goods are divisible, NSW-optimal allocations can often be efficiently
computed using machinery from convex programming. In contrast,
surprisingly little is known about algorithms to compute near-optimal
allocations for indivisible goods.

In this talk, I will review the state of the art and present our recent
advances for indivisible goods and agents with subclasses of submodular
valuations. Our main result is a 1.445-approximation algorithm. In
addition, the computed allocations satisfy approximate envy-freeness
conditions.

(based on joint work with Bhaskar Ray Chaudhuri, Yun Kuen Cheung, Naveen
Garg, Jugal Garg, Kurt Mehlhorn)
=============================
Room directions: From main Strand reception, go straight ahead then exit through the door on your left. Cross the courtyard into the building in front of you (North Wing), carry on straight along the corridor, B4 is on your left.


Speaker Victor Naroditskiy (OneMarketData)
Topic Market Manipulation Detection: Research Agenda
Date, Time 29.10.2018, 15:00 - 16:00

Market manipulation refers to bidding strategies aimed at influencing the price in financial markets like NYSE, NASDAQ, and LSE. Recent regulation, including MiFID II and the Dodd-Frank Act, made market manipulation illegal and has resulted in billions in fines for trading firms and brokers. Detecting market manipulation has become a priority for regulators, markets, and market participants alike.
Market manipulation detection is an appealing research direction for three reasons. Firstly, financial markets are a rare example of real-world mechanisms that are simple from the modeling point of view: they are continuous double auctions. Secondly, there is a lot of data that can be made available. "Market data" is a collection of all bids submitted to the auctions and of all the trades that resulted. The data is structured and simple and there are billions of data points available each day. Thirdly, market manipulation is real: that is, any findings that help characterize and detect market manipulation may make a difference in the real world.
I will outline research directions and show examples of data that is available.


Speaker Carmine Ventre (University of Essex)
Topic Demystifying Obvious Strategyproofness
Date, Time 22.10.2018, 15:00 - 16:00

The presentation will focus on some of the recent work of the speaker,
which aims at building the theoretical foundations for a more applied
use of Algorithmic Mechanism Design (AMD). Roughly speaking, AMD
requires the design of algorithms for which, alongside speed and
quality of execution, we care about incentive compatibility, i.e., we
want the designer's objective (e.g., optimization) to be aligned with
the incentives of the people interested in the outcome.

Catering to the incentives of people with limited rationality is a
challenging research direction that requires novel paradigms to design
mechanisms and approximation algorithms. Obviously strategyproof (OSP)
mechanisms have recently emerged as the concept of interest to this
research agenda. We will discuss the rationale behind this concept,
with a particular emphasis on the design and the approximation
guarantee of OSP mechanisms.

The talk is based on joint work with Diodato Ferraioli, Maria
Kyropoulou, Adrian Meier and Paolo Penna.


Speaker Krzysztof R. Apt (CWI Amsterdam and University of Warsaw)
Topic Self-Stabilization Through the Lens of Game Theory
Date, Time 09.03.2018, 10:00 - 11:00

In 1974 E.W. Dijkstra introduced the seminal concept of
self-stabilization that turned out to be one of the main approaches to
fault-tolerant computing. We show here how his three solutions can be
formalized and reasoned about using the concepts of game theory. We
also determine the precise number of steps needed to reach
self-stabilization in his first solution. This is joint work with
Ehsan Shoja.
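Dijkstra's first solution, the K-state token ring, can be sketched as below. This is my own rendering of the classic algorithm under a randomly scheduling central daemon, not the game-theoretic formalization presented in the talk; Dijkstra's guarantee is that for K at least the number of machines, any initial configuration converges to exactly one privileged machine.

```python
import random

def stabilize(states, K):
    """Run Dijkstra's K-state token ring until exactly one machine is
    privileged; return the number of moves taken.

    Machine 0 is privileged when its state equals that of its left
    neighbour (the last machine); every other machine is privileged when
    its state differs from its left neighbour's. Each step, a central
    daemon picks one privileged machine, which makes its move.
    """
    n = len(states)
    steps = 0
    while True:
        privileged = [i for i in range(n)
                      if (i == 0 and states[0] == states[n - 1])
                      or (i > 0 and states[i] != states[i - 1])]
        if len(privileged) == 1:
            return steps  # legitimate configuration reached
        i = random.choice(privileged)  # daemon's (here random) choice
        if i == 0:
            states[0] = (states[0] + 1) % K  # machine 0 increments mod K
        else:
            states[i] = states[i - 1]        # others copy their neighbour
        steps += 1
```

The talk's result on the precise number of steps concerns worst-case configurations; this sketch only simulates one daemon schedule.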


Speaker Alison R. Panisson (PUCRS)
Topic Using Argumentation Schemes and Enthymemes to Improve Communication in Multi-Agent Systems
Date, Time 14.12.2017, 11:00 - 12:00

One of the most important aspects of multi-agent systems is communication. Among the communication techniques in multi-agent systems, argumentation-based approaches have received special interest from the community, because they provide a rich form of communication by means of agents exchanging arguments. However, this additional information exchanged by agents can place an extra burden on the communication infrastructure, restricting the use of argumentation techniques. In this work we propose an argumentation framework whereby agents are able to exchange fewer and shorter messages when engaging in dialogues, by omitting information that is common knowledge (e.g., information about a shared multi-agent organisation). In particular, we use the idea of enthymemes, as well as references to shared argumentation schemes (i.e., the reasoning patterns from which such arguments are instantiated) and common organisational knowledge, to guide argument reconstruction.


Speaker Prof Avi Rosenfeld (Jerusalem College of Technology)
Topic Taming the Curse of Dimensionality in Human-Agent and Medical Systems
Date, Time 05.12.2017, 10:00 - 11:00

The “curse of dimensionality” is a term coined by Bellman in 1957 to describe the exponential increase in the complexity of solving certain problems as additional data dimensions are considered. While this “curse” has been noted in several domains, in this talk I will focus on understanding and overcoming the curse of dimensionality in human-agent and medical domains. I will focus on feature selection approaches as a way both to tame this curse and to build effective applications. I will give examples from human-agent systems, including my work in assisted driving (adaptive cruise control) and negotiation. When studying medical datasets, I found that current feature selection approaches at times overlook attributes where only a small subset of attribute values contains a strong indication for one of the target values. To overcome this limitation, I developed MIAT, an algorithm that defines Minority Interesting Attribute Thresholds. I demonstrate that at times datasets should be enriched with attributes, such as those created by MIAT, as well as by other feature creation algorithms. I found that this approach is not only helpful within medical datasets, but can be generalized across 28 canonical datasets.
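The abstract does not define MIAT, so the following is only a hypothetical illustration of the general idea it describes: flagging minority attribute values that strongly indicate one target class and turning them into indicator features. The function name, thresholds, and parameters are all invented for this sketch and are not the speaker's algorithm.

```python
def minority_indicator_features(rows, labels, target,
                                min_precision=0.9, max_support=0.2):
    """Find (attribute, value) pairs that are rare (support <= max_support)
    yet strongly predictive of `target` (precision >= min_precision), as
    candidate indicator features. rows: list of dicts, labels: list."""
    if not rows:
        return []
    n = len(rows)
    features = []
    for attr in rows[0].keys():
        for value in {r[attr] for r in rows}:
            idx = [i for i, r in enumerate(rows) if r[attr] == value]
            if len(idx) / n > max_support:
                continue  # majority value: standard feature selection sees it
            precision = sum(labels[i] == target for i in idx) / len(idx)
            if precision >= min_precision:
                features.append((attr, value))
    return features
```

The point of the sketch is the failure mode the talk highlights: an attribute whose overall correlation with the target is weak can still contain a rare value that is almost perfectly predictive.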


Speaker Dr Hamish Carr (School of Computing, Leeds University)
Topic Fiber Surfaces, Jacobi Sets and Reeb Spaces
Date, Time 21.11.2017, 14:00 - 15:00

Classic techniques from scientific visualisation, such as isosurfaces and direct volume rendering, are used for scalar fields (i.e. functions in space with univariate output). For bivariate and higher functions, rather fewer techniques exist, except in the special case of vector fields. We have recently defined the analogue of isosurfaces for bivariate fields, which we call fiber surfaces. These generalise isosurfaces by taking the inverse image of a curve in the function's range - i.e. a loop on a (continuous) scatterplot. This provides a rigorous and easy-to-compute geometric surface that can be used to capture regions of interest in the data interactively. Since fiber surfaces are based on marching cubes, many techniques can be carried over, such as acceleration structures, but detailed geometric interpolation is considerably more difficult. We therefore show how to extract fiber surfaces for marching cubes cases and for tetrahedral meshes, and illustrate how ray-tracing can also capture fiber surfaces for trilinear interpolants. Moreover, as the data scales, tools such as topological analysis become significant in visualisation. We therefore also show how to generalise contour tree/Reeb graph analysis to bivariate functions, with a correct and efficient algorithm for extracting the Reeb space. Finally, we show how the Reeb space can be used to support interactive scatterplot peeling for efficient visualisation of smaller features that would otherwise be hidden by occlusion.


Speaker Dr. Nadin Kokciyan (King's College London)
Topic Context-Based Reasoning on Privacy in Internet of Things
Date, Time 17.11.2017, 14:00 - 15:00

More and more, the devices around us are being connected to each other in the realm of the Internet of Things (IoT). Their communication, and especially their collaboration, promises useful services for end users. However, the same communication channels raise important privacy concerns. It is not clear which information will be shared with whom, for which purposes, and under which conditions. Existing approaches to privacy advocate policies to regulate it. However, the scale and heterogeneity of IoT entities make it infeasible to maintain policies between each and every pair of entities in the system. Instead, it is best if each entity can autonomously reason about privacy using norms and context. Accordingly, this paper proposes an approach in which each entity finds out which contexts it is in based on the information it gathers from other entities in the system. The proposed approach uses argumentation to enable IoT entities to reason about their context and to decide whether to reveal information based on it. We demonstrate the applicability of the approach on an IoT scenario.


Speaker Santhilata Kuppili Venkata (King’s College London)
Topic A Community Cache Framework for Distributed Data Centre
Date, Time 13.10.2017, 14:00 - 15:00

Technological innovations and improvements are enabling groups of researchers across the globe to come together and collaborate on research projects. They share data and findings, and large data transfers are necessary to carry out these functions.

One of the challenges in dealing with distributed large data is transferring massive amounts of data from multiple data centres to users. Unless data transfers are planned, organized and regulated carefully, they can become a potential bottleneck and may lead to longer response times and costly maintenance work. We need smart solutions that handle large amounts of data and provide responses quickly. In this thesis, we propose middleware community caching as a solution for handling large data transfers with the help of data access patterns and links among the data.

This research makes three key contributions. The first contribution is sub-query fragmentation, which fragments query execution plans into sub-queries. These sub-queries can be modelled as portable and reusable query objects for use and transfer across distributed locations. The sub-queries form regions of interest based on the associations among them. For cache maintenance, we observed that the association measure performed better when combined with other traditional heuristics such as frequency and time. We have also proposed a cache indexing structure for quick look-up of the query objects and easy searching across caches. The second contribution is an optimized data placement scheme for the distributed cache system. We have defined an approach to capturing data associations based on patterns of user interest, which allows effective placement of data and thus cooperative sharing of data across cache units. The third contribution is the development of an agent-based model for evaluation. The model evaluates the effectiveness of the proposed cache operations, such as store, search, retrieve, eviction, and prediction, under a variety of input conditions.

This research provides useful support for contemporary technologies such as edge caching (edge computing), which aims to provide users with the required data services with minimal processing.
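The cache-maintenance idea above, combining an association measure with traditional frequency and time heuristics, can be illustrated with a toy eviction policy. The class name, the score formula, and the weighting are my own illustration of that combination, not the thesis's actual scheme:

```python
import time

class CommunityCacheUnit:
    """Toy cache that evicts by a combined association/frequency/recency score."""

    def __init__(self, capacity, assoc_weight=0.5):
        self.capacity = capacity
        self.assoc_weight = assoc_weight
        self.entries = {}  # key -> (value, hit_count, last_access, association)

    def put(self, key, value, association=0.0):
        """Insert an entry; `association` scores its links to cached data."""
        if len(self.entries) >= self.capacity and key not in self.entries:
            self._evict()
        self.entries[key] = (value, 1, time.monotonic(), association)

    def get(self, key):
        """Return the cached value (updating frequency/recency), or None."""
        if key not in self.entries:
            return None
        value, hits, _, assoc = self.entries[key]
        self.entries[key] = (value, hits + 1, time.monotonic(), assoc)
        return value

    def _evict(self):
        """Evict the entry with the lowest combined score."""
        now = time.monotonic()
        def score(item):
            _, (_, hits, last, assoc) = item
            recency = 1.0 / (1.0 + now - last)
            return (self.assoc_weight * assoc
                    + (1 - self.assoc_weight) * hits * recency)
        victim = min(self.entries.items(), key=score)[0]
        del self.entries[victim]
```

The design point matches the abstract's observation: association alone is not enough, but blended with frequency and recency it keeps strongly linked query objects resident.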


Speaker Emmanuel Hadoux (University College London)
Topic Computational Persuasion: What is the best next argument?
Date, Time 04.05.2017, 14:00 - 15:00
Location -1.04

Unlike traditional abstract argumentation, which works on sets of arguments without any notion of temporality, strategic argumentation works on dialogues. In this context, different agents interact by exchanging arguments in turn. In computational persuasion, we take the point of view of one of the agents, which is trying to get the other agent(s) to believe or do something. This is common in, for instance, a political debate or a behaviour-change situation (a doctor and a patient). During this talk, we will see how to combine argumentation theory and decision theory, and how to compute a policy: the best argument to play in any state of the dialogue. The goal is to maximize the probability of persuading the other agent (the opponent/user/persuadee) under different constraints (knowledge, goals, computational limitations, etc.).


Speaker Elizabeth Black and David Marzagão (King's College London)
Topic Planning for Persuasion and Multi-Agent Flag Coordination Games
Date, Time 03.05.2017, 12:00 - 13:00

There will be two presentations, each lasting ~20 minutes.

Planning for persuasion


Elizabeth Black, Amanda Coles, Christopher Hampson

Abstract:
We aim to find a winning strategy that determines the arguments a proponent should assert during a dialogue such that it will successfully persuade its opponent of some goal arguments, regardless of the strategy employed by the opponent. By restricting the strategies we consider for the proponent to what we call simple strategies and by modelling this as a planning problem, we are able to use an automated planner to generate optimal simple strategies for realistically sized problems. These strategies guarantee with a certain probability (determined by the proponent’s uncertain model of the arguments available to the opponent) that the proponent will be successful no matter which arguments the opponent chooses to assert. Our model accounts for the possibility that the proponent, when asserting arguments, may give away knowledge that the opponent can subsequently use against it; we examine how this affects both time taken to find an optimal simple strategy and its probability of guaranteed success.

Multi-Agent Flag Coordination Games


David Kohan Marzagão, Nicolás Rivera, Colin Cooper, Peter McBurney, Kathleen Steinhöfel

Abstract:
Many multi-agent coordination problems can be understood as autonomous local choices between a finite set of options, with each local choice undertaken simultaneously without explicit coordination between decision-makers, and with a shared goal of achieving a desired global state or states. Examples of such problems include classic consensus problems between nodes in a distributed computer network and the adoption of competing technology standards. We model such problems as a multi-round game between agents having flags of different colours to represent the finite choice options, and all agents seeking to achieve global patterns of colours through a succession of local colour-selection choices.
We generalise and formalise the problem, proving results for the probabilities of achievement of common desired global states when these games are undertaken on bipartite graphs, extending known results for non-bipartite graphs. We also calculate probabilities for the game entering infinite cycles of non-convergence. In addition, we present a game-theoretic approach to the problem that has a mixed-strategy Nash equilibrium where two players can simultaneously flip the colour of one of the opponent's nodes in the bipartite graph before or during a flag-coordination game.
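The local colour-selection dynamics above can be sketched as a small simulation. The update rule here, in which each node synchronously copies the colour of a uniformly random neighbour (a standard voter-model dynamic), is my own simplification rather than necessarily the paper's exact protocol; note that on bipartite graphs such synchronous dynamics can indeed cycle without converging, which is why the round count is capped.

```python
import random

def flag_game(neighbours, colours, max_rounds=10_000, rng=None):
    """Simulate a synchronous flag-coordination game on a graph.

    neighbours[v] lists the neighbours of node v; colours[v] is v's
    current flag colour. Each round, every node simultaneously adopts
    the colour of a uniformly random neighbour. Returns the consensus
    colour if one is reached, or None if the cap is hit (e.g. a
    non-convergent cycle).
    """
    rng = rng or random.Random()
    for _ in range(max_rounds):
        if len(set(colours)) == 1:
            return colours[0]  # global state reached: all flags agree
        colours = [colours[rng.choice(neighbours[v])]
                   for v in range(len(colours))]
    return None
```

Running many seeded simulations of this kind is one way to estimate empirically the convergence probabilities that the paper derives analytically.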


Speaker Dr Tim Miller (University of Melbourne)
Topic Social Planning -- Reasoning with and about others
Date, Time 06.02.2017, 11:00 - 12:00

Successful human teams operate by the individuals in those teams modelling the relevant perspective of their team mates, including what their team members can do, what they know or believe, and what their intentions are; in short, they have a Theory of Mind about their team members. To do this, they may consider how their team members are about to act, how this affects the outcomes of their own actions, and what information needs to be shared. We call this 'social planning', reflecting that such planning itself is a social activity that requires thinking about and communicating with others.

Motivated by the problem of designing and implementing artificial agents that are able to work collaboratively as part of a human-agent team, we hypothesise that artificial agents will be more human-intuitive, transparent, and trusted if they are able to adopt social planning. In recent work, we have leveraged state-of-the-art planning techniques to realise social planning in several application areas, including both collaborative and adversarial settings, mostly related to projects from the Australian Defence Force. In this talk, I will discuss some of these techniques -- in particular, multi-agent epistemic planning -- and some of these applications.


You can also view upcoming seminars.