|Speaker||Martin Hoefer (Goethe-Universität Frankfurt am Main)|
|Topic||Approximation Algorithms for Nash Social Welfare|
|Date, Time||13.12.2018, 16:00 - 17:00|
|Location||B4 (North Wing), Strand (Main Campus)|
When allocating a set of items to agents, an interesting objective is
Nash social welfare (NSW). Here one aims to maximize the product of the
individual valuations. NSW can be seen as a trade-off between
maximization of the sum of valuations (social welfare) and the minimum
valuation (egalitarian welfare). Moreover, NSW is invariant to
scaling of each valuation function by a constant factor.
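As a small illustration (not part of the talk), the objective and its scale invariance can be sketched in a few lines of Python:

```python
from math import prod

def nsw(valuations):
    # Nash social welfare: the product of the agents' valuations
    # for the bundles they receive.
    return prod(valuations)

balanced = [4, 4, 4]   # utilitarian welfare 12, egalitarian welfare 4
skewed   = [9, 2, 1]   # utilitarian welfare 12, egalitarian welfare 1

# NSW favours the balanced allocation even though the sums are equal.
assert nsw(balanced) > nsw(skewed)

# Scale invariance: rescaling one agent's valuation function by a
# constant factor rescales every allocation's NSW by the same factor,
# so the comparison between allocations is unchanged.
c = 100
assert nsw([c * balanced[0]] + balanced[1:]) > nsw([c * skewed[0]] + skewed[1:])
```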
If goods are divisible, NSW-optimal allocations can often be efficiently
computed using machinery from convex programming. In contrast,
surprisingly little is known about algorithms to compute near-optimal
allocations for indivisible goods.
In this talk, I will review the state of the art and present our recent
advances for indivisible goods and agents with subclasses of submodular
valuations. Our main result is a 1.445-approximation algorithm. In
addition, the computed allocations satisfy approximate envy-freeness.
(Based on joint work with Bhaskar Ray Chaudhury, Yun Kuen Cheung, Naveen
Garg, Jugal Garg, and Kurt Mehlhorn.)
Room directions: From main Strand reception, go straight ahead then exit through the door on your left. Cross the courtyard into the building in front of you (North Wing), carry on straight along the corridor, B4 is on your left.
|Speaker||Victor Naroditskiy (OneMarketData)|
|Topic||Market Manipulation Detection: Research Agenda|
|Date, Time||29.10.2018, 15:00 - 16:30|
|Location||Bush House (SE) 2.09, south east entrance|
Market manipulation refers to bidding strategies aimed at influencing the price in financial markets like NYSE, NASDAQ, and LSE. Recent regulation, including MiFID II and the Dodd-Frank Act, has made market manipulation illegal and resulted in billions in fines for trading firms and brokers. Detecting market manipulation has become a priority for regulators, markets, and market participants alike.
Market manipulation detection is an appealing research direction for three reasons. Firstly, financial markets are a rare example of real-world mechanisms that are simple from the modeling point of view: they are continuous double auctions. Secondly, there is a lot of data that can be made available. "Market data" is a collection of all bids submitted to the auctions and of all the trades that resulted. The data is structured and simple and there are billions of data points available each day. Thirdly, market manipulation is real: that is, any findings that help characterize and detect market manipulation may make a difference in the real world.
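As an illustration of that modeling simplicity (a minimal sketch, not a production matching engine), a continuous double auction over one-unit limit orders fits in a few lines:

```python
import heapq

class CDA:
    """Minimal continuous double auction over one-unit limit orders: an
    incoming order trades at the best opposite quote if prices cross,
    otherwise it rests in the order book."""
    def __init__(self):
        self.bids = []  # resting buys as a max-heap of (-price, trader)
        self.asks = []  # resting sells as a min-heap of (price, trader)

    def submit(self, side, price, trader):
        if side == "buy":
            if self.asks and self.asks[0][0] <= price:
                ask_price, seller = heapq.heappop(self.asks)
                return [(trader, seller, ask_price)]   # trade executed
            heapq.heappush(self.bids, (-price, trader))
        else:
            if self.bids and -self.bids[0][0] >= price:
                neg_bid, buyer = heapq.heappop(self.bids)
                return [(buyer, trader, -neg_bid)]
            heapq.heappush(self.asks, (price, trader))
        return []  # no cross: the order rests

book = CDA()
book.submit("sell", 101, "s1")          # rests in the book
book.submit("buy", 99, "b1")            # does not cross, rests
trades = book.submit("buy", 102, "b2")  # crosses the resting ask at 101
```

Market data in the sense above is then exactly the submitted orders plus the trade records this matching step produces.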
I will outline research directions and show examples of data that is available.
|Speaker||Carmine Ventre (University of Essex)|
|Topic||Demystifying Obvious Strategyproofness|
|Date, Time||22.10.2018, 15:00 - 16:00|
|Location||Bush House (SE) 1.05 (South-East entrance)|
The presentation will focus on some of the recent work of the speaker,
which aims at building the theoretical foundations for a more applied
use of Algorithmic Mechanism Design (AMD). Roughly speaking, AMD
requires the design of algorithms where, together with speed and
quality of execution, we care about incentive compatibility, i.e., we
want the designer's objective (e.g., optimizing) to be aligned with
the incentives of the people interested in the outcome.
Catering to the incentives of people with limited rationality is a
challenging research direction that requires novel paradigms to design
mechanisms and approximation algorithms. Obviously strategyproof (OSP)
mechanisms have recently emerged as the concept of interest to this
research agenda. We will discuss the rationale behind this concept,
with a particular emphasis on the design and the approximation
guarantee of OSP mechanisms.
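A standard illustration of the concept (my own sketch, not taken from the talk): the ascending-price clock auction is the textbook obviously strategyproof mechanism, because at every price an agent comparing the worst case of staying in with the best case of dropping out finds truthful behaviour obviously better.

```python
def ascending_clock_auction(values, increment=1):
    """Ascending-price (English clock) auction with truthful bidders,
    who stay in exactly while the clock price is at most their value."""
    price = 0
    active = set(range(len(values)))
    while len(active) > 1:
        price += increment
        remaining = {i for i in active if values[i] >= price}
        if not remaining:            # simultaneous drop: tie-break
            remaining = {min(active)}
        active = remaining
    winner = active.pop()
    return winner, price

# Three bidders with private values 10, 7, 3: the clock rises until
# only the highest-value bidder remains, who pays roughly the
# second-highest value.
winner, price = ascending_clock_auction([10, 7, 3])
```

By contrast, the sealed-bid second-price auction implements the same outcome and is strategyproof, but seeing that truth-telling dominates requires contingent reasoning over the others' bids, which is exactly what OSP rules out.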
The talk is based on joint work with Diodato Ferraioli, Maria
Kyropoulou, Adrian Meier and Paolo Penna.
|Speaker||Krzysztof R. Apt (CWI Amsterdam and University of Warsaw)|
|Topic||Self-Stabilization Through the Lens of Game Theory|
|Date, Time||09.03.2018, 10:00 - 11:30|
|Location||Bush House North East Wing, BH(NE)1.04|
In 1974 E.W. Dijkstra introduced the seminal concept of
self-stabilization that turned out to be one of the main approaches to
fault-tolerant computing. We show here how his three solutions can be
formalized and reasoned about using the concepts of game theory. We
also determine the precise number of steps needed to reach
self-stabilization in his first solution. This is a joint work with
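Dijkstra's first solution can be sketched as follows (a minimal simulation under a randomized scheduler; the game-theoretic formalization is the subject of the talk):

```python
import random

def privileged(S, K):
    """Machines holding a privilege in Dijkstra's first (K-state)
    solution on a unidirectional ring: the bottom machine 0 is
    privileged when its state equals its predecessor's (machine n-1);
    any other machine is privileged when its state differs from its
    predecessor's."""
    n = len(S)
    pr = [0] if S[0] == S[-1] else []
    return pr + [i for i in range(1, n) if S[i] != S[i - 1]]

def fire(S, K, i):
    # One move of machine i (which must hold a privilege).
    if i == 0:
        S[0] = (S[0] + 1) % K
    else:
        S[i] = S[i - 1]

def stabilize(S, K, rng):
    """Let a random scheduler fire privileged machines until exactly
    one privilege remains (a legitimate state); returns the step
    count. With K >= n this terminates from any initial state."""
    steps = 0
    while len(privileged(S, K)) > 1:
        fire(S, K, rng.choice(privileged(S, K)))
        steps += 1
    return steps

rng = random.Random(0)
S = [3, 1, 4, 1, 0]           # arbitrary (possibly corrupted) states
stabilize(S, K=5, rng=rng)    # converges to a single privilege
```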
|Speaker||Alison R. Panisson (PUCRS)|
|Topic||Using Argumentation Schemes and Enthymemes to Improve Communication in Multi-Agent Systems|
|Date, Time||14.12.2017, 11:00 - 12:00|
|Location||S0.03, Strand building|
One of the most important aspects of multi-agent systems is communication. Among the communication techniques in multi-agent systems, argumentation-based approaches have received special interest from the community, because they provide a rich form of communication by means of agents exchanging arguments. However, this additional information exchanged by agents can place an extra burden on the communication infrastructure, restricting the usage of argumentation techniques. In this work we propose an argumentation framework whereby agents are able to exchange fewer and shorter messages when engaging in dialogues, by omitting information that is common knowledge (e.g., information about a shared multi-agent organisation). In particular, we use the idea of enthymemes, as well as references to shared argumentation schemes (i.e., the reasoning patterns from which such arguments were instantiated) and common organisational knowledge, to guide argument reconstruction.
|Speaker||Prof Avi Rosenfeld (Jerusalem College of Technology)|
|Topic||Taming the Curse of Dimensionality in Human-Agent and Medical Systems|
|Date, Time||05.12.2017, 10:00 - 11:00|
|Location||S3.30, Strand building|
The “curse of dimensionality” is a term coined by Bellman in 1957 to describe the exponential increase in the complexity of solving certain problems as additional data dimensions are considered. While this “curse” has been noted in several domains, in this talk I will focus on understanding and overcoming the curse of dimensionality in human-agent and medical domains. I will focus on feature selection approaches as a way to both tame this curse and build effective applications. I will give examples from human-agent systems, including my work in assisted driving (adaptive cruise control) and negotiation. When studying medical datasets, I found that current feature selection approaches at times overlook attributes where only a small subset of attribute values contains a strong indication for one of the target values. To overcome this limitation, I developed MIAT, an algorithm that defines Minority Interesting Attribute Thresholds. I demonstrate that at times datasets should be enriched with attributes such as those created by MIAT, as well as by other feature creation algorithms. I found that this approach is not only helpful within medical datasets, but generalizes across 28 canonical datasets.
|Speaker||Dr Hamish Carr (School of Computing, Leeds University)|
|Topic||Fiber Surfaces, Jacobi Sets and Reeb Spaces|
|Date, Time||21.11.2017, 14:00 - 15:00|
|Location||Bush House (S)5.01|
Classic techniques from scientific visualisation, such as isosurfaces and direct volume rendering, are used for scalar fields (i.e. functions in space with univariate output). For bivariate and higher functions, rather fewer techniques exist, except in the special case of vector fields. We have recently defined the analogue of isosurfaces for bivariate fields, which we call fiber surfaces. These generalise isosurfaces by taking the inverse image of a curve in the function's range - i.e. a loop on a (continuous) scatterplot. This provides a rigorous and easy-to-compute geometric surface that can be used to capture regions of interest in the data interactively. Since fiber surfaces are based on marching cubes, many techniques, such as acceleration structures, carry over, but detailed geometric interpolation is considerably more difficult. We therefore show how to extract fiber surfaces for marching cubes cases and for tetrahedral meshes, and illustrate how ray-tracing can also capture fiber surfaces for trilinear interpolants. As the data scales, tools such as topological analysis become significant in visualisation. We therefore also show how to generalise contour tree/Reeb graph analysis to bivariate functions, with a correct and efficient algorithm for extracting the Reeb space. Finally, we show how the Reeb space can be used to support interactive scatterplot peeling for efficient visualisation of smaller features that would otherwise be hidden by occlusion.
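The core reduction can be sketched as follows (a simplified illustration of my own, for a straight line rather than a general curve in the range): turn the bivariate field into a derived scalar field whose zero isosurface is the fiber surface.

```python
import math

def signed_dist_to_line(p, a, b):
    """Signed distance, in the (f, g) range, from the bivariate value
    p to the line through range points a and b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    return ((px - ax) * dy - (py - ay) * dx) / math.hypot(dx, dy)

# Per-vertex bivariate values (f, g) sampled on some spatial mesh:
range_values = [(0.2, 0.9), (0.3, 0.2), (0.7, 0.7)]

# Derived scalar field: signed range-space distance to the chosen line.
field = [signed_dist_to_line(p, (0.0, 1.0), (1.0, 0.0))
         for p in range_values]

# A standard isosurface extractor (e.g. marching cubes at isovalue 0)
# applied to this field yields the fiber surface; a sign change along
# a mesh edge marks a crossing.
crossing = field[0] * field[1] < 0
```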
|Speaker||Dr. Nadin Kokciyan (King's College London)|
|Topic||Context-Based Reasoning on Privacy in Internet of Things|
|Date, Time||17.11.2017, 14:00 - 15:00|
|Location||Bush House (S)5.01|
More and more, devices around us are being connected to each other in the realm of the Internet of Things (IoT). Their communication, and especially their collaboration, promises useful services for end users. However, the same communication channels raise important privacy concerns: it is not clear which information will be shared with whom, for which purposes, and under which conditions. Existing approaches to privacy advocate policies to regulate privacy. However, the scale and heterogeneity of IoT entities make it infeasible to maintain policies between each and every entity in the system. Instead, it is best if each entity can reason about privacy autonomously, using norms and context. Accordingly, this work proposes an approach where each entity finds out which contexts it is in, based on information it gathers from other entities in the system. The proposed approach uses argumentation to enable IoT entities to reason about their context and decide whether to reveal information based on it. We demonstrate the applicability of the approach over an IoT scenario.
|Speaker||Santhilata Kuppili Venkata (King’s College London)|
|Topic||A Community Cache Framework for Distributed Data Centre|
|Date, Time||13.10.2017, 14:00 - 15:00|
|Location||(N)5.02, Bush House|
Technological innovations are enabling groups of researchers across the globe to come together and collaborate on research projects, sharing data and findings. Large data transfers are necessary to carry out multiple functions.
One of the challenges in dealing with distributed large data is transferring massive amounts of data from multiple data centres to users. Unless data transfers are planned, organized and regulated carefully, they can become a potential bottleneck and may lead to longer response times and costly maintenance work. We need smart solutions that handle large amounts of data and provide responses quickly. In this thesis, we propose middleware community caching as a solution for handling large data transfers with the help of data access patterns and links among the data.
This research makes three key contributions. The first contribution is sub-query fragmentation, which splits query execution plans into sub-queries. These sub-queries can be modelled as portable and reusable query objects that can be used and transferred across distributed locations, and they form regions of interest based on the associations among them. For cache maintenance, we observed that the association measure performed better when combined with traditional heuristics such as frequency and time. We have also proposed a cache indexing structure for quick lookup of query objects and easy searching across caches. The second contribution is an optimized data placement scheme for the distributed cache system. We have defined an approach to capturing data associations based on patterns of user interest, which allows effective placement of data and thus cooperative sharing of data across cache units. The third contribution is an agent-based model for evaluation, which assesses the effectiveness of the proposed cache operations (store, search, retrieve, eviction, and prediction) under a variety of input conditions.
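The eviction idea, combining an association measure with frequency and recency, might be sketched as follows (the class name, weights, and scoring formula are hypothetical illustrations, not the thesis's actual design):

```python
import time

class CacheUnit:
    """Sketch of association-aware eviction: each cached query object
    is scored by a weighted mix of an association measure (how strongly
    it links to other cached objects), access frequency, and recency;
    the lowest-scoring entry is evicted when the unit is full."""
    def __init__(self, capacity, w_assoc=0.5, w_freq=0.3, w_recency=0.2):
        self.capacity = capacity
        self.entries = {}   # key -> [result, freq, last_used, assoc]
        self.w = (w_assoc, w_freq, w_recency)

    def _score(self, key, now):
        _, freq, last, assoc = self.entries[key]
        wa, wf, wr = self.w
        return wa * assoc + wf * freq + wr / (1.0 + now - last)

    def put(self, key, result, assoc=0.0):
        now = time.monotonic()
        if key not in self.entries and len(self.entries) >= self.capacity:
            victim = min(self.entries, key=lambda k: self._score(k, now))
            del self.entries[victim]
        self.entries[key] = [result, 1, now, assoc]

    def get(self, key):
        if key not in self.entries:
            return None
        entry = self.entries[key]
        entry[1] += 1                # bump frequency
        entry[2] = time.monotonic()  # refresh recency
        return entry[0]
```

Under these illustrative weights, a frequently reused sub-query result outlives a once-used one when the unit overflows.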
This research provides useful support for contemporary technologies such as edge caching (computing), which aims to provide users with the required data services with minimal processing.
|Speaker||Emmanuel Hadoux (University College London)|
|Topic||Computational Persuasion: What is the best next argument?|
|Date, Time||04.05.2017, 14:00 - 15:00|
|Location||-1.04 (Chesham Building)|
Unlike traditional abstract argumentation, which operates on sets of arguments without any notion of temporality, strategic argumentation works on dialogues. In this context, different agents interact by exchanging arguments in turn. In computational persuasion, we take the point of view of one of the agents, which tries to get the other(s) to believe or do something. This is common in, for instance, a political debate or a behaviour change situation (a doctor and a patient). During this talk, we will see how to combine argumentation theory and decision theory and how to compute a policy, i.e., the best argument to play in any state of the dialogue. The goal is to maximize the probability of persuading the other agent (the opponent/user/persuadee) under different constraints (knowledge, goals, computational limitations, etc.).
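The policy computation can be illustrated (a minimal sketch under strong simplifying assumptions: a small finite state space and a known probabilistic model of the persuadee; all names are hypothetical) as finite-horizon backward induction on the dialogue:

```python
def best_policy(states, actions, P, persuaded, horizon):
    """For each dialogue state, pick the argument maximising the
    probability of reaching a state where the persuadee is persuaded.
    P[s][a] lists (next_state, probability) pairs."""
    V = {s: (1.0 if s in persuaded else 0.0) for s in states}
    policy = {}
    for _ in range(horizon):
        newV = {}
        for s in states:
            if s in persuaded:
                newV[s] = 1.0
                continue
            best_a, best_v = None, V[s]
            for a in actions.get(s, []):
                v = sum(p * V[t] for t, p in P[s][a])
                if v > best_v:
                    best_a, best_v = a, v
            if best_a is not None:
                policy[s] = best_a
            newV[s] = best_v
        V = newV
    return policy, V

# Toy dialogue: a direct argument wins with probability 0.6, while a
# two-step line of argument through s1 wins with probability 0.9.
states = ["s0", "s1", "win", "lose"]
actions = {"s0": ["direct", "setup"], "s1": ["follow_up"]}
P = {"s0": {"direct": [("win", 0.6), ("lose", 0.4)],
            "setup": [("s1", 1.0)]},
     "s1": {"follow_up": [("win", 0.9), ("lose", 0.1)]}}
policy, V = best_policy(states, actions, P, persuaded={"win"}, horizon=2)
```

With a horizon of two moves, the planner prefers the indirect line of argument, since 0.9 beats the direct 0.6.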
|Speaker||Elizabeth Black and David Marzagão (King's College London)|
|Topic||Planning for Persuasion and Multi-Agent Flag Coordination Games|
|Date, Time||03.05.2017, 12:00 - 13:00|
There will be two presentations, each lasting ~20 minutes.
Planning for persuasion
Elizabeth Black, Amanda Coles, Christopher Hampson
We aim to find a winning strategy that determines the arguments a proponent should assert during a dialogue such that it will successfully persuade its opponent of some goal arguments, regardless of the strategy employed by the opponent. By restricting the strategies we consider for the proponent to what we call simple strategies and by modelling this as a planning problem, we are able to use an automated planner to generate optimal simple strategies for realistically sized problems. These strategies guarantee with a certain probability (determined by the proponent’s uncertain model of the arguments available to the opponent) that the proponent will be successful no matter which arguments the opponent chooses to assert. Our model accounts for the possibility that the proponent, when asserting arguments, may give away knowledge that the opponent can subsequently use against it; we examine how this affects both the time taken to find an optimal simple strategy and its probability of guaranteed success.
Multi-Agent Flag Coordination Games
David Kohan Marzagão, Nicolás Rivera, Colin Cooper, Peter McBurney, Kathleen Steinhöfel
Many multi-agent coordination problems can be understood as autonomous local choices between a finite set of options, with each local choice undertaken simultaneously without explicit coordination between decision-makers, and with a shared goal of achieving a desired global state or states. Examples of such problems include classic consensus problems between nodes in a distributed computer network and the adoption of competing technology standards. We model such problems as a multi-round game between agents having flags of different colours to represent the finite choice options, and all agents seeking to achieve global patterns of colours through a succession of local colour-selection choices.
We generalise and formalise the problem, proving results for the probabilities of achievement of common desired global states when these games are undertaken on bipartite graphs, extending known results for non-bipartite graphs. We also calculate probabilities for the game entering infinite cycles of non-convergence. In addition, we present a game-theoretic approach to the problem that has a mixed-strategy Nash equilibrium where two players can simultaneously flip the colour of one of the opponent's nodes in the bipartite graph before or during a flag-coordination game.
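One natural instance of such local choice rules (a voter-style rule of my own choosing, simpler than the general model above) already exhibits the non-convergence mentioned in the abstract: on a bipartite graph, a synchronous "copy a random neighbour" rule started from the two-colouring cycles forever.

```python
import random

def flag_round(colours, adj, rng):
    # One synchronous round: every node simultaneously adopts the
    # colour of a uniformly random neighbour.
    return [colours[rng.choice(adj[v])] for v in range(len(colours))]

# The 4-cycle 0-1-2-3-0 is bipartite; start from its two-colouring.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(1)
colours = ["red", "blue", "red", "blue"]
for _ in range(50):
    colours = flag_round(colours, adj, rng)
# From the two-colouring, every node's neighbours all share one
# colour, so each round merely swaps the two sides: the game never
# converges to a single colour.
```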
|Speaker||Dr Tim Miller (University of Melbourne)|
|Topic||Social Planning -- Reasoning with and about others|
|Date, Time||06.02.2017, 11:00 - 12:00|
Successful human teams operate by the individuals in those teams modelling the relevant perspective of their team mates, including what their team members can do, what they know or believe, and what their intentions are; in short, they have a Theory of Mind about their team members. To do this, they may consider how their team members are about to act, how this affects the outcomes of their own actions, and what information needs to be shared. We call this 'social planning', reflecting that such planning itself is a social activity that requires thinking about and communicating with others.
Motivated by the problem of designing and implementing artificial agents that are able to work collaboratively as part of a human-agent team, we hypothesise that artificial agents will be more human-intuitive, transparent, and trusted if they are able to adopt social planning. In recent work, we have leveraged state-of-the-art planning techniques to realise social planning in several application areas, including both collaborative and adversarial settings, mostly related to projects from the Australian Defence Force. In this talk, I will discuss some of these techniques -- in particular, multi-agent epistemic planning -- and some of these applications.