RoboTIPS
Developing Responsible Robots for the Digital Economy
RoboTIPS: Developing Responsible Robots for the Digital Economy aims to develop a new approach to responsible innovation (RI) in the context of trustworthy and secure technologies. The principal challenge is to embed responsible innovation into technology developers’ practices and to create positive cases of RI in action.
The five-year project is an EPSRC Established Career Digital Economy Fellowship awarded to Professor Marina Jirotka. It is run in collaboration with Alan Winfield, Professor of Robot Ethics at the University of the West of England. Marina and Alan have been working together for some time to develop the concept of the ‘ethical black box’: the proposal that robots should be equipped with a ‘black box’ equivalent to the flight data recorders used in aviation. The black box continuously records sensor readings and relevant internal status data, and its scope can be extended to also capture the AI decision-making process and environmental factors in the period before an adverse incident. This data can provide crucial evidence following accidents or adverse incidents involving robots: it can help us understand why a robot behaved in the way it did, and so support recommendations for changes that prevent similar incidents or limit the potential damage caused. The ethical black box thus provides a pathway to greater safety and public trust in robots.
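As a rough illustration of the recorder idea, the sketch below keeps a rolling window of timestamped sensor, status and decision records, so that the moments leading up to an incident can be replayed afterwards. This is a minimal sketch only; the record fields and class names are illustrative assumptions, not the project’s actual design.

```python
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Record:
    timestamp: float
    sensors: dict    # e.g. {"lidar_min_dist": 0.42} -- illustrative fields
    status: dict     # e.g. {"battery": 0.87, "mode": "NAVIGATE"}
    decision: str    # summary of the control/AI decision taken

class EthicalBlackBox:
    """Ring buffer keeping the most recent N records, in the spirit of
    an aviation flight data recorder (hypothetical sketch)."""

    def __init__(self, capacity: int = 1000):
        # deque with maxlen silently discards the oldest entries
        self._buffer = deque(maxlen=capacity)

    def log(self, sensors: dict, status: dict, decision: str) -> None:
        self._buffer.append(Record(time.time(), sensors, status, decision))

    def dump(self) -> list:
        """Return records oldest-first for post-incident analysis."""
        return list(self._buffer)

# Simulate five control steps with a capacity of three records:
ebb = EthicalBlackBox(capacity=3)
for step in range(5):
    ebb.log({"step": step}, {"mode": "NAVIGATE"}, "advance")
trace = ebb.dump()
# Only the three most recent records survive, oldest first.
```

The bounded buffer mirrors the flight-recorder design choice the paragraph describes: continuous logging with finite storage, trading history depth for guaranteed capture of the window immediately before an incident.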
The project brings together an experienced team of researchers, including Dr Helena Webb, who works closely with Marina as part of the Human Centred Computing theme at Oxford.
Over the five years, Marina and her team will develop:
- a grounded approach to RI for technology in the digital economy
- practical methods for developers of autonomous systems to design accountable technologies
- new modes of accountability and new concepts of liability suitable for collectives of both human and robot agents
- a first concrete example of a Digital Economy technology developed through a responsible innovation approach
Investigators:
- PI: Marina Jirotka
- Researcher: Helena Webb
RETCON
PETRAS2: Red Teaming the Connected Internet of Things
RETCON is a subproject of PETRAS2 that explores wide-ranging and creative methodologies for deliberately influencing or disrupting the behaviour of (sociotechnical) systems involving IoT devices. It will generate a ‘red team’ methodology covering the many ways in which IoT systems might be compromised: penetration testing with demonstrators, simulation systems, examining cheating in connected gamified environments (given the norms they rely on), and developing ‘imposter’ devices. AI techniques are relevant because they can not only be interfered with but also used as part of an attack, for instance when systems ‘pretend to be human’. In the latter respect, the RETCON project will work with secondary projects from the usability lens, such as the UncanAI project. Sociotechnical RETCON attacks will provide dual benefit to other projects, for example by serving to test and improve the design patterns generated through the RIoTE project.
About PETRAS2
The PETRAS National Centre of Excellence exists to ensure that technological advances in the Internet of Things (IoT) are developed and applied in consumer and business contexts, safely and securely. We do this by considering social and technical issues relating to the cybersecurity of IoT devices, systems and networks.
- PI: Max Van Kleek (Computer Science)
- Co-I: Dave De Roure (Engineering)
- Researchers: Petar Radanliev, Reuben Binns
Recent changes to data protection regulation, particularly in Europe, are reshaping the design landscape for smart devices, requiring new design techniques to ensure that devices can adequately protect users’ data. A particularly interesting space in which to explore and address these challenges is the smart home, which presents a multitude of difficult social and technical problems in an intimate and highly private context. This position paper outlines the motivation and research approach of a new project aiming to inform the future of data protection by design and by default in smart homes through a combination of ethnography and speculative design.
- PI: Ivan Flechais
- Co-I: Max Van Kleek
- Researchers: William Seymour, Martin Kraemer, George Chalhoub, Claudine Tinsman
ReTIPS
PETRAS: Respectful Things in Private Spaces
Respectful Things in Private Spaces was a research project under the umbrella of PETRAS that examined the privacy needs of highly private spaces within people’s homes, to inform how smart home IoT devices might be redesigned for them.
Outputs of the project include ARETHA and IoT Refine.
About PETRAS
The PETRAS National Centre of Excellence exists to ensure that technological advances in the Internet of Things (IoT) are developed and applied in consumer and business contexts, safely and securely. We do this by considering social and technical issues relating to the cybersecurity of IoT devices, systems and networks.
- PI: Nigel Shadbolt (Computer Science)
- Co-I: Max Van Kleek (Computer Science)
- Researchers: Reuben Binns
UnBias
Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
In an age of ubiquitous data collection, analysis and processing, how can citizens judge the trustworthiness and fairness of systems that rely heavily on algorithms? News feeds, search engine results and product recommendations increasingly use personalisation algorithms to help us cut through the mountains of available information and find the bits that are most relevant, but how can we know whether the information we get really is the best match for our interests?
There is no such thing as a neutral algorithm. As anyone who has ever created something knows, even something as simple as a meal, the act of creating inevitably involves choices that affect the properties of the final product. Despite this truism, recommendations and selections made by algorithms are commonly presented to consumers as if they were inherently free from (human) bias and ‘fair’ because the decisions are ‘based on data’. During the recent controversy about possible political bias in Facebook’s Trending Topics, for instance, the focus was almost exclusively on the role of the human editors, even though 95% or more of the news selection process is done by algorithms. Human judgements, however, are ultimately also based on data.
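The point that ‘based on data’ does not mean neutral can be shown with a toy example. Both rankers below are purely data-driven over the same articles, yet the choice of which signal to rank by, itself a design decision, produces different ‘top stories’. The data and signal names are invented for illustration, not drawn from any real feed.

```python
# Two equally "data-based" rankers over the same (made-up) articles.
articles = [
    {"title": "A", "clicks": 900, "shares": 10},
    {"title": "B", "clicks": 100, "shares": 80},
]

# Ranker 1: most-clicked first; Ranker 2: most-shared first.
by_clicks = sorted(articles, key=lambda a: a["clicks"], reverse=True)
by_shares = sorted(articles, key=lambda a: a["shares"], reverse=True)

top_by_clicks = by_clicks[0]["title"]  # article "A" wins
top_by_shares = by_shares[0]["title"]  # article "B" wins
```

Neither ranking is ‘wrong’, but each embeds a human choice about what counts as relevant, which is exactly the kind of design decision UnBias seeks to make visible.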
Starting in September 2016, the EPSRC-funded project “UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy” will look at the user experience of algorithm-driven internet services and the process of algorithm design. A large part of this work will involve user group studies to understand the concerns and perspectives of citizens. UnBias aims to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders. The toolkit will include educational materials and resources to support young people’s understanding of online environments, as well as to raise awareness among online providers about the concerns and rights of young internet users. The project matters for young people, and for society as a whole, in ensuring that trust and transparency are not missing from the internet. The project is led by the University of Nottingham in collaboration with the University of Oxford and the University of Edinburgh. More information about project activities can be found on the project website.
SOCIAM
The Theory and Practice of Social Machines
From Wikipedia to Facebook, social machines have become integral to our daily lives. Digital networked technology now routinely enables coordination of collective action, releasing the power of decentralised hybrid human-machine problem-solving at scale.
The SOCIAM project was funded by the EPSRC from 2012 to 2018 and brought together three leading UK universities to produce the first major interdisciplinary research insights into the realm of social machines. Using methods from computer science, mathematics, social science, network science and data science, SOCIAM delivered theory, data and practical knowledge for the innovative design of social machines.
In 2019, the project’s work was summarised in the book ‘The Theory and Practice of Social Machines’ by Nigel Shadbolt, Kieron O’Hara, David De Roure, and Wendy Hall. In addition to explaining the concept of social machines in detail, and describing the innovative research methods pioneered within SOCIAM, the book considers ethical issues and future research trends. The book is published by Springer and available in hardcover and as an ebook.