
by: Konrad Kollnig
Anastasia Shuba
Max Van Kleek
Reuben Binns
Sir Nigel Shadbolt

 
12 May 2022
Published in FAccT 2022

Tracking is a highly privacy-invasive data collection practice that has been ubiquitous in mobile apps for many years due to its role in supporting advertising-based revenue models. In response, Apple introduced two significant changes with iOS 14: App Tracking Transparency (ATT), a mandatory opt-in system for enabling tracking on iOS, and Privacy Nutrition Labels, which disclose what kinds of data each app processes. So far, the impact of these changes on individual privacy and control has not been well understood. This paper addresses this gap by analysing two versions of 1,759 iOS apps from the UK App Store: one version from before iOS 14 and one that has been updated to comply with the new rules.
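
For context, the sketch below (in Swift; illustrative only, not taken from the paper) shows the mechanism that ATT introduces: an app must request the user's opt-in via ATTrackingManager before it can read a meaningful IDFA, and without that authorisation the identifier is returned as all zeros.

```swift
import AppTrackingTransparency
import AdSupport

/// Ask for the ATT opt-in, then read the IDFA only if the user agreed.
func requestIDFA(completion: @escaping (UUID?) -> Void) {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Opt-in granted: the real advertising identifier is available.
            completion(ASIdentifierManager.shared().advertisingIdentifier)
        case .denied, .restricted, .notDetermined:
            // No opt-in: apps only see 00000000-0000-0000-0000-000000000000.
            completion(nil)
        @unknown default:
            completion(nil)
        }
    }
}
```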

We find that Apple’s new policies, as promised, prevent the collection of the Identifier for Advertisers (IDFA), an identifier for cross-app tracking. Smaller data brokers that engage in invasive data practices will now find it harder to track users - a positive development for privacy. However, the number of tracking libraries in the studied apps has stayed roughly the same. Many apps still collect device information that can be used to track users at a group level (cohort tracking) or identify individuals probabilistically (fingerprinting). We find real-world evidence of apps computing and agreeing on a fingerprinting-derived identifier through the use of server-side code, thereby violating Apple’s policies. We find that Apple itself engages in some forms of tracking and exempts invasive data practices like first-party tracking and credit scoring. We also find that the new Privacy Nutrition Labels are sometimes inaccurate and misleading.
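
To illustrate what fingerprinting means here (a hypothetical sketch, not code recovered from any studied app): a handful of broadly accessible device attributes can be combined and hashed into an identifier that is often distinctive enough to recognise a device probabilistically. In the server-side practice described above, such signals would be sent to a backend that derives and returns the shared identifier.

```swift
import UIKit
import CryptoKit

/// Hypothetical sketch: derive a probabilistic device fingerprint by hashing
/// a few attributes that are identical across an app and its tracking partners.
func probabilisticFingerprint() -> String {
    let device = UIDevice.current
    let screen = UIScreen.main.bounds
    let signals = [
        device.model,                        // e.g. "iPhone"
        device.systemName,                   // e.g. "iOS"
        device.systemVersion,                // e.g. "15.4"
        "\(Int(screen.width))x\(Int(screen.height))",
        Locale.current.identifier,           // e.g. "en_GB"
        TimeZone.current.identifier          // e.g. "Europe/London"
    ]
    // In the server-side variant, these signals would be uploaded and the
    // identifier computed remotely; here we hash them locally for illustration.
    let digest = SHA256.hash(data: Data(signals.joined(separator: "|").utf8))
    return digest.map { String(format: "%02x", $0) }.joined()
}
```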

Overall, our findings suggest that, while tracking individual users is more difficult now, the changes reinforce existing market power of gatekeeper companies with access to large troves of first-party data and motivate a countermovement.

by: Konrad Kollnig
Anastasia Shuba
Reuben Binns
Max Van Kleek
Sir Nigel Shadbolt

 
01 Apr 2022
Published in PETS 2022

While many studies have looked at privacy properties of the Android and Google Play app ecosystem, comparatively little is known about iOS and the Apple App Store, the most widely used mobile ecosystem in the US. At the same time, there is increasing competition around privacy between these smartphone operating system providers.

In this paper, we present a study of 24k Android and iOS apps from 2020 along several dimensions relating to user privacy. We find that third-party tracking and the sharing of unique user identifiers was widespread in apps from both ecosystems, even in apps aimed at children. In the children’s category, iOS apps tended to use less advertising-related tracking than their Android counterparts, but could more often access children’s location.

Across all studied apps, our study highlights widespread potential violations of US, EU and UK privacy law, including 1) the use of third-party tracking without user consent, 2) the lack of parental consent before sharing personally identifiable information (PII) with third parties in children’s apps, 3) the non-data-minimising configuration of tracking libraries, 4) the sending of personal data to countries without an adequate level of data protection, and 5) the continued absence of transparency around tracking, partly due to design decisions by Apple and Google.

Overall, we find that neither platform is clearly better than the other for privacy across the dimensions we studied.

by: Siddhartha Datta
Konrad Kollnig
Sir Nigel Shadbolt

 
21 Dec 2021
Published in IUI 2022

Digital harms are widespread in the mobile ecosystem. As these devices gain ever more prominence in our daily lives, so too increases the potential for malicious attacks against individuals. The last line of defense against a range of digital harms - including digital distraction, political polarisation through hate speech, and children being exposed to damaging material - is the user interface. This work introduces GreaseTerminator to enable researchers to develop, deploy, and test interventions against these harms with end-users. We demonstrate the ease of intervention development and deployment, as well as the broad range of harms potentially covered with GreaseTerminator in five in-depth case studies.

Third-party tracking, the collection and sharing of behavioural data about individuals, is a significant and ubiquitous privacy threat in mobile apps. The EU General Data Protection Regulation (GDPR) was introduced in 2018 to protect personal data better, but there exists, thus far, limited empirical evidence about its efficacy. This paper studies tracking in nearly two million Android apps from before and after the introduction of the GDPR. Our analysis suggests that there has been limited change in the presence of third-party tracking in apps, and that the concentration of tracking capabilities among a few large gatekeeper companies persists. However, change might be imminent.


Winner of the Student Paper Award of the FPF Privacy Papers for Policymakers 2022

Third-party tracking allows companies to collect users’ behavioural data and track their activity across digital devices. This can put deep insights into users’ private lives into the hands of strangers, and often happens without users’ awareness or explicit consent. EU and UK data protection law, however, requires consent, both 1) to access and store information on users’ devices and 2) to legitimate the processing of personal data as part of third-party tracking, as we analyse in this paper.

This paper further investigates whether and to what extent consent is implemented in mobile apps. First, we analyse a representative sample of apps from the Google Play Store. We find that most apps engage in third-party tracking, but few obtained consent before doing so, indicating potentially widespread violations of EU and UK privacy law. Second, we examine the most common third-party tracking libraries in detail. While most acknowledge that they rely on app developers to obtain consent on their behalf, they typically fail to put in place robust measures to ensure this: disclosure of consent requirements is limited; default consent implementations are lacking; and compliance guidance is difficult to find, hard to read, and poorly maintained.
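
As an illustration of what a missing default consent implementation could look like in practice, here is a hypothetical sketch (written in Swift for consistency with the examples above, although the study concerns Android apps; the SDK interface is invented for illustration) in which tracking only starts after an explicit, recorded opt-in:

```swift
import Foundation

// Hypothetical interface standing in for any third-party tracking SDK.
protocol TrackingSDK {
    func start(collectionEnabled: Bool)
}

/// Consent gate: tracking is only initialised after an explicit, recorded
/// opt-in; the default, with no stored choice, is "no tracking".
final class ConsentGate {
    private let consentKey = "analytics_consent_granted"

    var hasConsent: Bool {
        UserDefaults.standard.bool(forKey: consentKey)   // defaults to false
    }

    func record(consentGranted: Bool) {
        UserDefaults.standard.set(consentGranted, forKey: consentKey)
    }

    func startIfConsented(_ sdk: TrackingSDK) {
        sdk.start(collectionEnabled: hasConsent)
    }
}
```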


by: Anirudh Ekambaranathan
Jun Zhao
Max Van Kleek

 
06 May 2021
Published in CHI 2021

The industry for children’s apps is thriving at the cost of children’s privacy: these apps routinely disclose children’s data to multiple data trackers and ad networks. As children spend increasing time online, such exposure accumulates to long-term privacy risks. In this paper, we used a mixed-methods approach to investigate why this is happening and how developers might change their practices. We base our analysis on 5 leading data protection frameworks that set out requirements and recommendations for data collection in children’s apps. To understand developers’ perspectives and constraints, we conducted 134 surveys and 20 semi-structured interviews with popular Android children’s app developers. Our analysis revealed that developers largely respect children’s best interests; however, they have to make compromises due to limited monetisation options, the perceived harmlessness of certain third-party libraries, and the limited availability of design guidelines. We identified concrete approaches and directions for future research to help overcome these barriers.


Dark patterns in mobile apps take advantage of cognitive biases of end-users and can have detrimental effects on people’s lives. Despite growing research into remedies for dark patterns, and established solutions for desktop browsers, no comparable methodology yet exists for reducing dark patterns in mobile apps. Our work introduces GreaseDroid, a community-driven app modification framework enabling non-expert users to disable dark patterns in apps selectively.

by: Liz Dowthwaite
Helen Creswick
Virginia Portillo
Jun Zhao
Menisha Patel
Elvira Perez Vallejos
Ansgar Koene
Marina Jirotka

 
17 Jun 2020
Published in IDC 2020

Children and young people make extensive and varied use of digital and online technologies, yet issues about how their personal data may be collected and used by online platforms are rarely discussed. Additionally, despite calls to increase awareness, schools often do not cover these topics, instead focusing on online safety issues, such as being approached by strangers, cyberbullying or access to inappropriate content. This paper presents the results of one of the activities run as part of eleven workshops with 13-18 year olds, using co-designed activities to encourage critical thinking. Sets of ‘data cards’ were used to stimulate discussion about sharing and selling of personal data by online technology companies. Results highlight the desire and need for increased awareness about the potential uses of personal data amongst this age group, and the paper makes recommendations for embedding this into school curriculums as well as incorporating it into interaction design, to allow young people to make informed decisions about their online lives.


The increasingly widespread use of ‘smart’ devices has raised multifarious ethical concerns regarding their use in domestic spaces. Previous work examining such ethical dimensions has typically either involved empirical studies of concerns raised by specific devices and use contexts, or alternatively expounded on abstract concepts like autonomy, privacy or trust in relation to ‘smart homes’ in general.

This paper attempts to bridge these approaches by asking what features of smart devices users consider as rendering them ‘smart’ and how these relate to ethical concerns. Through a multimethod investigation including surveys with smart device users (n=120) and semi-structured interviews (n=15), we identify and describe eight types of smartness and explore how they engender a variety of ethical concerns including privacy, autonomy, and disruption of the social order. We argue that this middle ground, between concerns arising from particular devices and more abstract ethical concepts, can better anticipate potential ethical concerns regarding smart devices.


Beyond being the world’s largest social network, Facebook is for many also one of its greatest sources of digital distraction. For students, problematic use has been associated with negative effects on academic achievement and general wellbeing.

To understand what strategies could help users regain control, we investigated how simple interventions to the Facebook UI affect behaviour and perceived control. We assigned 58 university students to one of three interventions: goal reminders, removed newsfeed, or white background (control). We logged use for 6 weeks, applied interventions in the middle weeks, and administered fortnightly surveys.

Both goal reminders and removed newsfeed helped participants stay on task and avoid distraction. However, goal reminders were often annoying, and removing the newsfeed made some fear missing out on information. Our findings point to future interventions such as controls for adjusting types and amount of available information, and flexible blocking which matches individual definitions of ‘distraction’.


Connected devices in the home represent a potentially grave new privacy threat due to their unfettered access to the most personal spaces in people’s lives. Prior work has shown that despite concerns about such devices, people often lack sufficient awareness, understanding, or means of taking effective action.

To explore the potential for new tools that support such needs directly, we developed Aretha, a privacy assistant technology probe that combines a network disaggregator, personal tutor, and firewall, to empower end-users with both the knowledge and mechanisms to control disclosures from their homes. We deployed Aretha in three households over six weeks, with the aim of understanding how this combination of capabilities might enable users to gain awareness of data disclosures by their devices, form educated privacy preferences, and block unwanted data flows.

The probe, with its novel affordances—and its limitations—prompted users to co-adapt, finding new control mechanisms and suggesting new approaches to address the challenge of regaining privacy in the connected home.

by: Menisha Patel
Helena Webb
Marina Jirotka
Alan Davoust
Ross Gales
Michael Rovatsos

 
30 Dec 2019
Published in ECIAIR 2019

In this paper we describe our experience conducting an ‘ethical hackathon’ to promote the ethical design of AI systems. The model of the ethical hackathon has been developed by researchers in the Human Centred Computing theme as a novel twist on the conventional hackathon competition. Ethical hackathons are fun, educational events in which interdisciplinary teams compete on a design challenge that requires them to consider how responsibility mechanisms can be embedded into what they are building.

The ethical hackathon described in this paper was part of the UnBias project. In the paper we highlight the potential for these events to foster the ethical design and development of AI systems but also identify some practical challenges in running them. We conclude that a successful ethical hackathon needs to foster genuine interdisciplinarity and carefully manage participant expectations. We build on our own experiences by suggesting ways to optimise the ethical hackathon model.


X-Ray Refine

Supporting the Exploration and Refinement of Information Exposure Resulting from Smartphone Apps

Most smartphone apps collect and share information with various first and third parties; yet, such data collection practices remain largely unbeknownst to, and outside the control of, end-users.

In this paper, we seek to understand the potential for tools to help people refine their exposure to third parties, resulting from their app usage. We designed an interactive, focus-plus-context display called X-Ray Refine (Refine) that uses models of over 1 million Android apps to visualise a person’s exposure profile based on their durations of app use. To support exploration of mitigation strategies, Refine can simulate actions such as app usage reduction, removal, and substitution.

A lab study of Refine found participants achieved a high-level understanding of their exposure, and identified data collection behaviours that violated both their expectations and privacy preferences. Participants also devised bespoke strategies to achieve their privacy goals and identified the key barriers to doing so.

by: Adrian Gradinar
Max Van Kleek
Larissa Pschetz
Paul Coulton
Joseph Lindley

 
16 Sep 2019
Published in PETRAS

In our first Little Book in the PETRAS series we explained the term Internet of Things (IoT) as follows:

“… the term [is used] to describe objects or things that can be interconnected via the Internet. This allows them to be readable, recognizable, locatable, addressable, and/or controllable by computers. The things themselves can be literally anything. Later in the book we use examples such as a kettle, a door lock, an electricity meter, a toy doll and a television but it’s important to remember that there is no limit on what is or is not an IoT thing. Anything that is connected to the Internet is arguably part of the IoT including us.”

In this book we focus on IoT products and services targeting the consumer market, in particular, those for use in our homes. These connected products are often referred to as ‘smart’ and our IoT-enabled homes are often called ‘smart homes’. The promise of smart homes filled with connected products is frequently promoted as a way of making our lives easier and more convenient. For example, the Roomba robotic vacuum cleaner claims to allow you to “Forget about vacuuming for weeks at a time” and that it [the robot] is smart enough to know if your cat has tracked its litter through the house.


by: Ulrik Lyngs
Kai Lukoff
Petr Slovak
Reuben Binns
Adam Slack
Michael Inzlicht
Max Van Kleek
Sir Nigel Shadbolt

 
01 May 2019
Published in CHI 2019

Many people struggle to control their use of digital devices. However, our understanding of the design mechanisms that support user self-control remains limited.

In this paper, we make two contributions to HCI research in this space: first, we analyse 367 apps and browser extensions from the Google Play, Chrome Web, and Apple App stores to identify common core design features and intervention strategies afforded by current tools for digital self-control. Second, we adapt and apply an integrative dual systems model of self-regulation as a framework for organising and evaluating the design features found.

Our analysis aims to help the design of better tools in two ways: (i) by identifying how, through a well-established model of self-regulation, current tools overlap and differ in how they support self-control; and (ii) by using the model to reveal underexplored cognitive mechanisms that could aid the design of new tools.


Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to ‘meaningful information about the logic’ behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people’s perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions.

Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles—under repeated exposure of one style, scenario effects obscure any explanation effects.

Our results suggest there may be no ‘best’ approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.

by: Reuben Binns
Ulrik Lyngs
Max Van Kleek
Jun Zhao
Timothy Libert
Sir Nigel Shadbolt

 
30 May 2018
Published in WebSci '18

This paper won a ‘Best Paper’ award at WebSci ‘18, the 10th ACM Conference on Web Science.

Third party tracking allows companies to identify users and track their behaviour across multiple digital services. This paper presents an empirical study of the prevalence of third-party trackers on 959,000 apps from the US and UK Google Play stores.

We find that most apps contain third party tracking, and the distribution of trackers is long-tailed with several highly dominant trackers accounting for a large portion of the coverage. The extent of tracking also differs between categories of apps; in particular, news apps and apps targeted at children appear to be amongst the worst in terms of the number of third party trackers associated with them.

Third party tracking is also revealed to be a highly trans-national phenomenon, with many trackers operating in jurisdictions outside the EU. Based on these findings, we draw out some significant legal compliance challenges facing the tracking industry.


by: Michael Veale
Max Van Kleek
Reuben Binns

 
01 May 2018
Published in CHI 2018

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions—like taxation, justice, and child protection—are now commonplace. How might designers support such human values?

We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and ‘discrimination-aware’ machine learning—absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the ‘street-level bureaucrats’ on the frontlines of public service.

We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications.

Equating users’ true needs and desires with behavioural measures of ‘engagement’ is problematic. However, good metrics of ‘true preferences’ are difficult to define, as cognitive biases make people’s preferences change with context and exhibit inconsistencies over time. Yet, HCI research often glosses over the philosophical and theoretical depth of what it means to infer what users really want.

In this paper, we present an alternative yet very real discussion of this issue, via a fictive dialogue between senior executives in a tech company aimed at helping people live the life they ‘really’ want to live. How will the designers settle on a metric for their product to optimise?