
Committee publishes evidence highlighting that the public don't know what happens to their data

19 June 2019

The UK Human Rights Committee, made up of MPs and Peers and chaired by Harriet Harman MP, publishes written evidence submitted to its new inquiry into the right to privacy and the digital revolution.

This coincides with the first oral evidence session, where the Committee will question witnesses about the risks and opportunities arising from the collection and aggregation of data, particularly by companies with data-driven business models.

Witnesses

19 June 2019, Committee room 1, Palace of Westminster

At 3.15pm

  • Steve Wood, Deputy Commissioner (Policy), Information Commissioner's Office
  • Natasha Lomas, Editor, TechCrunch
  • Professor Orla Lynskey, Associate Professor of Law, Department of Law, London School of Economics
  • Antony Walker, Deputy Chief Executive Officer, techUK

The Committee is also publishing written evidence submitted to its new inquiry from the following organisations:

Further evidence to the inquiry is also being published on the website this afternoon at 3.15pm.

The evidence demonstrates that:

People don’t know what happens to their data

Many submissions argue that the vast majority of individuals do not understand what happens to their data and therefore do not give meaningful consent when using online services.

Similarly, a recent report by Doteveryone showed that 62% of people are unaware that social media companies make money by selling data to third parties; 47% feel they have no choice but to sign up to terms and conditions, even if they have concerns about them; and 51% say they have signed up to services online without understanding the terms and conditions, even after trying to read them.

Research by the Norwegian Consumer Council suggested that default settings on Facebook, Google and other social media encourage users to select more privacy-invasive options, stating that "In our view, the question could be raised whether the default settings in the Facebook and Google popups are contrary to privacy by default and informed consent." Privacy campaigners have argued that consumers should have the choice to opt into data tracking.

Profiling consumers

Combining data from different sources can enable private companies to build up a very detailed profile of individuals: not just shopping habits, but more sensitive information such as political and religious views, socio-economic status, sexual orientation and other details of their family life.

Privacy International told us in their submission:

“Companies routinely derive data from other data, such as determining how often someone calls their mother to calculate their credit-worthiness. As a result, potentially sensitive data can be inferred from seemingly mundane data, such as future health risks. Combined data can reveal peoples’ political and religious views; socioeconomic status; finances; spending habits; physical and mental health; relationships status; sexual preferences; family planning; internet browsing activities; and more. Combining data may expose patterns of behaviour people themselves are not aware of and highly sensitive information that they did not knowingly provide.”

Data collection and the ‘chilling effect’

Many submissions argued that the collection of data has a negative impact on other human rights, such as freedom of speech and expression, with consumers much more likely to self-censor if they feel they are being watched.

Liberty told us in their submission:

"That private companies exploit our data for commercial purposes is now a normalised part of our everyday existence. The data collected can reveal and manipulate our deepest and most sensitive thoughts and feelings – including our political views… The normalisation of these processes also threatens our freedom of expression and association by making it clear that we are being watched. Studies have shown that we are likely to censor what we post on social media or what we look up online when we are aware they are being surveilled."

Discrimination

Written submissions from the Information Commissioner's Office, Liberty and Privacy International state that there is growing evidence that inherent biases are built into algorithms, resulting in the risk of discriminatory outcomes. For example, research suggests that even where race is not explicitly considered, discrimination can still be inadvertent. Liberty's submission quotes research by Goodman et al. which states: "if a certain geographic region has a high number of low income or minority residents, an algorithm that employs geographic data to determine loan eligibility is likely to produce results that are, in effect, informed by race and income."
