Sherlock Holmes sent you a friend request

On digital user-tracking

Henri Brebant
Nov 2, 2018

Disclaimer: This blog post was written as an assignment for DPI-662 ‘Digital Government’ class at the Harvard Kennedy School of Government

In this article, I will discuss how Facebook collects data on our behaviours. Although I consent to this collection, a persistent feeling of discomfort remains, because I fear that my data can at any time be misused by ill-intentioned actors.

1. On intentionality

Imagine that you live in the nineteenth century. In the streets of London or Vienna, young boys sell the latest newspapers. One follows you to your house and tells The Observer where you live and whether you like Pushkin or Shakespeare. With the help of an address book and theatre tickets, zealous journalists could probably identify who you are. So what? Unless the boy reveals your love affairs or spreads misinformation about you, you probably would not care. But if you were to spot Sherlock Holmes' trench coat at your doorstep, spying on your every move for what you think is an investigation to make the neighborhood safer, you would probably feel uncomfortable with someone analyzing your behaviour. The key difference between the boy and Sherlock Holmes is the intention that accompanies data collection, and whether it can be used to infringe on either our intimacy or our freedom. We generate data as we breathe, but data analysis and interpretation is where privacy can be breached, and for that you need a human intention to begin with. So to what extent should we accept privacy invasion, and what does it look like in the digital world?

2. The Facebook time bomb

Facebook collects data at several overlapping layers:

  • all the private data I provide: profile information, email, birthday, religious beliefs, gender, contacts (if I upload them from my phone);
  • the data I generate by interacting with the Facebook platform and its related services: connections, messages, likes, time spent reading pages, transactions on the platform (including credit card numbers);
  • the data I generate by interacting with external platforms through the Facebook log-in (like Spotify);
  • the metadata associated with my Internet usage: the dates and times I engage in certain activities, geolocation, the devices and networks I use, cookies;
  • the data I do not generate myself but that others generate about me: for instance, when they comment on pictures I appear in.

In short, Facebook knows everything. Because data is social, Facebook derives value by aggregating mine with that of 2.1 billion other users and constantly training machine-learning algorithms on it.

Key uses include:

  • Personalization and improvement of the services I benefit from, notably by tailoring my News Feed and enabling facial recognition on pictures.
  • The sale of services to external counterparts that leverage my personal data, especially through Facebook Ads, which let brands push me targeted content based on my profile and cookies and analyze how I responded. The ads can also be political, and therefore be based on how Facebook's algorithms assess my political leanings. Other products like Facebook Pixel allow retailers to track my behavior on their own websites and associate it with the ads I had been targeted by. In particular, developers of external platforms can receive my Facebook data (likes, comments on their page). Basically, Facebook serves as a giant depository of personal identifiers to which platforms can connect to retarget their services.
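In principle, a tracking pixel is just a 1x1 image whose URL carries a user identifier, so that every page load becomes a logged event on the tracker's server. The sketch below is a hypothetical illustration of that mechanism; the host, parameter names, and log schema are invented for this example and are not Facebook's actual API.

```python
from urllib.parse import urlencode

# Hypothetical tracker endpoint — illustrative only, not a real Facebook URL.
PIXEL_HOST = "https://tracker.example.com/px.gif"

def pixel_url(user_id: str, page: str, event: str = "PageView") -> str:
    """Build the URL of the invisible 1x1 image embedded in a retailer's page.
    Requesting this image is what reports the visit to the tracker."""
    params = {"uid": user_id, "page": page, "ev": event}
    return f"{PIXEL_HOST}?{urlencode(params)}"

# Server side: each image request becomes a row in a behavior log, keyed by
# the same user id that a cookie carries across every participating site.
log = []

def record_hit(query: dict) -> None:
    log.append((query["uid"], query["page"], query["ev"]))

url = pixel_url("user-42", "/shoes/checkout", "Purchase")
record_hit({"uid": "user-42", "page": "/shoes/checkout", "ev": "Purchase"})
```

Because the same `uid` appears in requests from many unrelated websites, the tracker can join those rows into a single cross-site behavioral profile — which is exactly what makes retargeting possible.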

I can contest Facebook's neutral stance on ad policies: a more ethical platform would probably screen political and unethical ads more strictly. I can also protest against the proliferation of non-social content like kitty videos, which the News Feed highlights because it is viral (hence making me spend more time on the platform), some of it pushed by malicious websites. But this is not my biggest fear for privacy.

My fear is that Facebook is a constant time bomb that can be instrumentalized by Sherlock Holmes-like, truly ill-intentioned actors, with technology as a facilitator of harm. Take, for example, my geolocation history for May 2nd, 2016. By crossing simple mapping data with my frequent-location history, any ill-intentioned actor could see that I went to work at 9:30 before visiting a client for a meeting and then the client's project site. My Facebook account is a gateway to uncovering sensitive business information.
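The inference above requires no sophistication: grouping repeated location pings on a coarse grid is enough to reveal where someone works. A minimal sketch, using invented coordinates rather than any real location export:

```python
from collections import Counter

# Invented sample of timestamped location pings (time, (lat, lon)).
pings = [
    ("09:30", (48.8700, 2.3300)),  # same place most mornings
    ("11:00", (48.8700, 2.3300)),
    ("14:00", (48.8600, 2.3500)),  # a client's office
    ("16:00", (48.8400, 2.3200)),  # the client's project site
    ("09:30", (48.8700, 2.3300)),  # next day, same morning spot
]

def round_loc(loc, precision=3):
    """Snap coordinates to a coarse grid so repeat visits cluster together."""
    return (round(loc[0], precision), round(loc[1], precision))

# The most frequently visited grid cell is a strong guess for "workplace";
# cross-referencing it with public mapping data would name the building.
visits = Counter(round_loc(loc) for _, loc in pings)
likely_workplace, count = visits.most_common(1)[0]
```

A few lines of counting, plus a public map, turn raw geolocation history into sensitive knowledge — no machine learning needed.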

3. So what: discomfort exists, but it is not all black and white

In short, Facebook monetizes my personal data and Apple does not. In both cases, it is a digital contract I have with these firms, one I am supposed to be conscious of under the GDPR philosophy. My discomfort about the loss of privacy stems from:

  • The loss of anonymity — hyper-personalization equates to identifiability. If you risk losing your anonymity at any time, you feel restricted in your freedom to express yourself online;
  • The loss of intimacy — if data is collected all the time, then you no longer have the "right to be let alone" that was linked to privacy in the aftermath of the Industrial Revolution (see Warren & Brandeis, 1890);
  • The loss of control — if all your data is stored indefinitely (because storage is so cheap), great uncertainty exists over how your data will be used or transferred. The OECD recommends that only necessary data be collected;
  • The loss of autonomy — Certain content can modify significantly your behaviours and opinions (political ads for example), and you have little control over the algorithmic regulation which arbitrates for pushing you certain types of content;
  • The loss of secrecy — a machine will not uncover secrets on its own, but a human intentionally can, as a Sherlock Holmes could with my Facebook location history;
  • The loss of integrity — as highlighted in section 2, my biggest fear is that an ill-intentioned Sherlock Holmes conflates the analysis of my personality traits (targeting, predicted behavior) with my identity. All in all, it is the association of private data analysis with my identity that truly violates my privacy.

4. What can we do about it?

Three actions can be outlined to offset negative feelings about user tracking:

  • Discomfort can be mitigated with greater control and empowerment over my own data. This is at the core of the GDPR philosophy (ownership and portability of personal data), which is increasingly embraced by tech giants.
  • Fear of misuse by ill-intentioned agents requires strengthening security and breaking down user databases, so as to separate identifying data from behavioral data.
  • Greater transparency should be given to users about how often their data is used, such as how many ads have targeted them.
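The separation of identifying data from behavioral data suggested above can be sketched as pseudonymization: identity fields live in one store, behavior in another, and the only link is a random key. The schema and names below are invented for illustration.

```python
import secrets

# Two stores linked only by a random pseudonym.
identity_db = {}   # pseudonym -> identifying fields (kept under strict access)
behavior_db = []   # rows carry only the pseudonym, never name or email

def register(name: str, email: str) -> str:
    """Create a user, returning a random link key that is meaningless alone."""
    pseudonym = secrets.token_hex(8)
    identity_db[pseudonym] = {"name": name, "email": email}
    return pseudonym

def log_event(pseudonym: str, event: str) -> None:
    """Record behavior against the pseudonym only."""
    behavior_db.append({"user": pseudonym, "event": event})

pid = register("Henri", "henri@example.com")
log_event(pid, "liked_page")
# An attacker who steals behavior_db alone sees pseudonyms, not identities;
# re-identification requires also breaching the separately guarded identity_db.
```

This does not make re-identification impossible (rich behavioral data can itself be identifying), but it raises the cost: the Sherlock Holmes of section 2 would need to breach two databases instead of one.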
