Reflections on UK IGF Panel Discussion: Privacy and Security in the Internet Age

by Robin Wilton

Technical Outreach Director – Identity and Privacy
Internet Society

1 – The “privacy versus security” fallacy

2 – What factors shape the privacy landscape

3 – An ISOC Trust and Identity perspective

1 – The “privacy versus security” fallacy

I’d like to start by quoting two remarks that were made at the recent WSIS Forum in Geneva on a similar panel to this one. The first is, in turn, a quotation from Joe Alhadeff of Oracle Corp., who said:

“It’s not enough to balance privacy against security: you have to optimise for both”.

The second is from Audrey Plonk of Intel Corp.:

“Privacy and security are mutually reinforcing, not mutually exclusive. In line with the principle of data minimisation, data controllers should ask what is the minimum data necessary to meet a given security objective.”

If you start by assuming that privacy and security form a “zero-sum game” (that is, that you can only increase one at the expense of decreasing the other), the result is a fundamental mis-framing of the question. Indeed, the logical extreme of this assumption is that, provided I am prepared to accept zero privacy, I can have 100% security, which is demonstrably nonsense. If I can’t communicate in privacy with my family, friends, bank or whoever, I am at risk from any bad actor in the system.
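
Audrey Plonk’s data-minimisation point can be made concrete in code. The sketch below is a minimal illustration in Python, assuming an invented “over 18” eligibility check as the security objective (my example, not one discussed on the panel): the raw attribute is used once to derive the single fact the check needs, and only that fact is retained.

    from datetime import date

    def is_over_18(date_of_birth: date, today: date) -> bool:
        # Derive only the fact the eligibility check actually needs.
        had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
        age = today.year - date_of_birth.year - (0 if had_birthday else 1)
        return age >= 18

    # Data minimisation: store the derived boolean, not the date of birth.
    record = {"user": "alice", "over_18": is_over_18(date(2001, 5, 4), date.today())}
    print(record)   # the date of birth is never retained

That is the question the principle asks of every design: what is the minimum data necessary to meet this objective, and can everything else be discarded at the point of use?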

2 – What factors shape the privacy landscape

Here, I will group some brief remarks under four headings:

  • Principles
  • Policy
  • Commerce
  • Technology

Principles

We often hear the phrase “users are happy (or at least willing) to trade privacy for convenience”. That may be what we observe, but if it is true, it suggests there may be a problem at a deeper level in people’s decision-making. I’d suggest that, if people are willing to trade privacy for convenience, they may be underestimating what it is that is threatened by a loss of privacy.

If the trade-off were re-phrased as follows, people might re-evaluate their choice: “people want convenience above all, even at the cost of their autonomy, self-determination and agency”. And yet, in giving up our personal data online, we frequently forfeit both choice and consent. When we forfeit those, we are, effectively, losing some of our autonomy and self-determination. We are losing the ability to express our intentions and preferences, and see them fulfilled. We are losing agency.

Policy

So much could be said under this heading, but I will just make one general observation and give one cautionary example. The general observation is that value increases as we progress from data to information to knowledge, but legislation and regulation are still framed in terms of data protection – which suggests to me that they are lagging behind reality. We can already see cases where protecting data may be necessary, but it is not sufficient as a means of protecting privacy.

The cautionary example I will give relates to the UK care.data programme, which set out to establish a centralised database of electronic health records by retrieving patient data from GPs’ surgeries.

The legislation for this programme established it on an “opted-in by default” basis. Individuals could opt out, in writing, if they did not wish to participate – but if they did nothing, their records would be retrieved and retained automatically, on an ongoing basis.

However, it recently emerged that the agency with responsibility for collecting the data had not acted on any of the 700,000 opt-out instructions. Its explanation was that it had not had the resources to do so (although it had had the resources to process some 60 million sets of health data).

In terms of “privacy by design” principles, this policy failed in several respects:

  • It was designed as “opted in by default” rather than “opted out by default”.
  • Presumably for this reason, no provision was made for consent – only a poorly communicated process for opting out.
  • The opt-out requests were not acted on.

Commerce

Looking at the continuing evolution of the Internet, one could argue that we have gone from the Industrial Age through the Information Age, and we are now in the Monetisation Age. A dominant factor (and very possibly the dominant factor) in the Internet’s evolution is the monetisation of personal data. This is such a powerful force that, like a spinning flywheel, it is practically impossible for an individual to slow it down or re-direct it.

This reflects the ongoing imbalance of power between individuals and service providers, particularly where the collection and processing of personal data are concerned.

Technology

The Internet (and the Internet of Things) fundamentally changes the way in which we think about assertions of identity. The traditional view of identity is that it is something conferred on you by a third party through a trustworthy process: you go through the process of establishing your uniqueness in a given population, and you are then issued with a credential which reflects that uniqueness, and which you can present later in support of your claim to be you. The classic examples of this are a passport, a driving licence, or a user ID and password.

In each case, there is something about the credential that binds it – more or less reliably – to the individual. At each subsequent validation of the credential, we are depending on a “chain of trust” that leads back to the original trustworthy process.

But that isn’t how the Internet works. On the Internet, there may be no historical “trusted ceremony”; an individual might be identified by reference to multiple, low-trust pieces of data from multiple sources, rather than to a single, high-trust credential as in the traditional model. If you collect enough of the attributes that surround an individual, you can either tell that you are dealing with the same person as before, or work out who all the attributes belong to: who is at the centre of the cloud of attributes. This is still a historical model, in the sense that it is based on the user’s past behaviour, but it is less linear than the traditional model, and more likely to rely on multiple sources than on a single one.
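
As a toy illustration of the attribute-based model (the attribute names and values below are invented for the purpose), consider a handful of observations, none of which identifies anyone on its own, chained together wherever they overlap:

    # Each observation is a low-trust fragment from a different source.
    observations = [
        {"postcode": "SW1A 1AA", "browser": "fx/102"},
        {"browser": "fx/102", "card_last4": "4242"},
        {"card_last4": "4242", "email_hash": "ab12"},
    ]

    profile = {}
    for obs in observations:
        # Link any observation that shares an attribute value with what has
        # been accumulated so far, and absorb its remaining attributes.
        if not profile or any(profile.get(k) == v for k, v in obs.items()):
            profile.update(obs)

    print(profile)   # four attributes from three sources, one individual

Real aggregators work with far more data and with probabilistic rather than exact matching, but the underlying move is the same: transitive linking across overlapping attributes until someone emerges at the centre of the cloud.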

If the metaphor for data monetisation is a flywheel, the metaphor for identity might be a gravity well. Whether identity is established by the traditional, credential-based method or by the modern, attribute-based one, a mass of data accumulates around the individual, creating a “gravity well” of identity data from which the user cannot easily escape. That mass might be a single, “high-mass” (i.e. high-trust) credential, or a cumulative multitude of smaller pieces of data, less trustworthy individually but massive in aggregation. The term aggregation is significant, because there is a powerful economic market in the collection, aggregation and re-sale of masses of small items of personal data.

The implications of the Internet of Things for this model are profound. IoT promises to generate data about us on an unprecedented scale, and in unprecedented detail and intimacy. For organisations whose existing business model is based on the monetisation of personal data, the prospect is alluring: a massive increase in monetisable data, generated by devices over many of which the user has no explicit means of control. Conversely, the prospect for users is worrying, because it increases the power driving the monetisation flywheel, at the same time as reducing their ability to express or enforce their preferences regarding the use of the data.

3 – An ISOC Trust and Identity perspective

Identity, privacy and security are related factors underpinning our trust in the Internet… but there are others. Our team is participating in a “vectors of trust” (VoT) discussion group, under the aegis of the IETF, to examine and describe these various trust factors, and the ways in which trust is affected by their presence or absence.
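
To give a flavour of the “vector” idea – and to be clear, the component names, values and syntax below are placeholders of my own, not the discussion group’s specification – the point is that trust is expressed as separate components, each evaluated on its own, rather than collapsed into a single score:

    # Toy vector-of-trust: orthogonal components, evaluated independently.
    # Placeholder meanings: P = identity proofing, C = credential strength,
    # A = assertion quality.
    def parse_vector(vector: str) -> dict:
        return {item[0]: item[1:] for item in vector.split(".")}

    def meets(vector: str, policy: dict) -> bool:
        # A relying party sets a floor for each component it cares about;
        # components it does not mention are simply not considered.
        components = parse_vector(vector)
        return all(components.get(k, "0") >= floor for k, floor in policy.items())

    print(meets("P1.C2.A1", {"P": "1", "C": "2"}))   # True
    print(meets("P0.C2.A1", {"P": "1"}))             # False: weak proofing

A single scalar would hide exactly the distinctions – was the user well proofed? is the credential strong? – that the second check surfaces.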

We are also looking at the factors involved in users’ privacy decisions. In any decision-making cycle, human beings go through several phases, usually including contemplation, enquiry, evaluation, decision, action and reinforcement. This process may or may not be conscious, in part or in whole. Our hypothesis is that users can be encouraged to form better and more sustainable privacy habits, but that this is likely to require different forms of intervention at different stages of the cycle.

And we are researching the principles and practice of ethical data-handling, with a view to producing practical guidance that takes data controllers beyond current risk- and compliance-based approaches, to one that captures the real reason why good data-handling is desirable.

As Michelle Dennedy (CPO, Intel Security) puts it:

“If you focus on compliance, you will fail.

If you focus on people, you will succeed.”
