Research Spotlight: A General Data Anonymity Measure

A long-standing problem, both in research and in society generally, is how to analyze data about people without risking those people's privacy. There is an ever-growing amount of data about people: medical, financial, social, government, geo-location, and so on. This data is very valuable for better understanding ourselves. Unfortunately, analyzing the data in its raw form carries the risk of exposing private information about the individuals in it.

The problem of how to analyze data while protecting privacy has been around for more than 40 years—ever since the first data processing systems were developed. Most workable solutions are ad hoc: practitioners try things like removing personally identifying information (e.g. names and addresses), aggregating data, removing outlying data, and even swapping some data between individuals. This process can work reasonably well, but it is time-consuming, requires substantial expertise to get right, and invariably limits the accuracy of the analysis or the types of analysis that can be done.
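As a toy illustration of this kind of ad hoc de-identification, consider the sketch below; the records and rules are invented for the example, not drawn from any real workflow:

```python
# Toy sketch of ad hoc de-identification: drop direct identifiers,
# then coarsen quasi-identifiers by aggregation. The records and
# rules are invented for illustration only.

records = [
    {"name": "Alice", "age": 34, "zip": "66123", "diagnosis": "flu"},
    {"name": "Bob",   "age": 47, "zip": "66125", "diagnosis": "asthma"},
]

def deidentify(record):
    out = dict(record)
    del out["name"]                             # remove personally identifying info
    out["age"] = f"{(out['age'] // 10) * 10}s"  # aggregate exact age into a decade
    out["zip"] = out["zip"][:3] + "xx"          # generalize location
    return out

print([deidentify(r) for r in records])
# [{'age': '30s', 'zip': '661xx', 'diagnosis': 'flu'}, ...]
```

Even this simple example hints at the costs mentioned above: deciding which fields to drop or coarsen takes expertise, and every generalization removes detail an analyst might have needed.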

A holy grail within computer science is an anonymization system that offers formal guarantees of anonymity while still providing good analytics. Most effort in this direction has focused on two ideas: k-anonymity and differential privacy. Both can provide strong anonymity, but except in rare cases neither can do so while still providing adequate analytics. As a result, common practice is still to use informal ad hoc techniques with weak anonymization, and to mitigate risk by, for instance, sharing data only with trusted partners.
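To make the trade-off concrete, here is a minimal sketch of a differentially private count, which protects individuals by adding calibrated random noise to an aggregate; the epsilon value and data are illustrative assumptions, not recommendations:

```python
# Minimal sketch of a differentially private count. A count query has
# sensitivity 1, so Laplace noise with scale 1/epsilon suffices; the
# epsilon value and data here are illustrative only.

import random

def dp_count(values, epsilon=0.5):
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return len(values) + noise

ages = [34, 36, 45, 52, 61]
print(dp_count([a for a in ages if a > 40]))  # true count 3, plus noise
```

The noise is what protects individuals, but it also degrades every answer, which is exactly the analytics cost referred to above.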

The European Union has raised the stakes with the General Data Protection Regulation (GDPR). The GDPR has strict rules on how personal data may be used, and threatens organizations that do not follow the rules with large fines. However, the GDPR says that if data is anonymous, then it is not considered personal and does not fall under the rules. Unfortunately, there are no precise guidelines for determining whether data is anonymous. Member states are expected to come up with certification programs for anonymity, but do not know how to do so.

This is where we come in. Paul Francis' group, in research partnership with the startup Aircloak, has been developing an anonymization technology called Diffix over the last five years. Diffix is an empirical technology, not a formal one, and so the question remains: “how anonymous is Diffix?” While it may not be possible to answer that question precisely, one way we try is through a bounty program: we pay attackers who can demonstrate how to compromise anonymity in our system. Last year we ran the first (and still only) bounty program for anonymity. The program was successful in that some attacks were found, and in the process of designing defensive measures, Diffix has improved.

In order to run the bounty program, we naturally needed a measure of anonymity so that we could decide how much to pay attackers. We designed a measure based on, among other things, how much confidence an attacker has in a prediction of an individual’s data values. At some point, we realized that our measure applies not just to attacks on Diffix, but to attacks on any anonymization system. We also realized that our measure might be useful in the design of certification programs for anonymity.
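One plausible way to operationalize that idea is to compare the attacker's prediction accuracy against a statistical baseline; the formula and names below are our illustrative assumptions for this sketch, not the official GDA Score definition:

```python
# Sketch of a confidence-based measure: how much does an attack improve
# on a naive statistical guess? The formula and names are illustrative
# assumptions, not the official GDA Score definition.

def confidence_improvement(attack_confidence, baseline):
    """attack_confidence: fraction of the attacker's predictions about
    individuals' data values that are correct (0.0-1.0).
    baseline: fraction a naive attacker gets right by guessing from
    public statistics alone (0.0-1.0).
    Returns 0.0 if the attack is no better than guessing, and 1.0 if
    it is always correct."""
    if baseline >= 1.0:
        return 0.0  # the value is effectively public; nothing to learn
    return max(0.0, (attack_confidence - baseline) / (1.0 - baseline))

# Example: an attack predicts a salary bracket correctly 80% of the
# time, when guessing the most common bracket is already right 40%.
print(confidence_improvement(0.80, 0.40))  # ~0.67
```

The appeal of a baseline-relative measure is that it rewards only what the attack actually learned from the anonymized data, not what anyone could have guessed anyway.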

We decided to develop a general score for anonymity, and to build tools that would allow anyone to apply the measure to any anonymization technology. The score is called the GDA Score, for General Data Anonymity Score.

The primary strength of the GDA Score is that it can be applied to any anonymization method, and therefore allows apples-to-apples comparisons. The primary weakness is that it is based on empirical attacks (real attacks against real systems), and therefore the score is only as good as the attacks themselves. If there are unknown attacks on a system, the score won’t reflect them, and may therefore make the system look more anonymous than it is.

Our hope is that over time enough attacks will be developed that we can have high confidence in the GDA Score. Towards that end, we’ve started the Open GDA Score Project: a community effort to provide software and databases in support of developing new attacks, and a repository where the scores can be viewed. We recently launched the project in the form of a website, www.gda-score.org, together with some initial tools and simple attacks. We will continue to develop tools and new attacks, but our goal is to attract broad participation from the community.

For more information, visit www.gda-score.org.