Krishna Gummadi awarded ERC Advanced Grant

Krishna Gummadi, head of the MPI-SWS Networked Systems group, has been awarded an ERC Advanced Grant. Over the next five years, his project "Foundations of Fair Social Computing" will receive 2.49 million euros, allowing his group to develop the foundations of fair social computing.

In the most recent Advanced Grant round, a total of 2,167 research proposals were submitted to the ERC, of which only 12% were selected for funding. The sole selection criterion is scientific excellence.

Summary of the Fair Social Computing project proposal

Social computing represents a societal-scale symbiosis of humans and computational systems, where humans interact via and with computers, actively providing inputs to influence—and in turn being influenced by—the outputs of the computations. Social computations impact all aspects of our social lives, from what news we get to see and who we meet to what goods and services are offered at what price and how our creditworthiness and welfare benefits are assessed. Given the pervasiveness and impact of social computations, it is imperative that social computations be fair, i.e., perceived as just by the participants subject to the computation. The case for fair computations in democratic societies is self-evident: when computations are deemed unjust, their outcomes will be rejected and they will eventually lose their participants.

Recently, however, several concerns have been raised about the unfairness of social computations pervading our lives, including

  1. the existence of implicit biases in online search and recommendations,
  2. the potential for discrimination in machine-learning-based predictive analytics, and
  3. a lack of transparency in algorithmic decision making, with systems providing little to no information about which sensitive user data they use or how they use them.

Given these concerns, we need reliable ways to assess and ensure the fairness of social computations. However, it is currently not clear how to determine whether a social computation is fair, how to compare the fairness of two alternative computations, how to adjust a computational method to make it fairer, or how to construct a fair method by design. This project will tackle these challenges in turn. We propose a set of comprehensive fairness principles and will show how to apply them to social computations. In particular, we will operationalize fairness, so that it can be measured from empirical observations. We will show how to characterize which fairness criteria are satisfied by a deployed computational system. Finally, we will show how to synthesize non-discriminatory computations, i.e., how to learn an algorithm from training data that satisfies a given fairness principle.
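
To make the idea of measuring fairness from empirical observations concrete, the minimal Python sketch below computes one well-known criterion, the demographic parity gap, over observed binary decisions grouped by a sensitive attribute. The criterion, the toy data, and the function name are illustrative assumptions for this summary, not the project's actual formalization.

```python
# Minimal sketch (illustrative, not the project's method): measure one
# fairness criterion -- the demographic parity gap -- from empirical
# observations of a system's binary decisions.

from typing import Sequence


def demographic_parity_gap(decisions: Sequence[int], groups: Sequence[str]) -> float:
    """Largest difference in favourable-decision rates between any two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())


# Hypothetical observations: 1 = favourable outcome; "a" and "b" are
# values of a sensitive attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(f"demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
# A gap of 0.00 would satisfy this particular criterion exactly; here it is 0.20.
```

Characterizing a deployed system against several such criteria, and learning models constrained to satisfy a chosen criterion, would build on measurements of this kind.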