Modern software is full of examples of bias. The IEEE/ACM International Workshop on Software Fairness (FairWare 2018) invites academics, practitioners, and policy makers interested in the software engineering aspects of fairness in software to contribute and attend.

What is fairness testing and what real-world problems does it solve? Watch this video (or read this paper).

Schedule

08:45 – 09:00 Message from the FairWare 2018 Chairs (pdf) (slides)
Yuriy Brun (University of Massachusetts Amherst)
09:00 – 09:45
  • Keynote: Follow the data! Algorithms and systems for responsible data science (slides)
    Julia Stoyanovich (Drexel University)

    Abstract: In this talk, I will give an overview of the Data, Responsibly project that frames managing data in accordance with ethical and moral norms, and legal and policy considerations, as a core system requirement. I will highlight some of our recent algorithmic advances, and will discuss the overall framework in which responsibility properties are managed and enforced through all stages of the data lifecycle. I will motivate and ground our technical work in the #NYCAlgorithmicTransparency effort, which was spurred by a recently passed New York City Algorithmic Transparency Law.

    Bio: Julia Stoyanovich is an Assistant Professor of Computer Science at Drexel University, and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University. Before joining Drexel in 2012, Julia was a postdoctoral researcher and an NSF/CRA Computing Innovations Fellow at the University of Pennsylvania. Julia's research focuses on the management and analysis of preference data, on querying large evolving graphs, and on responsible data management and analysis practices: operationalizing fairness, diversity, transparency, and data protection in all stages of the data acquisition and processing lifecycle. She established the Data, Responsibly consortium, co-organized a Dagstuhl seminar by the same name, served on the ACM task force to revise the Code of Ethics and Professional Conduct, and is active in the New York City algorithmic transparency effort. Julia's research has been supported by the US National Science Foundation, the US-Israel Binational Science Foundation (BSF), and by Google. She is a recipient of an NSF CAREER Award.

09:50 – 10:10
  • A roadmap for ethics-aware software engineering (pdf) (slides)
    Fatma Başak Aydemir and Fabiano Dalpiaz (Utrecht University)

    Abstract: Today's software is highly intertwined with our lives, and it possesses an increasing ability to act and influence us. Besides the renowned example of self-driving cars and their potential harmfulness, more mundane software such as social networks can introduce bias, break privacy preferences, lead to digital addiction, etc. Additionally, the software engineering (SE) process itself is highly affected by ethical issues, such as diversity and business ethics. This paper introduces ethics-aware SE, a version of SE in which the ethical values of the stakeholders (including developers and users) are captured, analyzed, and reflected in software specifications and in the SE processes. We propose an analytical framework that assists stakeholders in analyzing ethical issues in terms of subject (software artifact or SE process), relevant value (diversity, privacy, autonomy, ...), and threatened object (user, developer, ...). We also define a roadmap that illustrates the necessary steps for the SE research and practice community in order to fully realize ethics-aware SE.

10:10 – 10:30
  • Classification with probabilistic fairness guarantees
    Philip Thomas and Stephen Giguere (University of Massachusetts Amherst)

    Abstract: We design a machine learning algorithm which guarantees that, with high probability, it will not exhibit unfair behavior. Importantly, here the users of our algorithm are free to use their preferred definitions of fairness. This allows our approach to benefit from and extend existing methods for defining and quantifying bias in software. To validate our method, we apply it to real data and show how different notions of fairness can be enforced.
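
    For concreteness, here is a minimal sketch of this style of guarantee (our illustration under stated assumptions, not the authors' implementation): a candidate classifier passes only if a (1 - delta) Hoeffding upper bound on a user-chosen unfairness measure, estimated on a held-out safety set, is at most zero. The function names and the per-sample scoring interface `g` are assumptions made for the example.

        import numpy as np

        def hoeffding_upper_bound(g_samples, delta, score_range=2.0):
            # One-sided (1 - delta) upper confidence bound on E[g], assuming
            # each sample of g lies in an interval of width `score_range`.
            n = len(g_samples)
            return np.mean(g_samples) + score_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

        def passes_safety_test(predict, X_safety, g, delta=0.05):
            # `g` maps the predictions on the safety set to per-sample
            # unfairness scores whose expectation must be <= 0 under the
            # user's chosen definition of fairness.
            scores = np.asarray(g(predict(X_safety)))
            return hoeffding_upper_bound(scores, delta) <= 0.0

    Under such a scheme, a classifier that fails the test would be withheld rather than deployed, which is what makes the guarantee high-probability rather than best-effort.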

10:30 – 11:00
Break
11:00 – 11:45
  • Keynote: Program fairness through the lens of formal methods
    Aws Albarghouthi (University of Wisconsin-Madison)

    Abstract: Software has become a powerful arbiter of a range of significant decisions with far-reaching societal impact: hiring, welfare allocation, prison sentencing, and policing, among many others. With the range and sensitivity of algorithmic decisions expanding by the day, the problem of understanding the nature of program discrimination, bias, and fairness is a pressing one. In this talk, I will describe our work on the FairSquare project, in which we are developing program verification and synthesis tools aimed at rigorously characterizing and reasoning about the fairness of decision-making programs.

    Bio: Aws Albarghouthi is an assistant professor of computer science at the University of Wisconsin-Madison. He studies software verification and synthesis, recently with a focus on fairness and privacy.

11:50 – 12:10
  • Integrating social values into software design patterns (pdf) (slides)
    Waqar Hussain, Davoud Mougouei, and Jon Whittle (Monash University)

    Abstract: Software Design Patterns (SDPs) are core solutions to the recurring problems in software. However, adopting SDPs without taking into account their value implications may result in breach of social values and ultimately lead to user dissatisfaction, lack of adoption, and financial loss. An example is the airline system that overcharged people who were trying to escape from Hurricane Irma. Although not intentional, oversight of social values in the design of the airline system resulted in significant customer dissatisfaction and loss of trust. To mitigate such value breaches in software design we propose taking social values into account in SDPs explicitly. To achieve this, we outline a collaborative framework that allows for (i) specifying the value implications of SDPs, (ii) developing or extending SDPs for integrating social values, (iii) providing guidance on the value-conscious adoption of design patterns, (iv) collecting and analyzing insights from collaborators, (v) maintaining an up-to-date library of the valued design patterns, and (vi) incorporating lessons learned from the real-world adoption of the valued design patterns into the proposed framework for its continuous improvement in integrating social values into software.

12:10 – 12:30
  • Fairness definitions explained (pdf) (slides)
    Sahil Verma (IIT Kanpur, India) and Julia Rubin (University of British Columbia)

    Abstract: Algorithm fairness has started to attract the attention of researchers in the AI, Software Engineering, and Law communities, with more than twenty different notions of fairness proposed in the last few years. Yet, there is no clear agreement on which definition to apply in each situation. Moreover, the detailed differences between multiple definitions are difficult to grasp. To address this issue, this paper collects the most prominent definitions of fairness for the algorithmic classification problem, explains the rationale behind these definitions, and demonstrates each of them on a single unifying case study. Our analysis intuitively explains why the same case can be considered fair according to some definitions and unfair according to others.
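
    For context (an addition for this page, not text from the paper), two of the most widely cited definitions for a binary classifier \hat{Y} with protected attribute A are:

        \[
          \text{Demographic parity:}\quad
          P(\hat{Y}=1 \mid A=0) \;=\; P(\hat{Y}=1 \mid A=1)
        \]
        \[
          \text{Equalized odds:}\quad
          P(\hat{Y}=1 \mid A=0,\, Y=y) \;=\; P(\hat{Y}=1 \mid A=1,\, Y=y)
          \quad \text{for } y \in \{0,1\}
        \]

    The paper's unifying case study illustrates how a single scenario can satisfy one such definition while violating another.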

12:30 – 13:50
Lunch
14:00 – 14:45
  • Keynote: Counterfactual reasoning in algorithmic fairness (slides)
    Ricardo Silva (University College London)

    Abstract: We propose that intuitive notions of fairness must be formalized within a causal model: ultimately, we state that discrimination with respect to a demographic group happens if membership in that group is a cause of the outcome one receives. In particular, the counterfactual "had I been from a different group, my outcome would have been different" is taken as the central motivation. We discuss how to cast assumptions about the world and society in a causal language developed in the artificial intelligence and statistics literature, and how this relates to existing (non-causal) notions of fairness. Problems of prediction and action are discussed, as well as how to mitigate the reliance on the strong assumptions necessary for counterfactual modeling.

    Joint work with Matt Kusner, Chris Russell, and Joshua Loftus.
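
    For reference (our addition), the counterfactual fairness criterion of Kusner et al. (2017), on which this talk builds: given a causal model with background variables U, a predictor \hat{Y} is counterfactually fair if, for every context X = x, A = a and every alternative group a',

        \[
          P\bigl(\hat{Y}_{A \leftarrow a}(U) = y \,\big|\, X = x,\, A = a\bigr)
          \;=\;
          P\bigl(\hat{Y}_{A \leftarrow a'}(U) = y \,\big|\, X = x,\, A = a\bigr)
          \quad \text{for all } y.
        \]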

    Bio: Ricardo Silva is a Senior Lecturer (Associate Professor) in the Department of Statistical Science, UCL, a Turing Fellow at the Alan Turing Institute, and Adjunct Faculty of the Gatsby Computational Neuroscience Unit, UCL. Prior to this position, Ricardo received his PhD from the Machine Learning Department at Carnegie Mellon University in 2005. He also held postdoctoral positions at the Gatsby Unit and at the Statistical Laboratory, University of Cambridge.

14:50 – 15:10
  • Model-based discrimination analysis: A position paper (pdf) (slides)
    Qusai Ramadan, Amir Shayan Ahmadian, Daniel Strüber, Jan Jürjens, and Steffen Staab (University of Koblenz-Landau)

    Abstract: Decision-making software may exhibit biases due to hidden dependencies between protected characteristics and the data used as input for making decisions. To uncover such dependencies, we propose the development of a framework to support discrimination analysis during the system design phase, based on system models and available data.

15:10 – 15:30
  • Avoiding the intrinsic unfairness of the trolley problem (pdf)
    Tobias Holstein (Mälardalen University) and Gordana Dodig Crnkovic (Chalmers University of Technology)

    Abstract: As an envisaged future of transportation, self-driving cars are being discussed from various perspectives, including social, economic, engineering, computer science, design, and ethical aspects. On the one hand, self-driving cars present new engineering problems that are gradually being solved. On the other hand, social and ethical problems have up to now been presented in the form of an idealized, unsolvable decision-making problem, the so-called "trolley problem", which is built on assumptions that are neither technically nor ethically justifiable. The intrinsic unfairness of the trolley problem comes from the assumption that the lives of different people have different values.

    In this paper, techno-social arguments are used to show the infeasibility of the trolley problem when addressing the ethics of self-driving cars. We argue that different components can contribute to "unfair" behavior and features, which requires ethical analysis on multiple levels and at multiple stages of the development process. Instead of an idealized and intrinsically unfair thought experiment, we present real-life techno-social challenges relevant to the domain of software fairness in the context of self-driving cars.

15:30 – 16:00
Break
16:00 – 17:00 Panel: Ansgar Koene (University of Nottingham) (slides), Julia Stoyanovich (Drexel University) (slides), and Yuriy Brun (University of Massachusetts Amherst)
  • IEEE P7003 standard for algorithmic bias considerations (pdf)
    Ansgar Koene (University of Nottingham), Liz Dowthwaite (University of Nottingham), and Suchana Seth (Harvard University)

    Abstract: The IEEE P7003 Standard for Algorithmic Bias Considerations is one of eleven IEEE ethics-related standards currently under development as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. The purpose of the IEEE P7003 standard is to provide individuals and organizations creating algorithmic systems with a development framework to avoid unintended, unjustified, and inappropriately differential outcomes for users. In this paper, we present the scope and structure of the IEEE P7003 draft standard, and the methodology of the development process.

17:00 – 17:15
Closing remarks
17:30 – 20:00
Reception (Congress Foyer): Mingle to the future: Automotive evening


Not presented: On Fairness in Continuous Electronic Markets by Hayden Melton (Thomson Reuters).

FairWare 2018 keynote speakers:

  • Aws Albarghouthi (University of Wisconsin-Madison)
  • Julia Stoyanovich (Drexel University)
  • Ricardo Silva (University College London)

Who should attend? You! We welcome attendees who want to learn about cutting-edge work on software fairness and who want to be part of the conversation as this important research field gains momentum. You don't have to be a researcher, and if you are, you don't have to have worked on this problem before. The primary goal of FairWare 2018 is to draw attention to the important problem of fairness in software and to spark research in this under-explored area. As a result of the workshop, we will compile a research roadmap paper for the community interested in advancing fairness. This roadmap will be based on discussions in the workshop and will serve as a guide for researchers to join and advance the community's mission.

The organization of FairWare is supported in part by the National Science Foundation under Grant No. CNS-1744471. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the organizers and do not necessarily reflect the views of the National Science Foundation.