FairWare 2018 keynote speakers announced:

Aws Albarghouthi

University of Wisconsin-Madison

Julia Stoyanovich

Drexel University

Ricardo Silva

University College London

Modern software is full of examples of bias. The IEEE/ACM International Workshop on Software Fairness (FairWare 2018) invites academics, practitioners, and policy makers interested in the software engineering aspects of fairness in software to contribute and attend.

What is fairness testing, and what real-world problems does it solve? Watch this video (or read this paper).
Who should attend? You! We welcome attendees who want to learn about cutting-edge work on software fairness and who want to be part of the conversation as this important research field gains momentum. You don't have to be a researcher, and if you are, you don't have to have worked on this problem before. The primary goal of FairWare 2018 is to draw attention to the important problem of fairness in software and to spark research in this under-explored area. As a result of the workshop, we will compile a research roadmap paper for the community interested in advancing fairness. The roadmap will be based on discussions at the workshop and will serve as a guide for researchers joining and advancing the community's mission.


08:45 – 09:00 Welcome
Yuriy Brun (University of Massachusetts Amherst)
09:00 – 09:45
  • Keynote: Follow the data! Algorithms and systems for responsible data science
    Julia Stoyanovich (Drexel University)

    Abstract: In this talk, I will give an overview of the Data, Responsibly project that frames managing data in accordance with ethical and moral norms, and legal and policy considerations, as a core system requirement. I will highlight some of our recent algorithmic advances, and will discuss the overall framework in which responsibility properties are managed and enforced through all stages of the data lifecycle. I will motivate and ground our technical work in the #NYCAlgorithmicTransparency effort, which was spurred by a recently passed New York City Algorithmic Transparency Law.

    Bio: Julia Stoyanovich is an Assistant Professor of Computer Science at Drexel University, and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University. Before joining Drexel in 2012, Julia was a postdoctoral researcher and an NSF/CRA Computing Innovations Fellow at the University of Pennsylvania. Julia's research focuses on management and analysis of preference data, on querying large evolving graphs, and on responsible data management and analysis practices: operationalizing fairness, diversity, transparency, and data protection in all stages of the data acquisition and processing lifecycle. She established the Data, Responsibly consortium, co-organized a Dagstuhl seminar by the same name, served on the ACM task force to revise the Code of Ethics and Professional Conduct, and is active in the New York City algorithmic transparency effort. Julia's research has been supported by the US National Science Foundation, the US-Israel Binational Science Foundation (BSF), and by Google. She is a recipient of an NSF CAREER Award.

09:50 – 10:10 A roadmap for ethics-aware software engineering
Fatma Başak Aydemir and Fabiano Dalpiaz (Utrecht University)
10:10 – 10:30 Classification with probabilistic fairness guarantees
Philip Thomas and Stephen Giguere (University of Massachusetts Amherst)
10:30 – 11:00 Break
11:00 – 11:45
  • Keynote: Program fairness through the lens of formal methods
    Aws Albarghouthi (University of Wisconsin-Madison)

    Abstract: Software has become a powerful arbitrator of a range of significant decisions with far-reaching societal impact: hiring, welfare allocation, prison sentencing, and policing, among many others. With the range and sensitivity of algorithmic decisions expanding by the day, the problem of understanding the nature of program discrimination, bias, and fairness is a pressing one. In this talk, I will describe our work on the FairSquare project, in which we are developing program verification and synthesis tools aimed at rigorously characterizing and reasoning about fairness of decision-making programs.

    Bio: Aws Albarghouthi is an assistant professor of computer science at the University of Wisconsin-Madison. He studies software verification and synthesis, recently with a focus on fairness and privacy.

11:50 – 12:10 Integrating social values into software design patterns
Waqar Hussain, Davoud Mougouei, and Jon Whittle (Monash University)
12:10 – 12:30 Fairness definitions explained
Sahil Verma (IIT Kanpur India) and Julia Rubin (University of British Columbia)
12:30 – 13:50 Lunch
14:00 – 14:45
  • Keynote: Counterfactual reasoning in algorithmic fairness
    Ricardo Silva (University College London)

    Abstract: We propose that intuitive notions of fairness must be formalized within a causal model: ultimately, we state that discrimination with respect to a demographic group happens if belonging to that group is a cause of the outcome. In particular, the counterfactual "had I been from a different group, my outcome would have been different" is taken as the central motivation. We discuss how to cast assumptions about the world and society in a causal language developed in the artificial intelligence and statistics literature, and how this relates to existing (non-causal) notions of fairness. Problems of prediction and action are discussed, as well as how to mitigate the reliance on the strong assumptions necessary for counterfactual modeling.

    Joint work with Matt Kusner, Chris Russell, and Joshua Loftus.

    Bio: Ricardo Silva is a Senior Lecturer (Associate Professor) in the Department of Statistical Science, UCL, a Turing Fellow at the Alan Turing Institute, and Adjunct Faculty of the Gatsby Computational Neuroscience Unit, UCL. Prior to this position, Ricardo received his PhD from the Machine Learning Department at Carnegie Mellon University in 2005. He also held postdoctoral positions at the Gatsby Unit and at the Statistical Laboratory, University of Cambridge.

14:50 – 15:10 Model-based discrimination analysis: A position paper
Qusai Ramadan, Amir Shayan Ahmadian, Daniel Strüber, Jan Jürjens, and Steffen Staab (University of Koblenz-Landau)
15:10 – 15:30 Avoiding the intrinsic unfairness of the trolley problem
Tobias Holstein (Mälardalen University) and Gordana Dodig Crnkovic (Chalmers University of Technology)
15:30 – 16:00 Break
16:00 – 16:20 On fairness in continuous electronic markets
Hayden Melton (Thomson Reuters)
16:20 – 17:30 Panel: IEEE P7003 standard for algorithmic bias considerations
Ansgar Koene (University of Nottingham), Liz Dowthwaite (University of Nottingham), and Suchana Seth (Harvard University)
17:30 – 17:45 Closing remarks

FairWare organization is supported partially by the National Science Foundation under Grant No. CNS-1744471. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the organizers and do not necessarily reflect the views of the National Science Foundation.