Sorelle Friedler
Sorelle Friedler's posts

Post has shared content
I have mixed feelings about such efforts. Removing human bias is good. But replacing it with another form of bias that we understand even less is not great. Pretending that the human bias has been replaced by objectivity is even worse.

Post has shared content
See me on Al Jazeera English on Tuesday, July 21 at 1:30pm MT.

If you need some entertainment around 1:30pm Mountain time on Tuesday, July 21, you can watch me attempt not to make a fool of myself on Al Jazeera's show The Stream while talking about algorithms that might or might not discriminate.

Post has attachment
We hope you can join us for this upcoming workshop!

ICML Workshop on Fairness, Accountability, and Transparency in Machine Learning
Saturday, July 11th, 2015 - Lille, France
www.fatml.org

This interdisciplinary workshop will consider issues of fairness, accountability, and transparency in machine learning. It will address growing anxieties about the role that machine learning plays in consequential decision-making in such areas as commerce, employment, healthcare, education, and policing.

Invited Speakers:
Nick Diakopoulos --- Algorithmic Accountability and Transparency in Journalism
Sara Hajian --- Discrimination- and Privacy-Aware Data Mining
Salvatore Ruggieri --- Privacy Attacks and Anonymization Methods as Tools for Discrimination Discovery and Fairness
Toshihiro Kamishima and Kazuto Fukuchi --- Future Directions of Fairness-Aware Data Mining: Recommendation, Causality, and Theoretical Aspects

Accepted Papers:
Muhammad Bilal Zafar, Isabel Valera Martinez, Manuel Gomez Rodriguez, and Krishna Gummadi --- Fairness Constraints: A Mechanism for Fair Classification
Benjamin Fish, Jeremy Kun, and Ádám D. Lelkes --- Fair Boosting: A Case Study
Zubin Jelveh and Michael Luca --- Towards Diagnosing Accuracy Loss in Discrimination-Aware Classification: An Application to Predictive Policing
Indrė Žliobaitė --- On the Relation between Accuracy and Fairness in Binary Classification

Closing Panel Discussion:
Fernando Diaz, Sorelle Friedler, Mykola Pechenizkiy, Hanna Wallach, and Suresh Venkatasubramanian (Moderator)

Looking forward to seeing you in Lille!

The organizing committee,
Solon Barocas (General Chair), Princeton University
Sorelle Friedler (Program Chair), Haverford College
Moritz Hardt, Google
Josh Kroll, Princeton University
Carlos Scheidegger, University of Arizona
Suresh Venkatasubramanian, University of Utah
Hanna Wallach, Microsoft Research and University of Massachusetts Amherst

Post has shared content
Do you want machine learning to be fair? Accountable to more than its masters? And transparent for all to understand and interpret? Submit your abstracts TODAY for the ICML workshop on Fairness, Accountability and Transparency in Machine Learning (FATML)!