Three approaches to mitigating hidden bias in machine learning

Examples

A paper by Michael Veale (UCL) and Reuben Binns (Oxford), "Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data", proposes three approaches to dealing with hidden bias and unfairness in algorithmic machine learning systems, which often stem from biases in the historical data used to train them. In the first, trusted third parties selectively hold the sensitive data needed to audit systems for discrimination and to incorporate fairness constraints, while preserving the privacy of the people concerned. In the second, collaborative online platforms allow diverse organisations to share knowledge and promote fairness; these could be relatively open and trust-based, like Wikipedia, or controlled and verified by third-party gatekeepers such as NGOs or sectoral regulators. In the third, intended for situations where data on protected characteristics is difficult to obtain, such as systems that judge people via environmental sensors, unsupervised learning can reveal the structure of correlations within the data, which can then be used to build hypotheses about fairness (a sketch of this idea follows below). The authors note that these proposals will need further study in real-world settings.
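To make the third approach concrete, here is a minimal sketch of how unsupervised learning could surface fairness hypotheses when no protected-attribute data is available. It clusters the input features without labels, then compares the model's positive-decision rate across clusters; a large gap flags a group worth investigating. All names, thresholds, and the synthetic data are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: cluster inputs, then look for clusters whose
# decision rates diverge from the overall rate. Such clusters are candidate
# hypotheses about groups the system may treat differently.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def flag_suspect_clusters(X, y_pred, n_clusters=5, gap_threshold=0.1, seed=0):
    """Return (cluster_id, positive rate) pairs whose positive-decision
    rate differs from the overall rate by more than gap_threshold."""
    X_scaled = StandardScaler().fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(X_scaled)
    overall_rate = y_pred.mean()
    flagged = []
    for c in range(n_clusters):
        members = y_pred[labels == c]
        if members.size == 0:
            continue
        rate = members.mean()
        if abs(rate - overall_rate) > gap_threshold:
            flagged.append((c, rate))
    return flagged

# Synthetic example: 500 cases, 4 sensor-like features; decisions are
# driven by the first feature, so clusters separated on it get flagged.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y_pred = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
print(flag_suspect_clusters(X, y_pred))
```

A flagged cluster is not evidence of discrimination on its own; it is only a starting point for the kind of targeted follow-up audit the paper's other approaches would support.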


https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3060763