Research on bias/prejudice/discrimination in data mining

This is the instructor's direction for the research. He gave us an example in a video and an article, and he wants another example of bias/prejudice/discrimination in data mining: Fighting Bias in Algorithms (TEDx) – watch this video as it is related to the discussion for this week: http://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms.html

Data mining algorithms are only as good as the data that is supplied to them – GIGO, Garbage In, Garbage Out, as the saying goes. It is becoming more and more evident that the biases and prejudices that exist in our society are now also becoming embedded in AI applications. Read the following article on the problems with facial recognition and what Joy Buolamwini, an MIT Ph.D. student, describes as the need for “algorithmic accountability.” She founded the Algorithmic Justice League as a result of her research.
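To make the GIGO point concrete, here is a minimal sketch (assuming Python with NumPy and scikit-learn; the scenario, feature names, and numbers are all made up for illustration, not taken from any real dataset) of how a model trained on biased historical labels simply reproduces that bias:

```python
# Minimal illustration: a classifier trained on skewed historical decisions
# learns the skew. All data here is synthetic and purely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature ("skill") and one protected attribute ("group", 0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: group 1 was approved less often at the same skill.
p_approve = 1 / (1 + np.exp(-(skill - 1.0 * group)))
label = rng.random(n) < p_approve

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The learned coefficient on the protected attribute is strongly negative:
print("coefficients [skill, group]:", model.coef_[0])

# At identical skill (skill = 0), predicted approval differs by group alone:
print("P(approve | skill=0, group=0):", model.predict_proba([[0, 0]])[0, 1])
print("P(approve | skill=0, group=1):", model.predict_proba([[0, 1]])[0, 1])
```

Nothing in the algorithm itself is unfair; the penalty on the protected attribute comes entirely from the skewed labels the model was trained on, which is why auditing training data (and the coefficients a model learns from it) matters as much as the choice of algorithm.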

Please research and find another example of bias/prejudice/discrimination in data mining. Provide a 1-2 paragraph summary and include the URL. Post your own thoughts on how we can combat bias creeping into the predictive algorithms we train.

Don’t forget the requirement to respond to at least one classmate’s posting.

This is the example article: https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html