There are certainly limitations to human decision making, and when a model is seeded with human decisions, it doesn't have the guile to hide the statistics behind those decisions. In this example, the AI surfaced a statistically significant pattern: Amazon's hiring decisions were heavily tied to gender, even going so far as to score women's-only colleges as undesirable.
In addition, it sounds like the model Amazon built for recruitment wasn't able to reliably surface qualified candidates for positions, so I'm surprised it was rolled out to recruiters at all. I know of a statistician who, from the 1970s through the 2000s, would testify in court about whether such patterns in corporate decisions were statistically significant in discrimination cases. The algorithm Amazon produced is really no different in terms of how discrimination is measured, so it's hard for me to understand how this wasn't caught before the recommendation engine was given to recruiters. Obviously, this is not the story that Amazon would like leaked under any circumstances.
Lessons to learn here:
- When building models to mimic current human decisions, be aware that human discrimination may become part of the model.
- Before rolling a model out to people as a recommendation aid, validate that what it uses to make recommendations fits your plans for the future (a minimal sketch of such a check follows below).
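As a rough illustration of that second lesson, here is a minimal sketch of a pre-rollout bias check: compare the model's recommendation rates across gender groups and test whether the gap is statistically significant, in the same spirit as the courtroom statistical analyses mentioned above. The counts and thresholds are hypothetical, not anything from the Amazon case.

```python
# Minimal sketch: does the recommender select candidates at significantly
# different rates across gender groups? All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: gender group; columns: [recommended, not recommended]
contingency = [
    [480, 1520],   # men:   480 of 2000 resumes recommended
    [310, 1690],   # women: 310 of 2000 resumes recommended
]

chi2, p_value, dof, expected = chi2_contingency(contingency)

# Selection-rate ratio, the "four-fifths rule" heuristic often used in
# employment-discrimination analysis: flag if one group's selection rate
# falls below 80% of another group's.
rate_men = 480 / 2000
rate_women = 310 / 2000
impact_ratio = rate_women / rate_men

print(f"chi-square p-value: {p_value:.4g}")
print(f"selection-rate ratio (women/men): {impact_ratio:.2f}")

if p_value < 0.05 or impact_ratio < 0.8:
    print("Recommendation rates differ by gender -- investigate before rollout.")
```

A check like this only catches disparities you think to look for, which is exactly why validating the model's inputs against your intentions matters before anyone acts on its recommendations.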