The future is bright, the future is AI, or so the headlines would have us believe. There is no doubt that AI will bring remarkable advances to medicine and many other sectors. One of the best examples I have heard of is an app that scans moles and, using a database of thousands of scans and their diagnoses, predicts which moles are high risk.
When it comes to recruiting, we have a major challenge to overcome, and it is all about where the data AI uses comes from. Biased data in means biased decisions out: as machine learning takes place, the model keeps refining the flaws and accentuating the errors, as Google found out recently. There was no intent, but the starting point was wrong.
This article suggests that one of the root causes is the lack of diversity among the AI designers who create these projects, which implies there is a long way to go before we see truly unbiased algorithms in the hiring world.
AI Now wrote: “Large-scale AI systems are developed almost exclusively in a handful of technology companies and a small set of elite university laboratories, spaces that in the West tend to be extremely white, affluent, technically oriented and male.” The rise in cases of AI bias is not surprising. Amazon was forced to scrap a resumé-scanning tool that downgraded CVs containing the names of all-female universities, or even the word “women’s”. This was not malice, but it gets to the crux of the issue. Most AI is really machine learning — mathematical models developed by machines to pick out patterns from vast piles of data. These models can then be used for specific, narrow tasks, such as predicting when a jet engine will fail, or whether a person is likely to develop Alzheimer’s.
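To make the mechanism concrete, here is a minimal, hypothetical sketch (the data, keywords, and scoring scheme are all invented for illustration, and bear no relation to Amazon's actual system): a scorer that learns per-keyword hire rates from past human decisions will faithfully reproduce whatever bias those decisions already contain.

```python
from collections import Counter

# Hypothetical historical hiring data: (CV keywords, hired?) pairs.
# The labels reflect past human decisions, which in this toy data are
# biased: CVs mentioning "womens" were never approved.
history = [
    ({"java", "leadership"}, 1),
    ({"java", "womens"}, 0),
    ({"python", "chess"}, 1),
    ({"python", "womens"}, 0),
    ({"java", "chess"}, 1),
    ({"leadership", "womens"}, 0),
]

def train(history):
    """Learn a per-keyword hire rate from past decisions."""
    seen, hired = Counter(), Counter()
    for keywords, label in history:
        for kw in keywords:
            seen[kw] += 1
            hired[kw] += label
    return {kw: hired[kw] / seen[kw] for kw in seen}

def score(model, keywords):
    """Average the learned hire rates over a new CV's keywords."""
    rates = [model[kw] for kw in keywords if kw in model]
    return sum(rates) / len(rates) if rates else 0.5

model = train(history)
# Two otherwise identical CVs: the word "womens" alone drags the
# second score down, because the model learned the past bias.
print(score(model, {"java", "chess"}))
print(score(model, {"java", "chess", "womens"}))
```

Nothing in the code mentions gender; the "pattern" the model picks out is simply that one word correlates with rejection in its training pile, which is exactly how a biased starting point becomes a biased algorithm.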