We program algorithms to give us exactly what we have asked for, so we shouldn’t be surprised when they do.
None of the issues mentioned in this article is inherent in machine learning algorithms themselves. Instead, issues arise from the way algorithms interact with society, and from the unintended consequences of those interactions. As such, putting ethical implications at the heart of the development of each new algorithm is vital.
One way to ensure this is by embracing public health models of governance, which treat issues as indicative of underlying drivers, rather than problems to be solved per se. Another would be to ensure algorithms can be adapted more readily to newer or better data, in ways that do not exaggerate historical patterns. We see this every day in the way AI at Spotify or Amazon quickly adapts recommendations to our latest searches.
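The idea of adapting to newer data without letting historical patterns dominate can be sketched with a simple exponential recency decay, in which each interaction's weight halves after a fixed interval. This is a minimal illustration, not any specific platform's method; the `half_life_days` parameter and the item names are hypothetical.

```python
import math
from datetime import datetime, timedelta

def decayed_scores(interactions, now, half_life_days=30.0):
    """Score items so recent signals outweigh old ones.

    interactions: list of (item, timestamp) pairs.
    Each interaction's weight halves every `half_life_days`,
    so stale patterns fade rather than compound.
    """
    scores = {}
    for item, timestamp in interactions:
        age_days = (now - timestamp).total_seconds() / 86400
        weight = 0.5 ** (age_days / half_life_days)
        scores[item] = scores.get(item, 0.0) + weight
    return scores

# Hypothetical usage: one click yesterday outweighs one a year ago.
now = datetime(2024, 1, 1)
interactions = [
    ("old_favourite", now - timedelta(days=365)),
    ("new_interest", now - timedelta(days=1)),
]
scores = decayed_scores(interactions, now)
```

Here a single recent interaction scores higher than a single year-old one, so the system tracks current behaviour instead of entrenching history; the half-life controls how quickly the past is discounted.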
Finally, targeted research that identifies individual problems and their solutions is critical to any effort to create more ethical AI. We need to see more resources, and more senior leadership attention, directed at ensuring algorithms do not harm individuals or society. Just as data privacy and cybersecurity have moved from departmental to board-level issues, responsible governance of AI must be quickly elevated in importance by every organization that uses it.