Towards Ethical Machine Learning

We must understand this data in order to move away from these biases rather than perpetuate them.

One step toward reducing bias in machine learning algorithms is the Algorithmic Impact Assessment (AIA), proposed by New York University's AI Now Institute. AIAs stem from the idea that the "black box" methodology creates a vicious cycle: the further we move from understanding these algorithms, the less able we are to address any issues that arise from them. AI Now suggests AIAs as a way to govern the use of machine learning in the public realm by creating a set of standard requirements. Through AIAs, AI Now aims to provide clarity to the public by publicly listing and explaining the algorithmic systems in use, allowing the public to dispute those systems, developing audit and assessment processes, and increasing public agencies' internal capacity to understand the systems they rely on.

In a similar push for transparency in machine learning, the Defense Advanced Research Projects Agency (DARPA) proposes Explainable Artificial Intelligence (XAI) as part of the solution. As DARPA's graphic illustrates, XAI strives to produce more explainable models that users can understand and trust. Though there do not yet appear to be any clear, concise descriptions of XAI on DARPA's website, it notes that XAI prototypes are constantly tested, with Phase 1 system evaluations targeted for completion this past November. The site also states, "At the end of the program, the final delivery will be a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems."

AIAs and XAI are only two examples of organizations working toward more ethical, transparent machine learning models. As machine learning continues its explosive growth, more ideas will certainly be introduced to ensure such regulation. Regardless of the specifics behind these ideas, it is important to maintain a system of transparency and comprehension around machine learning models, in which the data is scrutinized at every stage of the machine learning process to ensure fair practices that do not perpetuate biases.

Sources:

- The New Science of Sentencing (www.themarshallproject.org)
- Should Prison Sentences Be Based On Crimes That Haven't Been Committed Yet? (fivethirtyeight.com)
- Yes, U.S. locks people up at a higher rate than any other country (www.washingtonpost.com)
- A Guide to Solving Social Problems with Machine Learning (hbr.org)
- Bail Reform and Risk Assessment: The Cautionary Tale of Federal Sentencing (harvardlawreview.org)
- Pretrial Risk Assessment Now Available to All Interested Jurisdictions; Research Advisory Board… (www.arnoldfoundation.org)
- Algorithmic Risk Assessments and the Double-Edged Sword of Youth, by Megan T.
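To make the idea of an "explainable" model a little more concrete, one simple post-hoc technique is permutation importance: shuffle one feature's values across the dataset and measure how much the model's accuracy drops, revealing which inputs the model actually relies on. The sketch below uses a toy "black-box" model and made-up data purely for illustration; it is a generic technique, not DARPA's XAI toolkit or AI Now's assessment process.

```python
import random

# Toy "black-box" model: predicts 1 when the first feature exceeds a
# threshold, and deliberately ignores the second feature.
def black_box(x):
    return 1 if x[0] > 0.5 else 0

# Small synthetic dataset of (features, label) pairs (values are made up).
data = [([0.9, 0.1], 1), ([0.8, 0.7], 1), ([0.2, 0.9], 0), ([0.1, 0.3], 0),
        ([0.7, 0.2], 1), ([0.3, 0.8], 0), ([0.6, 0.6], 1), ([0.4, 0.4], 0)]

def accuracy(model, rows):
    """Fraction of rows the model classifies correctly."""
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(model, rows, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    shuffled = [x[feature_idx] for x, _ in rows]
    rng.shuffle(shuffled)
    permuted = [(x[:feature_idx] + [v] + x[feature_idx + 1:], y)
                for (x, y), v in zip(rows, shuffled)]
    return accuracy(model, rows) - accuracy(model, permuted)

for i, name in enumerate(["feature_0", "feature_1"]):
    print(name, permutation_importance(black_box, data, i))
```

Because the toy model never reads the second feature, its permutation importance comes out to exactly zero, while shuffling the first feature can only hurt accuracy. The same probing idea scales to real black-box systems, which is one reason audit processes like those AIAs call for are feasible even without access to a model's internals.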
