Source: Information Management | May 17, 2019
Author: Sanjay Srivastava

For most large enterprise leaders, applying Artificial Intelligence (AI) to transform their business is not a question of if, but when.
Almost all Fortune 500 companies are seeing applications of AI that will fundamentally change the way they manufacture products, deliver goods and services, hire employees or delight customers.
As AI becomes increasingly involved in our personal and professional lives, governments and enterprises alike have started to establish ethical frameworks for its use, such as the American AI Initiative and the Algorithmic Accountability Act in the US, and the EU guidelines for evaluating AI applications in areas such as fairness, transparency, security, and accountability.
All of these initiatives underscore the need for enterprises to establish their own ethical frameworks for AI.
Such frameworks ensure that AI continues to lead to the best decisions, without unintended consequences or misuse of data and analytics.
Ethical use can help build trust between consumers and organizations, which benefits not only AI adoption, but also brand reputation.
In the development of ethical frameworks for AI, we need to factor in the following principles:

Intended use

One of the most important questions to ask when developing an AI application is, “Are we deploying AI for the right reasons?”
You can use a hammer to build a house or you can use it to hit someone.
Just like a hammer, an AI tool is neither good nor bad.
It’s how you use it that can become a problem.
AI can do a lot of good: it can improve and speed up the decision-making process for approving a loan, an insurance claim or a hire, which leads to more positive customer experiences.
In a previous article, I discussed how HR departments can use AI to review job descriptions to prevent bias and be more inclusive in the hiring process.
Enterprises should incorporate an initial ethical evaluation of the intended use before rolling out an AI initiative, and continuously monitor their models to ensure they do not drift toward unethical uses.
The intended use, as well as relevant data used to feed algorithms and outcomes, should also be fully transparent to the people impacted by the machines’ recommendations.
In California, a new law taking effect in July 2019 requires chatbots to disclose that they are automated systems, so as not to mislead users.
Beyond simple disclosure, explainability is increasingly required, especially in regulated industries.