The economic potential of AI is unquestionable, yet it remains largely unrealized by organizations, with an astonishing 87% of AI projects failing to succeed.
Some see this as a technology problem, others as a business problem, a cultural problem or an industry problem, but the latest evidence suggests that it is, above all, a trust problem.
According to recent research, nearly two-thirds of senior executives say that trust in AI drives revenue, competitiveness and customer success.
Trust is a difficult word to unpack when it comes to artificial intelligence. Can an AI system be trusted? If so, how? We don't trust people immediately, and we're even less likely to trust AI systems immediately.
However, a lack of trust in AI limits its economic potential, and many recommendations for building trust in AI systems have been criticized as too abstract or far-reaching to be practical.
It's time for a new "AI Trust Equation" focused on practical application.
The AI trust equation
The trust equation, a framework for building trust between people, was first proposed by David Maister, Charles Green and Robert Galford. The equation is: Trust = Credibility + Reliability + Intimacy, divided by Self-Orientation.
At a glance, you can see why this is an apt equation for building trust between humans, but it doesn't translate to building trust between humans and machines.
To build trust between humans and machines, the new AI Trust Equation is: Trust = Security + Ethics + Accuracy, divided by Control.
Security is the first step on the path to trust and consists of several key concepts that are well described elsewhere. In the context of building trust between humans and machines, it comes down to the question: "Will my information be safe if I share it with this AI system?"
Ethics is more complicated than security because it is a moral question, not a technical one. Before investing in an AI system, leaders should consider:
- How were people treated in the creation of this model, e.g., the Kenyan workers employed in creating ChatGPT? Is that something I/we feel comfortable supporting and building our solutions on?
- Is the model explainable? If it produces a harmful output, can I understand why? And can I do something about it (see Control)?
- Is there implicit or explicit bias in the model? This is a thoroughly documented problem, e.g., the Gender Shades research by Joy Buolamwini and Timnit Gebru, and Google's recent attempt to eliminate bias in its models, which instead produced ahistorical outputs.
- What is the business model of this AI system? Are those whose information and life’s work trained the model compensated when the model built on their work generates income?
- What are the values of the company that created this AI system, and how well do the actions of the company and its leadership align with those values? For example, OpenAI's recent decision to imitate Scarlett Johansson's voice without her consent shows a significant gap between OpenAI's stated values and Altman's decision to disregard her refusal to allow the use of her voice in ChatGPT.
Accuracy can be defined as the reliability with which an AI system provides correct answers to a range of questions in the course of its operation. This can be simplified to: "When I ask this AI a question based on my context, how useful is its answer?" The answer is directly tied to 1) the sophistication of the model and 2) the data it was trained on.
Control is at the heart of the conversation about trust in AI, ranging from the most tactical question, "Will this AI system do what I want it to do, or will it make a mistake?", to one of the most pressing questions of our time: "Will we ever lose control of intelligent systems?" In both cases, the ability to govern the actions, decisions and outcomes of AI systems sits at the core of whether we trust and adopt them.
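To make the equation concrete, here is a minimal sketch of how an organization might operationalize it as a scoring rubric. The 0-to-1 scales, the "control_gap" term and the scoring function are illustrative assumptions, not part of the equation as stated above; how you score each dimension is up to your own evaluation criteria.

```python
from dataclasses import dataclass

@dataclass
class TrustScorecard:
    """Illustrative rubric: each dimension is scored 0.0-1.0 by your evaluators.

    Higher security, ethics and accuracy raise trust; control_gap represents how
    far the system falls short of the control your organization requires
    (0.0 = full required control, 1.0 = no meaningful control), so a larger gap
    lowers trust.
    """
    security: float     # e.g., outcome of your security team's review
    ethics: float       # e.g., alignment with your ethical threshold
    accuracy: float     # e.g., measured answer quality on your own test set
    control_gap: float  # shortfall against the control you require

    def trust_score(self) -> float:
        # Trust = (Security + Ethics + Accuracy) / Control, expressed here so that
        # a larger control gap divides the numerator down toward zero.
        numerator = self.security + self.ethics + self.accuracy
        denominator = 1.0 + self.control_gap  # avoids division by zero
        return numerator / (3.0 * denominator)  # normalized to 0.0-1.0

# Hypothetical evaluation of a candidate AI platform
candidate = TrustScorecard(security=0.9, ethics=0.7, accuracy=0.8, control_gap=0.4)
print(f"Trust score: {candidate.trust_score():.2f}")  # ~0.57 on this rubric
```

A score like this is only as meaningful as the criteria behind each number, but it forces all four dimensions to be assessed explicitly rather than assumed.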
5 steps to using the AI trust equation
- Determine whether the system is useful: Before investing time and resources in determining whether an AI platform is trustworthy, organizations benefit from determining whether the platform is actually useful, that is, whether it helps them create more value.
- Check whether the platform is secure: What happens to your data if you upload it to the platform? Does any information leave your firewall? Working closely with your security team, or hiring security advisors, is crucial if you want to be able to rely on the security of your AI system.
- Determine your ethical threshold and evaluate all systems and organizations against it: If the models you invest in must be explainable, define with absolute precision a common, empirical definition of explainability across the organization, with upper and lower tolerable limits, and measure proposed systems against those limits (a minimal sketch of such a threshold check follows this list). Do the same for every ethical principle your organization deems non-negotiable when it comes to the use of AI.
- Set your accuracy goals and don't stray from them: It can be tempting to adopt a system that underperforms because it serves as a first pass before human work. However, if its results fall below the accuracy goal you have defined as acceptable for your organization, you risk poor-quality output and a greater workload for your employees. Most often, low accuracy is either a model problem or a data problem, and both can be addressed with the right level of investment and focus.
- Decide how much control your organization needs and how it is defined: The degree of control you want decision-makers and operators to have over AI systems will determine whether you need a fully autonomous system, a semi-autonomous one, AI-assisted tooling, or, if no current AI system meets your organization's tolerance for sharing control, possibly no AI system at all.
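As referenced in step 3, here is a minimal sketch of what the threshold checks in steps 3 and 4 could look like in practice, assuming your organization has expressed its explainability and accuracy requirements as simple numeric limits. The metric names and limit values are hypothetical placeholders for whatever empirical definitions you agree on internally.

```python
# Hypothetical organizational thresholds; replace with your own agreed definitions.
THRESHOLDS = {
    "explainability": {"min": 0.70, "max": 1.00},  # share of decisions with a usable explanation
    "accuracy":       {"min": 0.85, "max": 1.00},  # share of answers rated correct on your test set
}

def evaluate_system(measurements: dict[str, float]) -> dict[str, bool]:
    """Check a candidate system's measured scores against each organizational threshold."""
    results = {}
    for metric, limits in THRESHOLDS.items():
        value = measurements.get(metric)
        results[metric] = value is not None and limits["min"] <= value <= limits["max"]
    return results

# Example: a candidate system measured on your own evaluation data
candidate_scores = {"explainability": 0.78, "accuracy": 0.81}
print(evaluate_system(candidate_scores))  # {'explainability': True, 'accuracy': False}
```

A check like this does not replace judgment, but it keeps the organization honest: a system that fails an agreed limit fails it in writing, not in a hallway conversation.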
In the age of artificial intelligence, it can be tempting to look for best practices or quick wins, but the truth is: no one has figured all of this out yet, and by the time they do, it will no longer be a differentiator for you or your organization.
So instead of waiting for the perfect solution or following the trends set by others, take the initiative. Assemble a team of champions and sponsors within your organization, tailor the AI Trust Equation to your specific needs, and start evaluating AI systems against it. The benefits of such an undertaking are not only economic, but also fundamental to the future of technology and its role in society.
Some technology companies recognize that market forces are moving in this direction and are working to develop appropriate commitments, controls and visibility into the performance of their AI systems, as in the case of Salesforce's Einstein Trust Layer, while others argue that any level of visibility would mean giving up a competitive advantage. You and your organization will need to decide how much trust you want to place both in the outputs of AI systems and in the organizations that create and maintain them.
The potential of AI is enormous, but it will only be realized when AI systems and the people who create them can gain and maintain trust within our organizations and society. The future of AI depends on it.