
Blog

How businesses can avoid the dark side of AI

3-MINUTE READ

December 5, 2022

Across the internet, in both paid and free versions, users can employ artificial intelligence to create magnificent pieces of art, compose songs in any genre and mood, and even give digital birth to characters who can ask and answer questions in a manner indistinguishable from actual humans. As artificial intelligence seeps into all parts of our lives, it is upending the traditional notion of “real” and exposing the risks of living in a world where the lines between human reality and technology are blurred.

And yet, the benefits of AI are hard to ignore. For businesses, the opportunities are endless. Imagine being able to solve customer problems on a personalized, one-on-one basis -- but at scale.

While AI is still in the first stages of development, the opportunities -- and risks -- are already manifesting across the business world. Anyone who has texted a customer-service department of a large company has most likely engaged with an AI-capable computer -- at least for the initial conversation. Digital humanoids, whether a disembodied voice or a fully rendered visual figure, provide customized responses and solutions relevant to the specifics of the individual inquiry, making the customer feel appreciated, which in turn promotes customer engagement and loyalty.

Today, examples of artificial intelligence being used in business abound. In the Middle East, a telecommunications provider uses AI-driven virtual assistants—which can communicate in different Arab dialects as well as in English—to deftly handle some 1.65 million customer calls each month. A food-delivery service uses machine learning to anticipate food-ordering trends for specific days and times, and to map out the most efficient delivery routes based on real-time traffic data. And a leading solar-panel installer uses deep-learning algorithms to help customers design their own systems, giving them an installation precisely suited to their needs and budget while adhering to the complex technical requirements of solar arrays.

As AI progresses, consumers will not care whether a customer-service representative is real or digital, only that their needs or concerns are being addressed and resolved to their satisfaction. However, these applications, which employ thousands of algorithms evaluating millions of data points to make real-time decisions independent of human input or oversight, can expose companies to considerable levels of reputational and financial risk. So what can be done? To mitigate these risks, organizations must implement artificial intelligence technology in concert with the principles and protocols of Responsible AI: Provenance, Purpose, Policy, and People.

In the simplest terms, provenance means having a complete history, such as a ledger, that is available for examination. In the digital world, distributed-ledger technology (DLT), also known as blockchain, provides this provenance. Unlike conventional databases, DLT data is never overwritten, only appended, and once recorded, it cannot be changed or deleted with current technology. Information presented by an AI-generated humanoid -- especially a deepfake representation of a prominent figure -- can be verified by tracing the blockchain back to the real-world source, or dismissed entirely if no such ledger exists. DLT is finding robust applications across the business world, from financial institutions to major news organizations. As former IBM CEO Ginni Rometty famously noted: “What the internet did for communications, blockchain will do for trusted transactions.”
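The append-only, tamper-evident property described above can be illustrated with a minimal sketch: each entry stores a hash of its predecessor, so altering any past record breaks the chain. This is a toy model for illustration only, not a real DLT implementation -- the class and field names are hypothetical.

```python
import hashlib
import json


class ProvenanceLedger:
    """Toy append-only, hash-chained ledger illustrating DLT-style provenance."""

    def __init__(self):
        self.entries = []  # each entry links to the hash of its predecessor

    def _hash(self, payload):
        # Deterministic hash of the entry's content plus its back-link
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def append(self, record):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"record": record, "prev_hash": prev_hash}
        entry["hash"] = self._hash({"record": record, "prev_hash": prev_hash})
        self.entries.append(entry)

    def verify(self):
        """Return True only if no entry has been altered since it was appended."""
        prev_hash = "0" * 64
        for entry in self.entries:
            recomputed = self._hash({"record": entry["record"],
                                     "prev_hash": entry["prev_hash"]})
            if entry["prev_hash"] != prev_hash or entry["hash"] != recomputed:
                return False
            prev_hash = entry["hash"]
        return True
```

Tracing an AI-generated asset back to its source amounts to walking this chain to the original entry; if verification fails, or no ledger exists at all, the content cannot be trusted.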

To consumers, purpose is the reason why a business takes a specific action. The perception of purpose can have a significant impact on a company’s fortunes. A business that donates significant sums to charity without fanfare will be rewarded with customer loyalty, while a company that loudly trumpets such donations may be criticized for blatant brand-building.

Purpose plays an important role in the metaverse as well: consumers must be assured that the business is providing these new digital products and services to enhance their experience, and not, for example, to save money and cut jobs. AI interfaces can deliver a highly professional style of engagement free of the unconscious biases or personality quirks of human operators -- especially important when dealing with intimate issues. However, inappropriate use of customer-facing AI can erode confidence in the company as a whole, so the decision to automate user interfaces must be grounded in serious consideration of purpose.

A focus on policy ensures that the digital experience provided by the business fits into the existing regulatory environment. Some governments now require companies to inform users when they are interacting with an AI-generated interface, especially if the engagement could result in a financial transaction. Regulatory bodies are also restricting the use of AI when dealing with vulnerable community members such as the elderly. Much of this policymaking is navigating uncharted waters, so individual companies are compelled to devise their own rules and procedures and are encouraged to share them with the wider business community.

Aligning corporate policy with government frameworks conveys authenticity and builds consumer trust. One example is the Veritas initiative, launched by the Monetary Authority of Singapore -- the nation’s central bank and financial regulator -- together with other financial institutions, to counter illegal and unethical use of AI in the finance industry. The initiative has produced a toolkit laying out detailed guidance on how AI itself can monitor AI-powered financial instruments to ensure they are fair, ethical, accountable, and transparent.
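One common building block of such fairness monitoring is a simple statistical check on model decisions across customer groups. The sketch below computes a demographic-parity gap -- the difference in approval rates between groups. It is an illustrative metric only, not part of the Veritas toolkit, and the function name is an assumption.

```python
def demographic_parity_gap(decisions, groups):
    """Largest absolute difference in approval rates across groups.

    decisions: list of 0/1 outcomes (e.g., loan approved or not)
    groups:    list of group labels, one per decision
    A gap of 0 means all groups are approved at the same rate.
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())
```

A monitoring pipeline might flag any model whose gap exceeds an agreed threshold for human review, which is one concrete way “AI monitoring AI” can be operationalized.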

Last but not least, people are the final bastion of responsibility. As AI services proliferate through an organization, chains of responsibility can develop gaps, creating the potential for data breaches, customer-privacy violations, or even cyberattacks from competitors or hackers. Here, vertical and lateral communication across departments and management chains is needed to anticipate and devise procedures for the unintended consequences of making an AI algorithm the “face” of the company. Novartis, for example, is developing accountability mechanisms in parallel with AI applications to keep the new technologies aligned with the company’s core values.

In the coming years, the boundaries between “real” and “fake” will disappear almost entirely as users accessing the metaverse will be able to see, hear, touch, and even smell objects, unable to discern whether they are interacting with representations of physical objects or wholly digital artifacts. To reap real benefits from this “unreal world”, companies must incorporate responsible AI practices and policies throughout the organization to guard against risk and remain fully grounded in reality.

WRITTEN BY

Budiono

Managing Director – Artificial Intelligence Lead