We are entering a world of synthetic realness, where AI-generated data convincingly reflects the physical world. In this world of synthetic data, images and chatbots, spoofing and fakes, we face hard questions: What’s real, what’s not, and, perhaps more importantly, when do we care?
Synthetic realness can push AI to new heights in healthcare. Synthetic data can stand in for patient datasets in research, training or other applications. Synthetic content, such as AI-generated text, video and audio, could be used to combat malicious deepfakes and misinformation in healthcare by spreading truth from trusted sources to counter the damage done by bad actors.
Indiscernible fakes exist, so as synthetic realness progresses, we must focus on authenticity. We’ll begin to evaluate “Is this authentic?” based on four primary tenets:
Provenance – what is its history?
Policy – what are its restrictions?
People – who is responsible?
Purpose – what is it trying to do?
That said, using these technologies pushes healthcare into controversial terrain. It raises tough questions about how to use generative AI authentically when bad actors are using the same technologies to create deepfakes and disinformation that undermine trust.
The analysis: The rise of the unreal world
Healthcare enterprises are becoming architects of the unreal world. First, synthetic data is being used to train AI models in ways that real-world data practically cannot or should not be used. This realistic (yet unreal) data can be shared freely, maintaining the same statistical properties as the original while protecting confidentiality and privacy. It can also be engineered for greater diversity to counter bias, thus overcoming the pitfalls of real-world data.
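As a toy illustration of the idea, assuming a simple numeric dataset, one could fit the first- and second-order statistics of real records and sample entirely new synthetic records from them. (Real synthetic-data generators are far more sophisticated; the variable names and values here are purely hypothetical.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" patient measurements: age, systolic BP, cholesterol.
real = rng.multivariate_normal(
    mean=[55.0, 130.0, 200.0],
    cov=[[100.0, 30.0, 20.0],
         [30.0, 225.0, 40.0],
         [20.0, 40.0, 400.0]],
    size=5000,
)

# Fit the real data's mean and covariance...
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# ...and sample brand-new synthetic records from them. No synthetic row
# corresponds to any real patient, yet the aggregate statistics carry over.
synthetic = rng.multivariate_normal(mu, sigma, size=5000)

print(np.allclose(real.mean(axis=0), synthetic.mean(axis=0), atol=2.0))
```

The sketch shows why synthetic data can be shareable: downstream analysis sees the same statistical shape, while no individual record maps back to a real person.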
A second use of synthetic content is to make AI creation and interaction more human-like. In people’s everyday lives, chatbots and virtual assistants are increasingly common and convenient, and new technologies could make them more realistic than ever. However, while pursuing these technologies in healthcare we must remember: Just because we can use AI doesn’t mean we should.
…% of healthcare executives agree that their organizations are committed to authenticating the origin of their data and genuine use of AI.
Things to look out for: Bad actors in the unreal world
Currently, the biggest threat for healthcare organizations in the synthetic world is the actors who use it maliciously. Researchers have found that deepfakes can easily infiltrate medical imagery, such as MRIs, X-ray scans and CT scans. Deceptive attackers can bypass systems with outdated security to add or remove tumors on imagery. The potential results? False detection of tumors and misdiagnoses that lead to a lack of treatment or unnecessary treatment.5
These bad actors could also pose as ordinary citizens spreading synthetic content via social media. Deepfakes could have a significant negative impact on patient outcomes. If healthcare consumers are duped by synthetic content, they’ll blame the healthcare organization, or they won’t become patients in the first place. Lives are at stake, as are enterprises’ reputations.
Actions to take: Elevating authenticity
Using generative AI in an authentic way means taking heed of provenance, policy, people and purpose. By observing these four tenets, healthcare organizations can gain confidence not only in their decisions to trust others but also in their use of generative AI such that others can trust them.
One way to verify the provenance of digital content and identity – thereby demonstrating authenticity – is through use of distributed ledger technology (DLT). No matter what technologies you use, establishing provenance will be critical as your organization increasingly deals with potential deepfakes and disinformation.
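To make the provenance idea concrete, here is a minimal sketch of an append-only hash chain, the core mechanism underlying distributed ledgers. It is not a real DLT, and all event names and field values are illustrative, but it shows the key property: tampering with any earlier record breaks verification of the whole chain.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a provenance record chained to its predecessor's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Return a list of (record, hash) pairs forming an append-only chain."""
    chain, prev = [], "0" * 64
    for rec in records:
        prev = record_hash(rec, prev)
        chain.append((rec, prev))
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; any altered record breaks the chain."""
    prev = "0" * 64
    for rec, h in chain:
        if record_hash(rec, prev) != h:
            return False
        prev = h
    return True

# Hypothetical provenance trail for a medical image.
events = [
    {"event": "image_acquired", "device": "scanner-07"},
    {"event": "image_shared", "recipient": "research-lab"},
]
chain = build_chain(events)
print(verify_chain(chain))          # the intact chain verifies

chain[0][0]["device"] = "spoofed"   # tamper with history...
print(verify_chain(chain))          # ...and verification fails
```

A production ledger adds replication across parties and consensus so no single actor can quietly rewrite history, but the tamper-evidence shown here is the essence of why provenance records become trustworthy.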
Prepare to deal with the challenges that arise with the use of AI. Take stock of the policies your business must adhere to with respect to generative AI specifically. Much of this space is yet to be defined, so where there isn’t guidance, you’ll need to define your own policies based on your services, products, customers and values.
Having governance structures in place is imperative. Decide who is responsible for having these tough conversations and what committees are drafting internal policies. Who will be held accountable if privacy is compromised? Who will be the point person responsible if your organization falls prey to a deepfake or disinformation attack?
Define the purpose behind the use of synthetic data and content. What are the key metrics that can demonstrate the advantage of synthetic over non-synthetic content? For instance, if your organization uses a chatbot just to cut costs, there’s a good chance it’s not living up to its intended purpose of serving people.
As areas like generative AI progress and models improve, healthcare enterprises are building the unreal world. But whether we use synthetic data and content in ways that improve the world or fall victim to malicious actors is yet to be determined. Most likely, we will land somewhere in the expansive in-between, and that’s why elevating authenticity within your healthcare organization is so important. Authenticity is the compass and the framework that will guide genuine use of AI across industries, use cases and time by considering provenance, policy, people and purpose. Ultimately, it will unlock new attitudes toward and experiences with AI, unleashing the benefits of the unreal world.