Driving better decisions with knowledge graphs
June 23, 2021
Knowledge graphs are an important and highly practical emerging technology. That’s why Accenture sponsored the Knowledge Graph Conference. We also organized the second annual Knowledge Graphs for Social Good workshop in collaboration with Lambert Hogenhout, Chief of Data Analytics for the United Nations.
Knowledge graph technologies offer a way to semantically represent and connect concepts across domains. What does that mean? Knowledge graphs are a tool that helps companies connect the dots – or more accurately, connect their data. They help resolve big enterprise challenges like data silos, lineage tracking, and domain data mapping. With these challenges tamed, companies can put their data to better use in decision-making.
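To make "connecting the dots" concrete, here is a minimal sketch – not from the conference material, and with all identifiers and values invented for illustration – of how a triple-based graph links records from two silos so that one query can traverse them as a single dataset:

```python
# Minimal sketch of linking two data silos with a triple-based graph.
# All IDs, predicates, and values below are hypothetical.

triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# A customer record from a CRM silo
add("crm:cust42", "type", "Customer")
add("crm:cust42", "name", "Acme Corp")

# An invoice record from a separate billing silo with its own local ID
add("billing:acct7", "type", "Account")
add("billing:acct7", "invoiceTotal", 1200)

# The linking step: assert that the two local IDs denote one real entity
add("crm:cust42", "sameEntityAs", "billing:acct7")

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Traverse the link: start from the CRM record and reach billing data
for account in objects("crm:cust42", "sameEntityAs"):
    name = objects("crm:cust42", "name")[0]
    total = objects(account, "invoiceTotal")[0]
    print(f"{name} owes {total}")  # prints "Acme Corp owes 1200"
```

Production knowledge graphs typically use an RDF store and SPARQL queries rather than in-memory sets, but the idea is the same: once silos share a graph, a link between local identifiers is all it takes to join them.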
KGC showcases the latest advances in knowledge graph implementations and applications. We delivered a keynote talk at the conference on accelerating industry data integration, covering our approach to building a knowledge graph that improves the data supply chain through data profiling and mapping of siloed data sources. This type of knowledge graph can also serve as a catalog for the data mesh, a paradigm shift in data platforms (more on that later).
Building on last year's workshop and our ongoing efforts to advance sustainability through technology, we also organized the second Knowledge Graphs for Social Good workshop. This year’s workshop discussed the use of knowledge graphs in applications ranging from preserving indigenous culture to pursuing gender equality. Tina Comes, Associate Professor at Delft University of Technology, delivered a keynote talk on “Data for Good – A Resilience Perspective.”
This talk focused on building a resilient environment. Dr. Comes showed how knowledge graphs can help communities respond to natural disasters and understand local policies by enabling information sharing among communities and across organizations. She also suggested using knowledge graphs to build a digital twin that represents the key elements of risk management and disaster resilience, predicting trends in the data and identifying the sources of problems.
Other talks showed how to build a knowledge graph solution with some very low-resource and low-connectivity hardware (for example, based on Raspberry Pi devices). This offers a way to bring the power of knowledge graph approaches to rural areas that may lack sophisticated tech infrastructure. Attendees also discussed how diverse traditional food recipes can be connected to various cultures, and how knowledge graphs can support building these connections for making more culturally aware intelligent agents. The workshop talks and discussions continued to show how knowledge graphs can contribute to ensuring fairness, sustainability, and transparency in automated systems.
Throughout the larger KGC conference, data mesh architectures came up frequently, and for good reason. Just as knowledge graphs are an evolution of master data management, the data mesh is an evolution of what people call data lakes. It's a paradigm shift in analytical data architecture: moving from everything-in-one-place monolithic approaches to a distributed architecture. And considering the very distributed nature of most large companies, the data mesh is a much more practical approach. This new architecture saves a lot of time that data scientists previously had to spend understanding, locating, and cleaning up the data they would need for a given use.
There are four “ingredients” in the data mesh approach: decentralizing data responsibility to the domains that serve it; treating data as a product; providing self-serve data infrastructure; and applying federated data governance.
These four “ingredients” are a big part of why knowledge graphs and the data mesh approach complement one another. We mentioned treating data as a product. Data products are the main component of the data mesh approach. They can be produced either closer to raw sources (for example, data coming from a wearable device like a smartwatch), or they can be more insight-oriented, derived from other data products (and therefore further from the original sources).
In either case, though, these data products need to meet the quality standards that make them user-friendly for their consumers. They should be complete, clean, documented, and accompanied by the code that maintains and serves them. The semantics defined locally for each data product should be linked to other data products in a standard way – and that’s only possible with the knowledge graph that emerges from the data mesh. That’s why we build machine learning approaches to make data products more contextualized and connected with each other.
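As a rough sketch of that idea – where the product names, columns, and vocabulary terms are invented for illustration – a catalog could connect data products through the shared terms their locally defined semantics map to:

```python
# Hypothetical sketch: each data product maps its local column names to
# shared vocabulary terms, so a catalog can discover connections between
# products without knowing their internal schemas.

wearable_product = {
    "name": "smartwatch-heart-rate",     # closer to raw sources
    "columns": {"uid": "vocab:PersonID", "bpm": "vocab:HeartRate"},
}

insights_product = {
    "name": "weekly-fitness-summary",    # derived, insight-oriented
    "columns": {"person": "vocab:PersonID", "score": "vocab:FitnessScore"},
}

def shared_terms(a, b):
    """Vocabulary terms two data products have in common -- the links
    from which a cross-product knowledge graph can emerge."""
    return set(a["columns"].values()) & set(b["columns"].values())

# The two products can be joined on the person identifier they share
print(shared_terms(wearable_product, insights_product))
```

The key design point is that each domain keeps its own local column names; only the mapping to shared terms is standardized, which is what lets the catalog link products across domains.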
Whether it’s for broader goals around social good or for specific industry use cases, knowledge graphs provide a path toward better data-driven decision-making. They’re the backbone of digital twins – another key technology that we’re exploring at Labs – powering both data integration and analytics. We’ve also used knowledge graphs to build a scriptless conversation platform, developed tools to extract causal graphs from unstructured text data, and developed a graph-based platform to discover and mitigate active cyberthreats.
Where will knowledge graphs drive innovation next? Watch this space to find out!
To learn more about the knowledge graph conference, visit the KGC website. For more information about the Knowledge Graphs for Social Good workshop, read the United Nations report on the workshop and watch the video recording. Our keynote on accelerating industry data integration is also available to watch.