Knowledge Connections

Executive Roundtable on Knowledge Graph Adoption in the Enterprise

November 30, 2020 | Connected Data London 

The Enterprise Knowledge Graph Foundation organized two roundtable sessions designed to explore the business process and implementation issues associated with knowledge graph adoption.  This is our summary of the conversation.


Moderator: Michael Atkin, Principal and Director, Enterprise Knowledge Graph Foundation


Business Adoption Panel: how EKG gets embedded into an organization 

  • Teresa Tung, Managing Director, Accenture Labs

  • Steven Gustafson, CTO, Noonum

  • Tom Plasterer, Director of Bioinformatics, Data Science & AI, AstraZeneca 

  • Bethany Sehon, Director, Metadata & Semantics, Enterprise Data Management, Capital One

Implementation Panel: how EKG gets orchestrated in complex operational environments

  • Ben Gardner, Solution Architect - Knowledge Management, AstraZeneca

  • Laurent Alquier, Manager, Emerging Technologies Partnerships, Johnson & Johnson

  • Katariina Kari, Knowledge Engineer, Zalando SE

  • Natasa Varytimou, Director, Data Architect, UBS


Meeting Hypothesis


The EKG equation starts with the recognition that data incongruence is a liability (a real problem).  We know the problem is solvable and that it is easier to solve with semantic technology than it is with conventional technology.  We also know that orchestrating transitions of this magnitude is difficult.  Making the leap from “columns” to “concepts” is achievable, but not automatic.  Unraveling existing infrastructures that serve mission-critical applications is expensive and not realistic.  And we know that leadership is required to overcome organizational inertia and facilitate change.


We (the community of data practitioners) are enamored with the potential of knowledge graphs because of the capabilities provided by semantic standards.  We know that SME knowledge can be precisely modeled.  We know that these models can be expressed in standards to achieve “unambiguous shared meaning.”  We know that quality is based on mathematical axioms (structural validation).  We know that access can be controlled at a datapoint level (lineage, provenance and entitlement) - and that we can unshackle ourselves from the rigid schemas that characterize our conventional environment.  
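
The claim about "unambiguous shared meaning" can be sketched concretely. Below is a minimal, stdlib-only illustration of the triple model behind semantic standards: two teams publish facts as (subject, predicate, object) statements that reuse the same IRIs for shared concepts, so integration is a simple union rather than a schema-mapping exercise. All IRIs and names here are hypothetical, not a real vocabulary.

```python
# Hypothetical shared ontology namespace -- identity and meaning travel with the data.
EX = "https://example.com/ontology/"

# Team A: customer reference data, expressed as (subject, predicate, object) triples.
team_a = {
    ("https://example.com/id/cust-42", EX + "hasName", "Acme Corp"),
    ("https://example.com/id/cust-42", EX + "locatedIn", "https://example.com/id/uk"),
}

# Team B: transaction data, produced independently but reusing the same customer IRI.
team_b = {
    ("https://example.com/id/txn-7", EX + "hasCounterparty", "https://example.com/id/cust-42"),
    ("https://example.com/id/txn-7", EX + "hasAmount", 1500),
}

# Because identity is carried in the data itself, integration is set union --
# no join-key negotiation, no rigid shared schema.
graph = team_a | team_b

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Flexible query: which transactions involve Acme Corp? Follow the shared IRI.
acme = "https://example.com/id/cust-42"
txns = match(graph, p=EX + "hasCounterparty", o=acme)
```

Real deployments would use an RDF library and a triple store rather than Python sets, but the design point is the same: the schema lives in the data as reusable concept IRIs, which is what frees the model from the "columns" of conventional environments.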


We are approaching a new dawn of content interoperability that will enhance organizational productivity.   The data incongruence problem is solvable, and solving it will revolutionize the management of knowledge in an interconnected world.   What is required to expedite adoption?


Business Adoption Panel


  • First topic: cognition and whether stakeholders really understand and buy into the need to solve problems of data incongruence. 

    • Stakeholders are aware of the overall data challenge (and its importance)

    • The knowledge graph champion is now more business driven with an increasing focus on the value of data insight to the organization (quick, flexible and real-time information).  

    • There is an emerging “data centric” mindset and a “data first” value proposition that is derived from the business challenge of the user. 

  • Second topic: maturity of knowledge graph initiatives using the EKG Foundation’s Maturity Model  [Level 1 = pilots and POCs, building initial team, demonstrate capabilities, project funding … Level 2 = extensible platform, shared ontologies for related use cases, LOB funding … Level 3 = enterprise platform for mission-critical applications, connected inventory, CoE for governance coordination]

    • Knowledge graph perspective has shifted from “academic” to “applications.”  

    • Participants have moved up the maturity curve from low level 1 (applications-centric) to high level 2 (related use cases/knowledge graph formality).  

    • Beware of the “lone champion” - their departure from the firm jeopardizes the knowledge graph journey (a risk that is minimized if the use case is tied directly to revenue).

  • Third topic: driving use cases and what firms are really doing with knowledge graph

    • There is no real killer app for knowledge graph, but most are focused on the big three of connected inventory, data integration and flexible query.  

    • These are mostly “control focus” applications, while the business wants value from the data itself (this is the meaning of data centricity).

    • That’s because there is an upfront investment to get the KG operational.  But once the foundation is established, it is a lot easier to add additional use cases.  So, this is really a case of getting over the “semantic hump.”

  • Fourth topic: the underlying foundational capabilities of web-based standards

    • Value is derived from the self-describing nature of the knowledge graph (standards for meaning and identity, concept reuse and datapoint authorization).  Self-describing data adds to “democratization.”  

    • The message from everyone: don’t focus on the ontology, focus on business value.   


Implementation Panel 


  • First topic: the relationship of the knowledge graph to the business user (what’s important in terms of adoption). 

    • Tools for non-data scientists are increasingly critical to the success of knowledge graph initiatives.  

    • Data visualization is particularly important (even more critical than the capabilities of the EKG).  

    • Users are focused on business value and don’t really care (or need to understand) semantic technology or the concepts of ontology.

  • Second topic: the battle for mindshare within the organization and how KG works with existing infrastructure.

    • Knowledge graph is complementary to existing data infrastructures.  This is not about replacing infrastructure (data lakes and repositories).  

    • Overcoming the “relational masses” is hard (skill sets are missing) and making the transition from “columns” to “concepts” is difficult for most of the existing technology world.  

    • The knowledge engineer is critical to the success of EKG and must become a master of EKG technology.  


  • Third topic: the reality of managing the data pipeline. 

    • ETL is still required (and difficult) but the challenge has shifted from data pipeline management to semantic engineering and the alignment of meaning.  There is a shortage of semantic engineers.  

    • Three data management delusions: “single version of truth” (multi-versions are reality; context is critical); one canonical data model (not feasible, too complex to build and maintain); requirements-led design (hard-wired dependencies are the opposite of reusability)

  • Fourth topic: how governance in a knowledge graph environment is different from governance for core data management.

    • Governance has shifted away from heavy organizational structure and MDM.  Data governance is no longer viewed as a “forced mandate” (toward data centricity).  

    • The knowledge graph simplifies governance and shifts the discussion toward new governance requirements (IRI naming conventions, conceptual modeling, meaning resolution, ontology management, data integration, DataOps/testing).
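
One of the new governance requirements named above, IRI naming conventions, lends itself to lightweight automated checks. The sketch below shows one way a governance team might encode such a convention as a validation rule; the namespace, path structure and casing rules are hypothetical examples, since real conventions vary by organization.

```python
import re

# Hypothetical enterprise IRI convention:
#   - corporate HTTPS namespace
#   - "ontology/" for concept IRIs, "id/" for instance IRIs
#   - lowercase kebab-case local names (no spaces, fragments or camelCase)
IRI_CONVENTION = re.compile(
    r"^https://example\.com/"        # corporate namespace (illustrative)
    r"(ontology|id)/"                # separate concept IRIs from instance IRIs
    r"[a-z0-9]+(-[a-z0-9]+)*$"       # lowercase kebab-case local name
)

def conforms(iri: str) -> bool:
    """True if the IRI follows the (hypothetical) enterprise naming convention."""
    return IRI_CONVENTION.fullmatch(iri) is not None

good = conforms("https://example.com/id/cust-42")
bad = conforms("https://example.com/Customer#42")   # wrong path structure and casing
```

In practice a check like this would run as part of DataOps/testing, alongside structural validation of the ontology itself, so that convention violations are caught before new IRIs are published.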