Like all other categories of capital investment — machinery, plants, patented technology, unique processes — we should be investing in data to produce an asset with an expected return. But this isn’t how it works much of the time. Instead, companies invest to connect disparate systems. The “return” comes in the form of a data lake or maybe even a data warehouse.
We can do better. We can turn data into knowledge with the right platform.
How? When the relationships between different data elements are visible and easily understood, the data is connected, and connected data becomes knowledge.
That knowledge asset — like a factory or a patented process — creates repeatable value over its useful life, which can last decades. Developing robust knowledge assets is the key to earning a return on your data.
In this post, we’ll define what it means to generate a return on data and show how GraphGrid makes it possible to earn that return by standing up a new knowledge asset from the ground up in as little as one day.
How to generate the greatest return on data
Let’s start with some core concepts, beginning with what we mean by generating a return on your data.
While there is an inherent value to connecting different data sources, a blended pool of information doesn’t automatically confer knowledge. It’s better to think of connecting systems as a first step.
Knowledge requires understanding. In a networked system of different data sources, we’re after the context, distance, and structure of individual data elements. These are the features that describe how information may — or may not — be connected.
That’s important. Making connections in data is the foundational task of unearthing knowledge, and it provides the building blocks of a knowledge asset that produces a return. But if connected data is the atomic substrate of a knowledge asset, what are the fundamental elements of connected data? What does it look like?
Defining it simply, connected data is information that has been enriched so that its relationships are surfaced and can be exploited. There are four basic elements that describe data in a networked environment (a minimal sketch follows the list):
- Nodes, which describe a person, place, or thing.
- Edges, which describe the relationships between nodes.
- Labels, which describe the action that occurs as a result of these relationships.
- Properties, which describe useful context about those actions.
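To make those four elements concrete, here is a minimal sketch of connected data as a tiny in-memory property graph. It’s written in plain Python with hypothetical names and fields, purely for illustration; it is not GraphGrid’s data model or API.

```python
from dataclasses import dataclass, field

# Illustrative structures for the four elements described above.
# All names and fields here are hypothetical, not a GraphGrid API.

@dataclass
class Node:
    id: str    # a person, place, or thing
    kind: str  # e.g. "Patient", "Physician", "Clinic"

@dataclass
class Edge:
    source: str  # id of the node where the relationship starts
    target: str  # id of the node where the relationship ends
    label: str   # the action that occurs as a result of the relationship
    properties: dict = field(default_factory=dict)  # context about that action

# A tiny connected-data example: a patient books a skin cancer screening.
nodes = [
    Node("patient-42", "Patient"),
    Node("dr-lee", "Physician"),
]
edges = [
    Edge("patient-42", "dr-lee", label="BOOKED_SCREENING",
         properties={"screening_type": "skin cancer", "date": "2021-09-15"}),
]

# Once relationships are explicit, surfacing them is just following edges.
for e in edges:
    print(f"{e.source} -[{e.label}]-> {e.target} {e.properties}")
```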
Knowledge graphs bring this connected data to life. In a knowledge graph, patterns emerge more easily over time because the defining feature of the graph is the set of relationships captured within it: the nodes, the edges, the labels, and the properties. The greater the depth of understanding of the relationships captured in an organization’s knowledge graph, the greater the potential return on data.
The durable value of knowledge graphs
Collaboration is key. Data engineers partner with business leaders to define the core relationships that confer value. Think of this process like mapping a new territory: every path that leads somewhere is defined and described. Teams add details over time, turning isolated pieces of raw information into a functional mosaic: a knowledge graph of connected data in which the relationships present in a data set are clear, visible, and searchable.
The beauty of knowledge graphs is that they scale easily: new data can either adopt the basic relationships defined by the existing model or update the model to include new data elements. In this way, a knowledge graph allows us to map the structure and context of data as it’s loaded into a graph database.
For example, a knowledge graph that describes how a physician’s office serves patients will include all the people, places, and things that define that office (i.e., nodes), the relationships between those elements (i.e., edges), the action that occurs as a result of these relationships (i.e., labels), and useful context about those actions (i.e., properties). Data loaded into a knowledge graph is intrinsically contextual. So in the example above, the graph could show the doctors most frequently booked to administer prostate exams. Or skin cancer screenings. Or vaccines.
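Sticking with that hypothetical office, a question like “which doctors are booked most often for a given screening” becomes a simple traversal once the relationships are modeled. The sketch below uses networkx as a stand-in for a graph database; the physicians, patients, and bookings are made up for illustration.

```python
from collections import Counter

import networkx as nx

# A hypothetical physician's-office graph, not GraphGrid's actual data model.
# Each edge carries a label (the action) and properties (context about it).
g = nx.MultiDiGraph()
g.add_edge("patient-1", "dr-lee",  label="BOOKED", exam="skin cancer screening")
g.add_edge("patient-2", "dr-lee",  label="BOOKED", exam="skin cancer screening")
g.add_edge("patient-3", "dr-shah", label="BOOKED", exam="vaccine")
g.add_edge("patient-4", "dr-lee",  label="BOOKED", exam="vaccine")

def most_booked(graph: nx.MultiDiGraph, exam_type: str):
    """Count BOOKED edges for a given exam type, grouped by physician."""
    counts = Counter(
        physician
        for _, physician, data in graph.edges(data=True)
        if data.get("label") == "BOOKED" and data.get("exam") == exam_type
    )
    return counts.most_common()

print(most_booked(g, "skin cancer screening"))  # [('dr-lee', 2)]
```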
A knowledge asset is like a lever in that it can help an organization use data to scale faster and more efficiently. The knowledge graph is the fulcrum that provides the leverage. Connected data forms the beam, and the more there is, the sturdier the mechanism.
Why investing in knowledge assets accelerates business success
There are several potential practical benefits of having a rich, well-defined knowledge asset built on GraphGrid. Patterns in the data are easy to spot because the underlying knowledge graph has already defined the relationships. This, in turn, makes for fertile ground for training A.I. and machine learning algorithms and creates new opportunities for automating actions that once demanded high levels of human intervention.
Think about this in financial terms. Just as the team that puts capital to work investing in R&D yields new products that drive revenue (generating a return on the invested capital), the organization that leverages knowledge surfaced from data to make more decisions at greater speed and lower cost is generating a meaningful return on that data. Both are crucial to the long-term pursuit of sustainable growth.
Now let’s get practical. Once we’ve committed to investing in a knowledge asset, how do we go from whiteboard to code? GraphGrid is the answer. It offers a full-featured set of tools that make it possible to stand up all the elements of an enterprise knowledge asset in as little as one day. These tools include:
- Text-based search of the graph database in a single, common language. Continuous indexing supports real-time search results. The simpler the search process, the more likely it is that a wide range of users will take advantage of and benefit from the knowledge asset you’ve built.
- Showmes for creating dynamic, easily understandable queries that can be repeated over and over. Showmes do what they say: reveal patterns and answer questions worth asking more than once. In that sense, they make it easy to unearth the knowledge captured in the asset as it grows over time.
- Natural Language Processing (NLP) for turning any sort of text-based data into data that fits neatly into the knowledge graph. With NLP, documents, social media posts, news feeds, and more become sources of searchable, real-time insight (a rough sketch of the general idea follows this list).
- Fuze for integrating and synchronizing messaging between modules in GraphGrid. Fuze ensures the knowledge graph remains up-to-date as it ingests and processes new information.
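GraphGrid’s own NLP module isn’t shown here, but the general idea behind turning text into graph-ready data looks something like the sketch below. It uses spaCy for entity extraction, assumes the en_core_web_sm model is installed, and the output shape is an illustrative assumption rather than GraphGrid’s API.

```python
import spacy  # an illustrative stand-in; GraphGrid ships its own NLP module

# Rough sketch of NLP-driven enrichment: pull entities out of free text so
# they can be proposed as nodes (and, with more work, edges) in the graph.
nlp = spacy.load("en_core_web_sm")  # assumes this model has been downloaded

text = "Dr. Lee administered a flu vaccine at the Riverside clinic on Tuesday."
doc = nlp(text)

# Each named entity becomes a candidate node; the sentence that connects them
# is a candidate edge (the action), with context captured as properties.
candidate_nodes = [{"value": ent.text, "kind": ent.label_} for ent in doc.ents]
print(candidate_nodes)
# Roughly: [{'value': 'Lee', 'kind': 'PERSON'}, {'value': 'Tuesday', 'kind': 'DATE'}, ...]
# (exact entities and labels depend on the model)
```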
GraphGrid provides these tools so that business unit leaders can work directly with data engineers in developing a new knowledge asset. No prior experience as a data analyst or with knowledge graph query languages is required to create value.
In the case of one telemedicine provider, internal developers and business leaders teamed up to stand up a knowledge asset that connects to a mobile app for capturing medical data. The resulting insights are leading to improved care and better health outcomes. So far, the provider has committed a fraction of the budget that would be required to hire a team of consultants. Cost and treatment benefits should continue to compound as the app gets more use, connecting doctors and patients in more relevant ways.
For example, a patient seeking clinical treatment for anxiety could be presented with a personalized diagnostic page that provides a more comprehensive approach to treatment than just finding a therapist — especially if knowledge surfaced in the data shows that, say, 30 minutes of weekly cardio may reduce symptoms by 50 percent or more. GraphGrid is enabling the provider to use knowledge to intervene early and effectively, expanding care and producing a return on data that benefits both the provider and its patients.
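As a sketch of how that kind of recommendation might be surfaced, the hypothetical graph below links a condition to candidate interventions, each annotated with an effectiveness property. The names and figures are placeholders echoing the example above; they are not clinical data or GraphGrid’s schema.

```python
import networkx as nx

# Hypothetical condition-to-intervention graph with placeholder figures.
g = nx.MultiDiGraph()
g.add_edge("anxiety", "therapy",       label="TREATED_BY", symptom_reduction=0.40)
g.add_edge("anxiety", "weekly cardio", label="TREATED_BY", symptom_reduction=0.50)

def intervention_options(graph: nx.MultiDiGraph, condition: str):
    """List interventions connected to a condition, most effective first."""
    options = [
        (intervention, data["symptom_reduction"])
        for _, intervention, data in graph.edges(condition, data=True)
        if data.get("label") == "TREATED_BY"
    ]
    return sorted(options, key=lambda pair: pair[1], reverse=True)

print(intervention_options(g, "anxiety"))
# [('weekly cardio', 0.5), ('therapy', 0.4)]
```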
Get started now for free
On Sept. 15, 2021, GraphGrid unveiled the downloadable package in two editions: Enterprise and Ecommerce. Both editions share the same core feature set. The Enterprise edition adds Natural Language Processing (NLP) capabilities so you can turn text-based information into connected data. The Ecommerce edition adds tools to create a smart shopping experience for customers and manage things like payment processing, invoicing, and order tracking.
The current 1.4 release will be followed by a bi-annual release schedule. Current customers get 8 CPU cores, 32 GiB of memory, and 1 GPU across all editions and their features, which can be used through production for FREE.
Need more capacity or interested in having us run it in our cloud? Contact us now!