GraphGrid’s Connected Data Platform (CDP) v1.4 is our first full suite of essential connected data capabilities bundled into a developer-focused downloadable package. We’ve brought together all of the core graph database services and tools to set up your knowledge graph, get your data loaded in, connect it with context, and start working with it to uncover new insights.
Create and manage a custom graph model that meets your existing and future data requirements with a library of node types, edge types, properties, and constraints to define the structure of the graph data model.
Create and manage customized dynamic APIs that use Geequel to return a targeted result set of graph-based data. As new data is introduced, a Showme incorporates it into the results it produces in real time.
Chain requests following the execution of a Showme.
Chain Showmes and API endpoints in succession with data dependencies.
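Chaining can be pictured as feeding one request's result into the next request's parameters. The sketch below illustrates that idea in plain Python; the endpoint names, response shapes, and `transport` callable are invented for illustration and are not part of GraphGrid's API.

```python
# Hypothetical sketch of chaining two Showme-style endpoints, where the
# output of the first request satisfies a data dependency of the second.

def run_showme(name, params, transport):
    """Execute a Showme by name; `transport` stands in for an HTTP client."""
    return transport(name, params)

def chain(showmes, initial_params, transport):
    """Run Showmes in succession, passing each result as the next one's params."""
    params = initial_params
    for name in showmes:
        params = run_showme(name, params, transport)
    return params

# Fake transport simulating two endpoints with a data dependency.
def fake_transport(name, params):
    if name == "top-authors":
        return {"authorIds": [1, 2]}
    if name == "articles-by-authors":
        return {"articles": [f"article-of-{i}" for i in params["authorIds"]]}
    raise ValueError(name)

result = chain(["top-authors", "articles-by-authors"], {}, fake_transport)
```

Here the second endpoint only makes sense once the first has produced the `authorIds` it depends on, which is the kind of ordering the chaining feature enforces.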
GraphGrid’s robust and efficient search capabilities leverage Elasticsearch index policies and include the ability to perform searches within the UI.
Creation and management of index policies to be used by Search.
Enable Elasticsearch scripts for use within Search by exposing them via API endpoints.
*Enterprise edition ONLY*
Data extraction (or annotating text) is the process by which text is turned into a graph structure. Nodes representing text to be processed are used by GraphGrid’s NLP to create a graph representation of the original text. This representation inside the graph is the basis for all of NLP’s other language processing features.
*Ecommerce edition ONLY*
Similarity scoring is based on a term frequency-inverse document frequency (TF-IDF) method. Each document in a corpus is embedded as a TF-IDF vector, enabling similarity calculations between any two documents. Term frequency is calculated once per document and stored on the graph in HAS_TF relationships, while inverse document frequency is calculated as needed.
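The TF-IDF scheme described above can be sketched in plain Python. This is an illustrative stand-in, not GraphGrid's implementation: GraphGrid stores term frequencies on the graph via HAS_TF relationships, whereas here everything lives in dictionaries, and cosine similarity is assumed as the comparison function.

```python
import math
from collections import Counter

def tf(doc):
    """Term frequency: computed once per document."""
    counts = Counter(doc)
    total = len(doc)
    return {term: c / total for term, c in counts.items()}

def idf(corpus):
    """Inverse document frequency: computed across the corpus as needed."""
    n = len(corpus)
    df = Counter(term for doc in corpus for term in set(doc))
    return {term: math.log(n / d) for term, d in df.items()}

def tfidf_vector(doc, idf_scores):
    """Embed a document as a sparse TF-IDF vector."""
    tfs = tf(doc)
    return {t: tfs[t] * idf_scores.get(t, 0.0) for t in tfs}

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [["graph", "data", "platform"],
          ["graph", "database", "services"],
          ["natural", "language", "processing"]]
idf_scores = idf(corpus)
v0 = tfidf_vector(corpus[0], idf_scores)
v1 = tfidf_vector(corpus[1], idf_scores)
sim = cosine(v0, v1)  # positive, since the two documents share "graph"
```

The two similar documents score above zero because they share a term, while either compared with itself scores exactly one.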
A document’s summary is composed of the k most relevant sentences. A sentence’s relevancy is determined by the average of its words’ TF-IDF scores (averaging normalizes for sentence length).
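The selection step can be sketched as follows: score each sentence by the average TF-IDF score of its words, then keep the k highest-scoring sentences in their original order. The word scores below are hand-made for illustration; in practice they would come from the TF-IDF calculation described above.

```python
# Minimal extractive-summarization sketch: average word TF-IDF per sentence,
# take the top k, and preserve the original sentence order.

def summarize(sentences, word_scores, k):
    def score(sentence):
        words = sentence.lower().split()
        return sum(word_scores.get(w, 0.0) for w in words) / len(words)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)[:k]
    return [sentences[i] for i in sorted(ranked)]

word_scores = {"graph": 0.9, "model": 0.7, "the": 0.01, "a": 0.01, "is": 0.01}
sentences = [
    "the graph model",        # high average score
    "the a is",               # stopwords only, low score
    "a graph is a graph",     # diluted by stopwords
]
summary = summarize(sentences, word_scores, k=2)
```

Dividing by the sentence length is what keeps a long sentence full of filler words from outranking a short, information-dense one.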
Ability to take a document and form a story arc: an ordered list of documents that derive from one another.
Two documents are embedded as sentence vectors and compared to determine whether one paraphrases the other.
Event-based processing of new nodes dropped directly onto the graph. Makes use of a message broker (RabbitMQ). Future plans include adding support for SQS and Kafka.
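The flow can be pictured as a producer publishing node-creation events to a broker and a consumer handing each one to an NLP handler. In the sketch below, `queue.Queue` stands in for RabbitMQ, and the event schema is invented for illustration; it is not GraphGrid's actual message format.

```python
import json
import queue

# In-memory stand-in for the message broker (RabbitMQ in GraphGrid's case).
broker = queue.Queue()

def publish_node_created(node_id, labels):
    """Producer side: emit an event when a new node lands on the graph."""
    broker.put(json.dumps({"event": "NODE_CREATED",
                           "nodeId": node_id,
                           "labels": labels}))

def consume(handler):
    """Consumer side: drain pending events and hand each to a handler."""
    processed = []
    while not broker.empty():
        event = json.loads(broker.get())
        processed.append(handler(event))
    return processed

publish_node_created(42, ["Article"])
results = consume(lambda e: f"annotate node {e['nodeId']}")
```

Decoupling the producer from the consumer through a broker is what lets new nodes dropped onto the graph trigger processing without the writer knowing anything about NLP.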
NLP interacts with an existing graph without disrupting the original graph structure.
Document cards summarize an article to present its details in the UI.
Responds to changes in graph data and sends messages to a broker.
Keeps triggers in sync and active across databases.
Bi-directional movement of data between two instances to maintain consistency.
Offloads high volume data writing from the database to offer batch processing with error handling and retry capabilities.
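Batching with retry can be sketched like this. The batch size, retry count, and `flush` callback are illustrative assumptions, not GraphGrid's configuration: the point is that writes are grouped into fixed-size batches, a failing batch is retried with backoff, and batches that exhaust their retries are surfaced for error handling rather than lost.

```python
import time

def write_in_batches(records, flush, batch_size=2, retries=3, backoff=0.0):
    """Group records into batches; retry each failing batch before giving up."""
    failed = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        for attempt in range(retries):
            try:
                flush(batch)
                break
            except IOError:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
        else:
            failed.append(batch)  # retries exhausted; surface for handling
    return failed

# Fake flush that fails once before succeeding, to exercise the retry path.
attempts = {"n": 0}
written = []

def flaky_flush(batch):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise IOError("transient write failure")
    written.extend(batch)

failed = write_in_batches([1, 2, 3, 4], flaky_flush)
```

The transient failure on the first batch is absorbed by the retry loop, so all records are eventually written and nothing lands in `failed`.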
Projects are a means to organize and segment multiple areas of analysis; users can create, share, and delete graph projects.
Add/edit/delete nodes, properties, edges, and constraints in a graph database.
Navigate the graph and utilize other UI capabilities like Search, Showmes, and NLP.