
cognee - memory layer for AI apps and Agents


AI Agent responses you can rely on.

Build dynamic Agent memory using scalable, modular ECL (Extract, Cognify, Load) pipelines.

More on use cases.

Features

  • Interconnect and retrieve your past conversations, documents, images, and audio transcriptions
  • Reduce hallucinations, developer effort, and cost
  • Load data to graph and vector databases using only Pydantic (see the sketch after this list)
  • Manipulate your data while ingesting from 30+ data sources
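
To make the Pydantic point concrete, here is a minimal sketch of the idea: plain Pydantic models describe your data, and nested models give a graph store natural nodes and edges to work with. The classes below are illustrative, not cognee's actual API:

from pydantic import BaseModel

# Hypothetical domain models, not cognee classes.
class Person(BaseModel):
    name: str

class Company(BaseModel):
    name: str
    # A nested relationship like this maps naturally to graph edges,
    # while field values can be embedded into a vector store.
    employees: list[Person]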

Get Started

Get started quickly with a Google Colab notebook or starter repo.

Contributing

Your contributions are at the core of making this a true open source project. Any contributions you make are greatly appreciated. See CONTRIBUTING.md for more information.

📦 Installation

You can install Cognee with pip, poetry, uv, or any other Python package manager.

With pip

pip install cognee
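
The other package managers mentioned above work just as well; these are the standard commands for any PyPI package:

With poetry

poetry add cognee

With uv

uv add cognee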

💻 Basic Usage

Setup

import os
os.environ["LLM_API_KEY"] = "YOUR_OPENAI_API_KEY"

You can also set the variables by creating a .env file, using our template. To use different LLM providers, check out our documentation for more info.
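
A minimal .env needs just the variable from the snippet above; provider-specific settings will differ, so see the template for the full set:

LLM_API_KEY="YOUR_OPENAI_API_KEY"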

Simple example

This script will run the default pipeline:

import cognee
import asyncio


async def main():
    # Add text to cognee
    await cognee.add("Natural language processing (NLP) is an interdisciplinary subfield of computer science and information retrieval.")

    # Generate the knowledge graph
    await cognee.cognify()

    # Query the knowledge graph
    results = await cognee.search("Tell me about NLP")

    # Display the results
    for result in results:
        print(result)


if __name__ == '__main__':
    asyncio.run(main())

Example output:

  Natural Language Processing (NLP) is a cross-disciplinary and interdisciplinary field that involves computer science and information retrieval. It focuses on the interaction between computers and human language, enabling machines to understand and process natural language.
  

Graph visualization: Open in browser.

For more advanced usage, have a look at our documentation.

Understand our architecture

Demos

  1. What is AI memory: Learn about cognee
  2. Simple GraphRAG demo
  3. cognee with Ollama: cognee with local models

Code of Conduct

We are committed to making open source an enjoyable and respectful experience for our community. See CODE_OF_CONDUCT for more information.

💫 Contributors

Star History

Star History Chart


Frequently Asked Questions

What is MCP?

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications, providing a standardized way to connect AI models to different data sources and tools.

What are MCP Servers?

MCP Servers are lightweight programs that expose specific capabilities through the standardized Model Context Protocol. They act as bridges between LLMs like Claude and various data sources or services, allowing secure access to files, databases, APIs, and other resources.

How do MCP Servers work?

MCP Servers follow a client-server architecture where a host application (like Claude Desktop) connects to multiple servers. Each server provides specific functionality through standardized endpoints and protocols, enabling Claude to access data and perform actions through the standardized protocol.
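
To make this concrete, here is a minimal sketch of a server built with the official MCP Python SDK (the server name and tool below are illustrative and unrelated to cognee):

from mcp.server.fastmcp import FastMCP

# Name the server; the host application discovers its capabilities.
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    # Serves over stdio by default, which is how hosts like
    # Claude Desktop launch and talk to local servers.
    mcp.run()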

Are MCP Servers secure?

Yes, MCP Servers are designed with security in mind. They run locally with explicit configuration and permissions, require user approval for actions, and include built-in security features to prevent unauthorized access and ensure data privacy.
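
For example, a host such as Claude Desktop only launches servers that are explicitly declared in its configuration file. A minimal sketch of that file (the server name and path are illustrative):

{
  "mcpServers": {
    "demo-server": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}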