update README #125
Conversation
Add quick start example for existing knowledge graphs
**Walkthrough**

README.md expanded with three documentation sections: Environment Configuration, Quick Start with an existing FalkorDB knowledge graph, and Creating Knowledge Graphs from Scratch. Content covers environment variable setup, ontology extraction, model configuration (LiteModel via KnowledgeGraphModelConfig), KnowledgeGraph instantiation, chat session initiation, and code samples. No code changes.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
autonumber
actor Dev as Developer
participant Env as Env Vars
participant SDK as GraphRAG SDK
participant DB as FalkorDB
participant LLM as LLM Provider
rect rgba(230,245,255,0.5)
note over Dev,Env: Quick Start (Existing KG)
Dev->>Env: Set FALKORDB_* and LLM key
Dev->>SDK: Initialize FalkorDB client & select graph
SDK->>DB: Connect and open graph
Dev->>SDK: Extract ontology from KG
SDK->>DB: Read schema/metadata
DB-->>SDK: Ontology data
Dev->>SDK: Configure KnowledgeGraph (LiteModel via ModelConfig)
SDK->>LLM: Initialize/validate model (if needed)
Dev->>SDK: Start chat_session & send message
SDK->>DB: Retrieve relevant graph context
SDK->>LLM: Generate response with graph context
LLM-->>SDK: Response
SDK-->>Dev: Chat reply
end
```

```mermaid
sequenceDiagram
autonumber
actor Dev as Developer
participant SDK as GraphRAG SDK
participant Src as Data Sources
participant Store as Disk
participant DB as FalkorDB
participant LLM as LLM Provider
rect rgba(240,255,240,0.5)
note over Dev,LLM: From-Scratch Flow
Dev->>SDK: Ontology.from_sources(Src)
SDK->>Src: Ingest & analyze sources
SDK-->>Dev: Ontology
Dev->>Store: Save ontology
Dev->>SDK: Configure model (LiteModel/openai) & KnowledgeGraph(ontology)
SDK->>LLM: Initialize model
Dev->>SDK: Start chat_session & query
SDK->>DB: (Optional) Persist/query graph if created
SDK->>LLM: Generate response using ontology/graph
LLM-->>SDK: Response
SDK-->>Dev: Chat reply
end
```
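To make the first diagram concrete, here is a minimal sketch of the quick-start flow it depicts, stitched together from the snippets quoted later in this review. The `LiteModel`/`KnowledgeGraphModelConfig` import paths and the ontology-extraction helper are assumptions, not confirmed SDK API.

```python
import os

from dotenv import load_dotenv          # pip install python-dotenv
from falkordb import FalkorDB
from graphrag_sdk import KnowledgeGraph
from graphrag_sdk.ontology import Ontology
# Assumed import paths; verify against the SDK before copying.
from graphrag_sdk.models.litellm import LiteModel
from graphrag_sdk.model_config import KnowledgeGraphModelConfig

load_dotenv()  # loads FALKORDB_* and the LLM API key from .env

host = os.getenv("FALKORDB_HOST", "127.0.0.1")
port = int(os.getenv("FALKORDB_PORT", "6379"))  # cast: env vars are strings

# Connect to the existing FalkorDB instance and open the graph.
db = FalkorDB(host=host, port=port)
graph = db.select_graph("kg_name")

# Hypothetical helper for extracting the ontology from an existing graph;
# substitute the SDK's actual extraction call.
ontology = Ontology.from_kg_graph(graph)

# Configure the model and query the graph through a chat session.
model = LiteModel(model_name="openai/gpt-4.1")
kg = KnowledgeGraph(
    name="kg_name",
    model_config=KnowledgeGraphModelConfig.with_model(model),
    ontology=ontology,
    host=host,
    port=port,
)
chat = kg.chat_session()
response = chat.send_message("Who is the director of the movie The Matrix?")
print(response["response"])
```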
**Estimated code review effort**

🎯 1 (Trivial) | ⏱️ ~3 minutes
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
README.md (5)
`121-128`: **Add dependency note for python-dotenv used in the snippet**

The example imports `load_dotenv`, but the installation step doesn't mention `python-dotenv`. Without it, the sample fails.

````diff
 ### Step 1: Creating Ontologies

 Automate ontology creation from unstructured data or define it manually - See [example](https://github.com/falkordb/GraphRAG-SDK/blob/main/examples/trip/demo_orchestrator_trip.ipynb)

+> Prerequisite for this snippet: `pip install python-dotenv`
+
 ```python
 from dotenv import load_dotenv
````

If you prefer, I can open a follow-up PR adding `python-dotenv` (and any optional model backends such as `litellm`) to the Dependencies section. Do you want me to draft that?
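For readers trying the prerequisite, a minimal sketch of what `python-dotenv` does here; the `.env` keys mirror the env vars referenced in this review, and the values are placeholders:

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

# Reads a .env file in the working directory, e.g.:
#   FALKORDB_HOST=127.0.0.1
#   FALKORDB_PORT=6379
#   OPENAI_API_KEY=<your key>   (placeholder)
load_dotenv()

host = os.getenv("FALKORDB_HOST", "127.0.0.1")
port = int(os.getenv("FALKORDB_PORT", "6379"))  # env vars arrive as strings
```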
`177-177`: **Correct grammar in user-facing text**

Replace "for start a conversation" with "to start a conversation."

```diff
-At this point, you have a Knowledge Graph that can be queried using this SDK. Use the method `chat_session` for start a conversation.
+At this point, you have a Knowledge Graph that can be queried using this SDK. Use the method `chat_session` to start a conversation.
```
`210-210`: **Fix broken anchor link to Step 1**

The link points to `#how-to-use`. The Step 1 section anchor is `#step-1-creating-ontologies`.

```diff
-See the [Step 1](#how-to-use) section to understand how to create Knowledge Graph objects for the agents.
+See the [Step 1](#step-1-creating-ontologies) section to understand how to create Knowledge Graph objects for the agents.
```
`309-324`: **Fix syntax error: missing comma before host in KnowledgeGraph(...)**

There's a missing trailing comma after `qa_prompt`; this will cause a syntax error in the sample.

```diff
 kg = KnowledgeGraph(
     name="kg_name",
     model_config=KnowledgeGraphModelConfig.with_model(model),
     ontology=ontology,
     cypher_system_instruction=cypher_system_instruction,
     qa_system_instruction=qa_system_instruction,
     cypher_gen_prompt=cypher_gen_prompt,
     cypher_gen_prompt_history=cypher_gen_prompt_history,
-    qa_prompt=qa_prompt
+    qa_prompt=qa_prompt,
     host="127.0.0.1",
     port=6379,
     # username=falkor_username, # optional
     # password=falkor_password # optional
 )
```
`212-251`: **Add missing import for `KGAgent`**

The Agents example in the README uses `KGAgent` but does not import it (the other classes, `KnowledgeGraph`, `KnowledgeGraphModelConfig`, and `LiteModel`, are already imported earlier in the document). Please add the following import immediately before the snippet so that it runs standalone:

````diff
 ### Agents

 ```python
+from graphrag_sdk.agents.kg_agent import KGAgent

 # Define the model
 model = LiteModel(model_name="openai/gpt-4.1")
````

This ensures readers can execute the example without encountering a `NameError` when instantiating `KGAgent`.
🧹 Nitpick comments (5)
README.md (5)
`45-45`: **Fix heading level to satisfy markdownlint (MD001) and improve structure**

Under the H1 "How to use," this should be an H2, not H3.

```diff
-### Environment Configuration
+## Environment Configuration
```
`156-173`: **Make Step 2 snippet self-contained or call out prerequisites**

This block references `model` and `sources`, which are defined in Step 1. Readers who jump here may hit a `NameError`. Either restate those variables or add a clear note.

```diff
 # After approving the ontology, load it from disk.
 ontology_file = "ontology.json"
 with open(ontology_file, "r", encoding="utf-8") as file:
     ontology = Ontology.from_json(json.loads(file.read()))

+# Assumes `model` and `sources` were defined in Step 1 above.
+# If running this cell independently, define them before continuing.
 kg = KnowledgeGraph(
     name="kg_name",
     model_config=KnowledgeGraphModelConfig.with_model(model),
     ontology=ontology,
     host="127.0.0.1",
     port=6379,
     # username=falkor_username, # optional
     # password=falkor_password # optional
 )

 kg.process_sources(sources)
```

---

`183-186`: **Standardize print usage to match the Quick Start example**

Earlier you access `response["response"]`; here you print the raw response. Aligning improves consistency and reduces confusion.

```diff
-response = chat.send_message("Who is the director of the movie The Matrix?")
-print(response)
-response = chat.send_message("How this director connected to Keanu Reeves?")
-print(response)
+response = chat.send_message("Who is the director of the movie The Matrix?")
+print(response["response"])
+response = chat.send_message("How is this director connected to Keanu Reeves?")
+print(response["response"])
```
`258-272`: **Orchestrator example: import and API contract**

Add the import for `Orchestrator` and verify that `ask` returns an object with the `output` attribute as shown.

```diff
+# Import if needed
+from graphrag_sdk.agents import Orchestrator
+
 # Initialize the orchestrator while giving it the backstory.
 orchestrator = Orchestrator(
     model,
     backstory="You are a trip planner, and you want to provide the best possible itinerary for your clients.",
 )

 # Register the agents that we created above.
 orchestrator.register_agent(restaurants_agent)
 orchestrator.register_agent(attractions_agent)

 # Query the orchestrator.
-runner = orchestrator.ask("Create a two-day itinerary for a trip to Rome. Please don't ask me any questions; just provide the best itinerary you can.")
-print(runner.output)
+runner = orchestrator.ask("Create a two-day itinerary for a trip to Rome. Please don't ask me any questions; just provide the best itinerary you can.")
+print(runner.output) # Verify this attribute; if ask returns str/dict, print accordingly.
```

---

`60-115`: **Optional: align section hierarchy under "How to use"**

Given "Environment Configuration" is now H2, "Quick Start with Existing Knowledge Graph" and "Creating Knowledge Graphs from Scratch" should also be H2 (they already are). Consider keeping only a single H1 at the very top of the README for consistency across the document.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 68621967fc015cae42e919f26bf1f5da2482ac1c and fd49d0278ae629baa52e2907fba3fc4842eb3cc1.

📒 Files selected for processing (1)

- `README.md` (1 hunks)

🧰 Additional context used

🪛 markdownlint-cli2 (0.17.2)

README.md

45-45: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3

(MD001, heading-increment)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)

- GitHub Check: test (openai/gpt-4.1)

🔇 Additional comments (1)

README.md (1)

`89-91`: **Verify default model/provider for LiteModel() and consider making it explicit**

Comment claims default is OpenAI GPT-4.1. Please confirm this is accurate in the current release and consider specifying explicitly to avoid ambiguity.

```diff
-model = LiteModel() # Default is OpenAI GPT-4.1, can specify different model
+model = LiteModel(model_name="openai/gpt-4.1") # Make provider/model explicit
```
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
`309-324`: **Syntax error in example: missing comma after qa_prompt argument.**

The code block under "Additional Enhancement: Configuring your prompts" is currently invalid Python. You need a comma after `qa_prompt` before `host`.

Apply this diff to fix it:

```diff
     cypher_gen_prompt=cypher_gen_prompt,
     cypher_gen_prompt_history=cypher_gen_prompt_history,
-    qa_prompt=qa_prompt
+    qa_prompt=qa_prompt,
     host="127.0.0.1",
     port=6379,
     # username=falkor_username, # optional
     # password=falkor_password # optional
 )
```
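For copy-paste convenience, the corrected call in full; a sketch assuming the surrounding `*_instruction` and `*_prompt` variables are defined as in the README section under review:

```python
# Corrected constructor: note the comma after qa_prompt.
# Assumes model, ontology, and the prompt variables are defined above,
# as in the README's "Configuring your prompts" section.
kg = KnowledgeGraph(
    name="kg_name",
    model_config=KnowledgeGraphModelConfig.with_model(model),
    ontology=ontology,
    cypher_system_instruction=cypher_system_instruction,
    qa_system_instruction=qa_system_instruction,
    cypher_gen_prompt=cypher_gen_prompt,
    cypher_gen_prompt_history=cypher_gen_prompt_history,
    qa_prompt=qa_prompt,
    host="127.0.0.1",
    port=6379,
    # username=falkor_username,  # optional
    # password=falkor_password,  # optional
)
```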
♻️ Duplicate comments (2)
README.md (2)
`75-81`: **Port casting fixed correctly (matches prior review feedback).**

The example now casts `FALKORDB_PORT` to `int` when constructing `FalkorDB`. This resolves the client connection issues noted previously.
`93-101`: **Consistent int casting for KnowledgeGraph constructor.**

Good job mirroring the port casting here as well; it keeps the examples consistent and prevents subtle type bugs.
🧹 Nitpick comments (6)
README.md (6)
`45-58`: **Heading level increment violation (MD001).**

After the H1 "How to use", this should be an H2 rather than H3.

Apply this minimal change:

```diff
-### Environment Configuration
+## Environment Configuration
```
`88-91`: **Be explicit about the model instead of relying on a possibly changing default.**

Avoid "magic defaults" in docs. Make the model explicit and configurable via an env var.

Apply this diff:

```diff
-# Configure model and create GraphRAG instance
-model = LiteModel() # Default is OpenAI GPT-4.1, can specify different model
-model_config = KnowledgeGraphModelConfig.with_model(model)
+# Configure model and create GraphRAG instance
+# Example: set LLM_MODEL=openai/gpt-4.1 (or another LiteLLM-supported model)
+model = LiteModel(model_name=os.getenv("LLM_MODEL", "openai/gpt-4.1"))
+model_config = KnowledgeGraphModelConfig.with_model(model)
```

If "openai/gpt-4.1" isn't the recommended/current default for `LiteModel` in this SDK, please adjust the default string accordingly.
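Spelled out as a standalone snippet (the `LLM_MODEL` variable name and the import paths are illustrative assumptions, not SDK conventions):

```python
import os

# Assumed import paths; adjust to the SDK's actual module layout.
from graphrag_sdk.models.litellm import LiteModel
from graphrag_sdk.model_config import KnowledgeGraphModelConfig

# Resolve the model ID from the environment so docs don't bake in a default
# that may drift; the fallback keeps the choice explicit and visible.
model = LiteModel(model_name=os.getenv("LLM_MODEL", "openai/gpt-4.1"))
model_config = KnowledgeGraphModelConfig.with_model(model)
```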
`64-71`: **Unify import style for Ontology to reduce confusion.**

Two different import styles are used across the README. Prefer a single approach throughout.

For consistency with later sections, consider this change:

```diff
-from graphrag_sdk import KnowledgeGraph
-from graphrag_sdk.ontology import Ontology
+from graphrag_sdk import KnowledgeGraph, Ontology
```

Alternatively, switch the Step 1 example to use the same submodule import as here; just keep it consistent.
`175-186`: **Grammar fix in Step 3 description and output consistency.**

Minor grammar tweak, plus aligning the print with earlier examples that access `response["response"]`.

```diff
-At this point, you have a Knowledge Graph that can be queried using this SDK. Use the method `chat_session` for start a conversation.
+At this point, you have a Knowledge Graph that can be queried using this SDK. Use the method `chat_session` to start a conversation.
@@
-response = chat.send_message("Who is the director of the movie The Matrix?")
-print(response)
+response = chat.send_message("Who is the director of the movie The Matrix?")
+print(response["response"])
@@
-response = chat.send_message("How this director connected to Keanu Reeves?")
-print(response)
+response = chat.send_message("How is this director connected to Keanu Reeves?")
+print(response["response"])
```

Confirm that `chat.send_message` returns a dict with `"response"` in all flows; otherwise keep `print(response)` here.
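Until that return type is confirmed, a defensive variant tolerates either shape (pure illustration; it assumes nothing beyond `send_message` itself):

```python
# Works whether send_message returns a dict with a "response" key or a raw string.
answer = chat.send_message("Who is the director of the movie The Matrix?")
print(answer["response"] if isinstance(answer, dict) else answer)
```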
`212-251`: **Missing imports in the Agents example.**

`KGAgent` is used but never imported in this block, which will confuse readers copying the snippet.

Add the appropriate import (verify the correct module path in the SDK):

```diff
+# from graphrag_sdk.agents import KGAgent

 # Define the model
 model = LiteModel(model_name="openai/gpt-4.1")
```

If `KGAgent` lives under a different namespace, please update the path accordingly.
`258-263`: **Missing import in the Orchestrator example.**

`Orchestrator` is referenced without an import in this snippet.

Add the import (verify the correct path):

```diff
+# from graphrag_sdk.agents import Orchestrator

 # Initialize the orchestrator while giving it the backstory.
 orchestrator = Orchestrator(
     model,
     backstory="You are a trip planner, and you want to provide the best possible itinerary for your clients.",
 )
```
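Related to the earlier note about `ask`'s return contract, a hedged way to print the result whatever its shape (illustrative only):

```python
# Defensive print: handles an object exposing `.output`, a plain string, or a dict.
runner = orchestrator.ask(
    "Create a two-day itinerary for a trip to Rome. "
    "Please don't ask me any questions; just provide the best itinerary you can."
)
print(getattr(runner, "output", runner))
```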
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- `README.md` (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
README.md
45-45: Heading levels should only increment by one level at a time
Expected: h2; Actual: h3
(MD001, heading-increment)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: test (openai/gpt-4.1)
🔇 Additional comments (1)
README.md (1)
`60-113`: **Update model recommendations in README to be future-proof**

Rather than hard-coding exact model IDs (which may change over time), present them as illustrative examples and point users to each provider's official catalog for the latest names.
• In README.md (lines 60-113), update the LiteModel comment:
```diff
-model = LiteModel() # Default is OpenAI GPT-4.1, can specify different model
+model = LiteModel() # Default is an OpenAI GPT-4 series model (e.g. "gpt-4.1"); refer to OpenAI's model catalog for up-to-date IDs
```

• If you mention Google Gemini or Azure OpenAI elsewhere, apply the same pattern:
```diff
-LiteModel("gemini-2.0-flash")
+LiteModel("gemini-2.0-flash") # example only; see Google's Gemini model list for the latest versions
```

• Add a brief note or link under "Quick Start with Existing Knowledge Graph" directing users to:
- OpenAI model documentation
- Google Cloud AI Platform model list
- Azure OpenAI deployments guide
This keeps the README concise yet durable, and ensures users always find the most current model IDs without frequent updates here.