Scalable LLM tool orchestration SDK with memory hooks, parallel multi-function execution, and GPT-4o (or any model) planning, built for production AI agents. An easy-to-understand alternative to MCP for parallel multi-tool calling with LLMs.
- ✅ Register tools with full JSON schema
- 🧠 Pre/post tool memory lifecycle hooks
- 🔍 Plans tools dynamically using the OpenAI `/v1/responses` API (⚠️ not Chat Completions)
- 🛠️ Parallel tool execution with context-rich synthesis
- 🔄 ESM + CommonJS ready for Node.js and serverless
- 🔌 Works with any GPT model (`gpt-4o`, `gpt-4-turbo`, etc.) with dynamic model control
```bash
npm install @gopluto_ai/llm-tools-orchestration
```
Set your OpenAI API key in `.env`:

```env
OPENAI_API_KEY=xxxxxxxxxxxx
```
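If your runtime does not load `.env` automatically, load it before importing the SDK. A minimal sketch, assuming the SDK reads `OPENAI_API_KEY` from `process.env` and using the `dotenv` package (a separate install, not bundled with this SDK):

```js
// Assumption: the SDK reads OPENAI_API_KEY from process.env and dotenv is
// installed separately (npm install dotenv).
import "dotenv/config"; // evaluated first, so .env is loaded before the SDK modules

import { planTools } from "@gopluto_ai/llm-tools-orchestration";
```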
```js
import {
  registerTool,
  planTools,
  executeParallelTools,
  synthesizeFinalReply,
  registerHookProcessor
} from "@gopluto_ai/llm-tools-orchestration";
import { getOpenAIResData } from "@gopluto_ai/llm-tools-orchestration/dist/openaiHelpers";

// Register a memory hook that runs before a tool's handler
registerHookProcessor("logStart", async (memory) => {
  console.log("🧠 Memory:", memory);
  return memory;
});

// Register a tool with a full JSON schema and a pre-hook
registerTool({
  type: "function",
  name: "get_stock_price",
  description: "Returns dummy stock price",
  parameters: {
    type: "object",
    required: ["ticker", "currency"],
    properties: {
      ticker: { type: "string" },
      currency: { type: "string" }
    }
  },
  preHooks: ["logStart"],
  handler: async ({ ticker, currency }) => {
    return { ticker, currency, price: 999.99 };
  }
});

// Message context passed to planning and synthesis
const messages = {
  sysprompt: "You are a stock price assistant.",
  userMessage: "What's the price of TSLA in USD?",
  conversationHistory: [],
  agentMemory: {},
  imageUrl: "",
  fileUrl: ""
};

// Plan which tools are needed, run them in parallel, then synthesize the reply
(async () => {
  const plan = await planTools(messages, getOpenAIResData, "gpt-4o");
  const results = await executeParallelTools(plan.neededTools, plan.args, { userId: "xyz" });
  const reply = await synthesizeFinalReply(messages.userMessage, results, messages, plan.tools, getOpenAIResData, "gpt-4o");
  console.log("🧠 Final AI Reply:", reply);
})();
```
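The feature list above also mentions post-tool hooks. A minimal sketch of the post side of the lifecycle, assuming the tool config accepts a `postHooks` array symmetrical to `preHooks` (that field name is an assumption; check `src/toolOrchestrator.ts` for the exact shape):

```js
// Assumption: postHooks mirrors preHooks and receives the memory object
// after the tool handler has run.
registerHookProcessor("logEnd", async (memory) => {
  console.log("🧠 Memory after tool run:", memory);
  return memory;
});

registerTool({
  type: "function",
  name: "get_stock_news",
  description: "Returns dummy headlines for a ticker",
  parameters: {
    type: "object",
    required: ["ticker"],
    properties: { ticker: { type: "string" } }
  },
  preHooks: ["logStart"],
  postHooks: ["logEnd"], // assumed field, symmetrical to preHooks
  handler: async ({ ticker }) => ({ ticker, headlines: ["Dummy headline"] })
});
```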
```
src/
├── index.ts              # Entry point
├── toolOrchestrator.ts   # Tool registration + planning
├── openaiHelpers.ts      # Handles OpenAI /v1/responses payloads
examples/
└── index.js              # CLI-ready use case
```
Unlike the typical `chat/completions` endpoint, this SDK uses the newer `/v1/responses` API, which supports multi-modal inputs (text, file, image) and tool usage natively.
This gives you:
- Context-rich messages (system + memory)
- Native tool calling structure
- Easy agent memory injection
- Full control over function outputs
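For example, prior memory and an image can ride along in the same `messages` object used in the Quick Start. The `agentMemory`, `imageUrl`, and `fileUrl` fields come from that example; the `conversationHistory` entry shape shown here is an assumption:

```js
import { planTools } from "@gopluto_ai/llm-tools-orchestration";
import { getOpenAIResData } from "@gopluto_ai/llm-tools-orchestration/dist/openaiHelpers";

const messages = {
  sysprompt: "You are a stock price assistant.",
  userMessage: "Does this chart match TSLA's current price?",
  // Assumed entry shape; adjust to whatever your helpers expect.
  conversationHistory: [
    { role: "user", content: "What's the price of TSLA in USD?" },
    { role: "assistant", content: "TSLA is trading at 999.99 USD." }
  ],
  agentMemory: { watchlist: ["TSLA"] },            // injected into the model context
  imageUrl: "https://example.com/tsla-chart.png",  // multi-modal input
  fileUrl: ""
};

const plan = await planTools(messages, getOpenAIResData, "gpt-4o");
```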
- Clone this repo
- Run `npm install && npm run build`
- Edit tools in `src/toolOrchestrator.ts`
- Submit PRs! (full local setup sketch below)
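A full local setup might look like the following; the repository URL is a placeholder, and running `examples/index.js` directly is an assumption based on the project structure above:

```bash
# <repo-url> is a placeholder; use the actual repository location
git clone <repo-url>
cd llm-tools-orchestration
npm install && npm run build
node examples/index.js   # assumes the example reads OPENAI_API_KEY from .env
```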
Email: shubham@e2ecapital.com
Contact: +91 9110035665
WhatsApp: https://wa.me/919110035665
MIT © GoPluto.ai