Promptly Technologies LLC mcp structured thinking

by Promptly-Technologies-LLC

A TypeScript Model Context Protocol (MCP) server to allow LLMs to programmatically construct mind maps to explore an idea space, with enforced "metacognitive" self-reflection

What is Promptly Technologies LLC mcp structured thinking?

Structured Thinking MCP Server

A TypeScript Model Context Protocol (MCP) server based on Arben Ademi's Sequential Thinking Python server. The motivation for this project is to allow LLMs to programmatically construct mind maps to explore an idea space, with enforced "metacognitive" self-reflection.

Setup

Set the tool configuration in Claude Desktop, Cursor, or another MCP client as follows:

{
  "structured-thinking": {
    "command": "npx",
    "args": ["-y", "structured-thinking"]
  }
}

Overview

Thought Quality Scores

When an LLM captures a thought, it assigns that thought a quality score between 0 and 1. This score is used, in combination with the thought's stage, to provide "metacognitive" feedback that helps the LLM "steer" its thinking process.
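As a rough sketch of this mechanism (the stage names come from this documentation, but the multiplier values, the 0.4 threshold, and the function name are illustrative assumptions, not the server's actual code), the feedback can be thought of as a function of the self-reported score and a stage-based multiplier, as described under Limitations below:

// Illustrative sketch only: multiplier values, the 0.4 threshold, and the
// function name are assumptions, not the server's actual implementation.
type Stage = "Problem Definition" | "Analysis" | "Ideation";

const stageMultiplier: Record<Stage, number> = {
  "Problem Definition": 1.0,
  "Analysis": 0.9,
  "Ideation": 1.1,
};

// Combine the self-reported score (0 to 1) with a stage-based multiplier and
// turn the result into steering feedback for the LLM.
function metacognitiveFeedback(selfReportedScore: number, stage: Stage): string {
  const adjusted = Math.min(1, selfReportedScore * stageMultiplier[stage]);
  return adjusted < 0.4
    ? `This ${stage} thought scored low; consider revising it or trying a different strategy.`
    : `Good ${stage} thought; continue.`;
}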

Thought Stages

Each thought is tagged with a stage (e.g., Problem Definition, Analysis, Ideation) to help manage the life-cycle of the LLM's thinking process. In the current implementation, these stages drive the server's feedback: if the LLM spends too long in a given stage or is having low-quality thoughts in the current stage, the server will steer the LLM toward other stages, or at least toward thinking strategies that are atypical of the current stage. (E.g., in deductive mode, the LLM will be encouraged to consider more creative thoughts.)
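A minimal sketch of this kind of steering heuristic (the thresholds, message wording, and function name are assumptions for illustration, not the server's actual logic):

// Suggest a change of stage when the LLM lingers too long in one stage or its
// recent thoughts there score poorly. Threshold values are assumed.
function steeringHint(stage: string, thoughtsInStage: number, averageScore: number): string | null {
  const MAX_THOUGHTS_PER_STAGE = 5;   // assumed limit before nudging the LLM onward
  const LOW_QUALITY_THRESHOLD = 0.5;  // assumed cutoff for "low-quality" thoughts
  if (thoughtsInStage > MAX_THOUGHTS_PER_STAGE || averageScore < LOW_QUALITY_THRESHOLD) {
    return `You have spent ${thoughtsInStage} thoughts in ${stage}; ` +
           `try another stage, or a strategy atypical of ${stage}.`;
  }
  return null; // no steering needed
}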

Thought Branching

The LLM can spawn “branches” off a particular thought to explore different lines of reasoning in parallel. Each branch is tracked separately, letting you manage scenarios where multiple solutions or ideas should coexist.
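For example, a branch can be started by passing branch_from_thought and branch_id to the capture_thought tool documented below (all values here are illustrative):

{
  "thought": "Alternatively, frame the task as a scheduling problem.",
  "thought_number": 4,
  "total_thoughts": 10,
  "next_thought_needed": true,
  "stage": "Ideation",
  "branch_from_thought": 3,
  "branch_id": "scheduling-approach",
  "score": 0.7,
  "tags": ["scheduling", "alternative-framing"]
}

Later thoughts can pass the same branch_id to continue that line of reasoning.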

Memory Management

The server maintains a "short-term" memory buffer of the LLM's ten most recent thoughts, plus a "long-term" memory of older thoughts that can be retrieved by tag, so that the entire history of the LLM's thinking on a given topic can be summarized.
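Conceptually, the memory layout resembles the sketch below (the type and class names are assumptions for illustration, not the server's actual code):

// "Short-term" buffer of the ten most recent thoughts plus a tag-searchable
// "long-term" store, as described above. Names are illustrative only.
interface ThoughtRecord {
  id: number;
  content: string;
  stage: string;
  score?: number;
  tags: string[];
}

class ThoughtMemory {
  private shortTerm: ThoughtRecord[] = []; // ten most recent thoughts
  private longTerm: ThoughtRecord[] = [];  // older thoughts, retrievable by tag

  add(thought: ThoughtRecord): void {
    this.shortTerm.push(thought);
    if (this.shortTerm.length > 10) {
      // The oldest short-term thought graduates to long-term memory.
      this.longTerm.push(this.shortTerm.shift()!);
    }
  }

  // Long-term thoughts that share at least one tag with the given thought.
  relevantTo(thought: ThoughtRecord): ThoughtRecord[] {
    return this.longTerm.filter((t) =>
      t.tags.some((tag) => thought.tags.includes(tag))
    );
  }
}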

Limitations

Naive Metacognitive Monitoring

Currently, the quality metrics and metacognitive feedback are derived mechanically from naive stage-based multipliers applied to a single self-reported quality score.

As future work, I plan to add more sophisticated metacognitive feedback, including semantic analysis of thought content, thought verification processes, and more intelligent monitoring for reasoning errors.

Lack of User Interface

Currently, the server stores all thoughts in memory, and does not persist them to a file or database. There is also no user interface for reviewing the thought space or visualizing the mind map.

As future work, I plan to incorporate a simple visualization client so the user can watch the thought graph evolve.

MCP Tools

The server exposes the following MCP tools:

capture_thought

Create a thought in the thought history, with metadata about the thought's type, quality, content, and relationships to other thoughts. An example call is sketched after the parameter list below.

Parameters:

  • thought: The content of the current thought
  • thought_number: Current position in the sequence
  • total_thoughts: Expected total number of thoughts
  • next_thought_needed: Whether another thought should follow
  • stage: Current thinking stage (e.g., "Problem Definition", "Analysis")
  • is_revision (optional): Whether this revises a previous thought
  • revises_thought (optional): Number of thought being revised
  • branch_from_thought (optional): Starting point for a new thought branch
  • branch_id (optional): Identifier for the current branch
  • needs_more_thoughts (optional): Whether additional thoughts are needed
  • score (optional): Quality score (0.0 to 1.0)
  • tags (optional): Categories or labels for the thought
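An illustrative capture_thought call (parameter names are from the list above; the values are made up):

{
  "thought": "The core problem is ambiguity in the requirements, not missing data.",
  "thought_number": 1,
  "total_thoughts": 8,
  "next_thought_needed": true,
  "stage": "Problem Definition",
  "score": 0.8,
  "tags": ["requirements", "problem-framing"]
}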

revise_thought

Revise a thought in the thought history, with metadata about the thought's type, quality, content, and relationships to other thoughts.

Parameters:

  • thought_id: The ID of the thought to revise
  • All parameters from capture_thought, supplying the revised values (see the example below)
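An illustrative revise_thought call (assuming a numeric thought ID; values are made up), raising the score of the thought captured above:

{
  "thought_id": 1,
  "thought": "The core problem is ambiguity in the acceptance criteria, not missing data.",
  "thought_number": 1,
  "total_thoughts": 8,
  "next_thought_needed": true,
  "stage": "Problem Definition",
  "score": 0.85,
  "tags": ["requirements", "problem-framing"]
}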

retrieve_relevant_thoughts

Retrieve thoughts from long-term storage that share tags with the specified thought.

Parameters:

  • thought_id: The ID of the thought to retrieve relevant thoughts for

get_thinking_summary

Generate a comprehensive summary of the entire thinking process.

clear_thinking_history

Clear all recorded thoughts and reset the server state.

License

MIT

Frequently Asked Questions

What is MCP?

MCP (Model Context Protocol) is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications, providing a standardized way to connect AI models to different data sources and tools.

What are MCP Servers?

MCP Servers are lightweight programs that expose specific capabilities through the standardized Model Context Protocol. They act as bridges between LLMs like Claude and various data sources or services, allowing secure access to files, databases, APIs, and other resources.

How do MCP Servers work?

MCP Servers follow a client-server architecture where a host application (like Claude Desktop) connects to multiple servers. Each server provides specific functionality through standardized endpoints and protocols, enabling the model to access data and perform actions through them.

Are MCP Servers secure?

Yes. MCP Servers are designed with security in mind: they run locally with explicit configuration and permissions, typically require user approval for actions, and expose only the capabilities they are configured to provide, which helps prevent unauthorized access and protect data privacy.

Related MCP Servers

chrisdoc hevy mcp

mcp

sylphlab pdf reader mcp

An MCP server built with Node.js/TypeScript that allows AI agents to securely read PDF files (local or URL) and extract text, metadata, or page counts. Uses pdf-parse.

pdf-parse, typescript, nodejs

aashari mcp server atlassian bitbucket

Node.js/TypeScript MCP server for Atlassian Bitbucket. Enables AI systems (LLMs) to interact with workspaces, repositories, and pull requests via tools (list, get, comment, search). Connects AI directly to version control workflows through the standard MCP interface.

atlassian, repository, mcp

aashari mcp server atlassian confluence

Node.js/TypeScript MCP server for Atlassian Confluence. Provides tools enabling AI systems (LLMs) to list/get spaces & pages (content formatted as Markdown) and search via CQL. Connects AI seamlessly to Confluence knowledge bases using the standard MCP interface.

atlassian, mcp, confluence

prisma prisma

Next-generation ORM for Node.js & TypeScript | PostgreSQL, MySQL, MariaDB, SQL Server, SQLite, MongoDB and CockroachDB

cockroachdb, go, mcp

Zzzccs123 mcp sentry

mcp sentry for typescript sdk

mcp, typescript

zhuzhoulin dify mcp server

mcp

zhongmingyuan mcp my mac

mcp

zhixiaoqiang desktop image manager mcp

An MCP server for managing desktop images: viewing details, compressing, moving, and more (implemented entirely with Trae).

mcp

zhixiaoqiang antd components mcp

An MCP service for querying Ant Design components | an MCP service that reduces hallucinations when generating Ant Design component code, providing system prompts, component documentation, API documentation, code examples, and changelog queries.

design, antd, api
