Doby’s ESA Pyxel Logs 🌙

🤖 Exploring MCP for AI-Assisted Tooling (WIP)

This article documents an ongoing exploration and reflects my current understanding. It is intentionally work-in-progress and not a finalized design.

Summary: Exploratory design notes on using the Model Context Protocol (MCP) to expose documentation, schemas, and examples to LLM-based tools in a structured and maintainable way.

Reference implementation:
HoloViz MCP

📖 Overview

As AI-assisted development tools become more common, a key challenge is providing large language models with structured, accurate, and up-to-date context about complex software systems.

The Model Context Protocol (MCP) proposes a standardized way to expose documentation, examples, and other project knowledge to LLMs via a dedicated server, rather than relying on ad-hoc prompt construction.

In practice, MCP servers are typically run locally or within controlled environments and are explicitly queried by compatible tools. MCP does not function as a hosted or always-on service; instead, it acts as an opt-in bridge that allows external clients to retrieve structured project knowledge on demand.
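The opt-in, query-on-demand pattern can be sketched with a toy stand-in. This is a stdlib-only illustration of the idea, not the real MCP SDK: the names `ContextServer`, `register`, and `get_resource` are hypothetical, as is the `docs://` URI scheme.

```python
from dataclasses import dataclass, field


@dataclass
class ContextServer:
    """Toy stand-in for an MCP-style server: holds named resources
    and returns them only when a client explicitly asks."""
    resources: dict[str, str] = field(default_factory=dict)

    def register(self, uri: str, content: str) -> None:
        # Curated project knowledge is registered up front...
        self.resources[uri] = content

    def get_resource(self, uri: str) -> str:
        # ...but nothing reaches a model unless it is queried on demand.
        if uri not in self.resources:
            raise KeyError(f"unknown resource: {uri}")
        return self.resources[uri]


server = ContextServer()
server.register("docs://panel/widgets", "Widgets are reactive UI components.")

# A compatible client would issue a request like this, per need:
print(server.get_resource("docs://panel/widgets"))
```

The point is the shape of the interaction: the server is a passive, local registry, and every piece of context crosses the boundary through an explicit request rather than a pre-built prompt.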

These notes document an ongoing exploration of MCP concepts, inspired by potential future applications in scientific tooling and developer-facing interfaces.

Exploration Context

  • Domain: AI-assisted tooling / developer experience
  • Focus: Model Context Protocol (MCP)
  • Reference: HoloViz MCP implementation
  • Type: Research / Design Notes / Work-in-Progress

🧠 Motivation

Many complex software projects rely on extensive documentation, examples, and configuration schemas. While this information is valuable to humans, it is not always easily consumable by AI assistants in a reliable or structured manner.

MCP offers an alternative approach: exposing curated project knowledge through a well-defined protocol, enabling AI tools to query relevant context on demand rather than relying on static prompts or incomplete embeddings.

This exploration is motivated by an interest in improving usability, discoverability, and correctness when AI systems assist users in configuring or understanding complex tools.

Separating curated, validated knowledge from the interfaces that consume it allows core software projects to remain the single source of truth, while enabling multiple interfaces—graphical, conversational, or programmatic—to interact with the same validated knowledge.


🔍 Reference: HoloViz MCP

The HoloViz MCP project serves as a concrete reference implementation of the Model Context Protocol. It demonstrates how documentation, examples, and API information can be exposed to LLMs via a dedicated MCP server.

  • Clear separation between project code and AI-facing context.
  • Structured access to documentation and examples.
  • Protocol-driven interaction instead of ad-hoc prompting.

Reviewing this implementation helps clarify both the strengths of MCP and the design questions that arise when adapting it to other domains.


While MCP enables powerful AI-assisted workflows, recent research highlights the importance of carefully designed tool boundaries when connecting AI systems to real-world resources.

🔐 Known MCP Security Pitfalls (For Future Reference)

Recent security research has highlighted prompt-injection vulnerabilities in an official reference implementation of a Git-based Model Context Protocol (MCP) server. These findings are relevant to any system that connects AI agents to real tools such as filesystems, version control systems, or execution environments.

Importantly, these issues are not inherent to MCP as a concept, but arise from how tool boundaries and execution privileges are implemented.

Core Issue

The vulnerabilities occur when AI-generated or user-influenced text is passed to real system tools without sufficient validation, isolation, or sandboxing. In the reported case, this included:

  • Insufficient validation of repository paths.
  • Inadequate sanitisation of arguments passed to Git commands.
  • Broader filesystem access than strictly necessary.

As a result, an attacker could potentially trigger unintended Git operations, read or delete arbitrary files, or load sensitive data into an AI model’s context.
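The first two weaknesses have well-known stdlib-level mitigations. The sketch below is a minimal illustration of the general technique, not the patched server's actual code: resolve every requested path and reject anything that escapes an allowed root, and allowlist Git subcommands while using the `--` separator so attacker-supplied paths cannot be parsed as options.

```python
from pathlib import Path

# Hypothetical capability scope: only read-only Git subcommands.
ALLOWED_SUBCOMMANDS = {"status", "log", "diff"}


def resolve_repo_path(root: Path, requested: str) -> Path:
    """Resolve `requested` under `root`, rejecting traversal attempts."""
    candidate = (root / requested).resolve()
    if not candidate.is_relative_to(root.resolve()):
        raise PermissionError(f"path escapes allowed root: {requested}")
    return candidate


def build_git_command(subcommand: str, paths: list[str]) -> list[str]:
    """Build an argument vector for a validated, allowlisted Git call."""
    if subcommand not in ALLOWED_SUBCOMMANDS:
        raise PermissionError(f"subcommand not allowed: {subcommand}")
    # "--" tells git to treat everything after it as pathspecs,
    # so a path like "--output=/tmp/x" cannot be parsed as an option.
    return ["git", subcommand, "--", *paths]
```

Note that `Path.is_relative_to` requires Python 3.9+; older code would compare resolved path prefixes by hand.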

This represents a shift where prompt injection becomes a tool-level security issue, rather than a purely language-level concern.

Key Takeaway

AI agents should be treated as untrusted intermediaries when connected to real tools. Security boundaries must be enforced by the surrounding system, not delegated to the AI’s behaviour.

Design Implications for MCP-like Systems

  • Strict separation between suggestion and execution: AI components should propose actions, not perform them directly.
  • Schema-first validation: All AI-generated outputs should be validated against explicit schemas or typed constraints before being accepted.
  • Capability scoping: Tool access should be narrowly defined.
  • No implicit filesystem or shell access: All interactions should be mediated through controlled interfaces.
  • Human-in-the-loop for destructive actions: Explicit confirmation should be required.
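Taken together, the implications above form a propose → validate → confirm → execute pipeline. A minimal sketch, with all names (`CAPABILITIES`, `ProposedAction`, `execute`) purely illustrative:

```python
from dataclasses import dataclass, field
from typing import Any

# Capability scoping: the only tools the agent may propose, each
# flagged as destructive or not. Entries here are hypothetical.
CAPABILITIES = {
    "read_docs": {"destructive": False},
    "delete_branch": {"destructive": True},
}


@dataclass(frozen=True)
class ProposedAction:
    """What the AI emits: a proposal, never a direct side effect."""
    tool: str
    args: dict[str, Any] = field(default_factory=dict)


def execute(action: ProposedAction, *, confirmed: bool = False) -> str:
    # Validation is enforced by the surrounding system, not the model.
    spec = CAPABILITIES.get(action.tool)
    if spec is None:
        raise PermissionError(f"tool not in capability scope: {action.tool}")
    # Human-in-the-loop: destructive tools need explicit confirmation.
    if spec["destructive"] and not confirmed:
        raise RuntimeError(f"{action.tool!r} requires explicit confirmation")
    return f"executed {action.tool}"
```

In a real system, `args` would additionally be validated against a per-tool schema before `execute` runs anything; the sketch only shows the capability and confirmation gates.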

🧩 Initial Design Thoughts

  1. Identifying which project artifacts are most valuable to expose.
  2. Ensuring exposed context remains accurate as the project evolves.
  3. Maintaining a clear boundary between exploratory tooling and production code.

❓ Open Questions

  • How granular should MCP-exposed context be?
  • How can schema and documentation updates be kept in sync?
  • What safeguards prevent outdated or misleading AI guidance?

📌 Next Steps

Possible next steps include deeper analysis of MCP server architecture, experimentation with small prototypes, and evaluating how such an approach could complement existing documentation workflows.


✍️ Authored by: Doby Baxter
🧠 Category: AI & MCP Exploration Notes (WIP)
