MCP and Product AI: How the Model Context Protocol Is Transforming Product Knowledge Integration

The Model Context Protocol (MCP) is fast becoming the standard way AI agents connect to external knowledge. Here's what it means for B2B product catalogs — and how to expose your product knowledge as an MCP server.

Axoverna Team
14 min read

Something quiet happened in the AI ecosystem over the past year that most B2B software teams haven't fully processed yet: the emergence of a standard way for AI models to talk to external tools and data sources.

The Model Context Protocol — MCP, originally introduced by Anthropic — has moved from experimental specification to de facto standard with surprising speed. Claude, GPT-4o, Gemini, and virtually every other serious AI assistant now support MCP-compatible tool calling in some form. Enterprise deployments of AI copilots — sales assistants, procurement agents, technical support bots — increasingly expect their knowledge sources to expose a well-defined interface.

For B2B companies with product catalogs, this is a significant shift. It means your product knowledge isn't just a backend asset for your own chatbot widget. It becomes a queryable service that any MCP-compatible AI agent can use — with your access controls, your freshness guarantees, and your retrieval quality, served through a standard interface.

This article explains what MCP is, why it matters for product data specifically, and what a production-grade MCP server for a product knowledge base actually looks like.


What Is MCP, Actually?

MCP is an open protocol that defines how an AI model (or agent) communicates with external servers to retrieve information, execute actions, and receive structured data. Think of it as a common language that lets a general-purpose AI assistant query your product catalog the same way it might query a calendar, a CRM, or a code repository.

At its core, MCP defines three primitives:

  • Resources: Addressable content the server exposes (a product record, a specification sheet, a catalog category)
  • Tools: Functions the AI can invoke (search products, look up a SKU, compare two models)
  • Prompts: Reusable prompt templates the server recommends for common workflows

When a user asks their AI assistant "which of your pumps can handle slurries above 80°C?", and that assistant has an MCP connection to your product knowledge server, the query goes through a clean, authenticated API call — not a scrape, not a hard-coded function, not a bespoke integration. The assistant calls your search_products tool with the query, gets back structured results, and incorporates them into its answer.

The protocol handles authentication, pagination, error handling, and context limits in a standardized way. Your product knowledge server just needs to implement the spec.
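
To make that concrete, here is a sketch of what the underlying JSON-RPC exchange looks like. The field names follow the MCP tools/call shape; the product values and result text are purely illustrative:

```typescript
// Illustrative MCP tools/call exchange (product values are hypothetical).
// The agent sends a JSON-RPC request naming the tool and its arguments:
const request = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "search_products",
    arguments: { query: "pumps for slurries above 80°C", limit: 5 },
  },
};

// The server answers with structured content the agent folds into its reply:
const response = {
  jsonrpc: "2.0",
  id: 7,
  result: {
    content: [
      {
        type: "text",
        text: "PX-510 — slurry pump, max media temp 95°C, EPDM seals …",
      },
    ],
    isError: false,
  },
};
```

The agent never sees your database schema or your retrieval pipeline — only the tool name, its input schema, and the structured result.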


Why Product Knowledge Is a Natural Fit for MCP

Not all data sources benefit equally from MCP. Product catalogs have several properties that make them particularly well-suited:

Products Are Semantically Rich but Structurally Varied

A lighting fixture has lumens, color temperature, beam angle, IP rating, and wattage. A hydraulic hose has burst pressure, temperature range, inner diameter, and end fitting compatibility. A chemical additive has CAS number, hazard class, recommended concentration, and application temperature.

No fixed relational schema handles all of this cleanly. But an MCP tool can expose a flexible search interface that understands the semantic structure of each category and returns the right fields for the right product type. The AI agent doesn't need to know your schema — it asks a question in natural language and gets back structured, relevant data.
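
One way a server can return "the right fields for the right product type" is a per-category attribute map. This is a minimal sketch — the category names and attribute lists are hypothetical, not a prescribed schema:

```typescript
// Hypothetical per-category attribute maps: the server, not the agent,
// knows which fields matter for each product type.
const categoryFields: Record<string, string[]> = {
  lighting: ["lumens", "color_temperature", "beam_angle", "ip_rating", "wattage"],
  hydraulic_hose: ["burst_pressure", "temperature_range", "inner_diameter", "end_fitting"],
  chemical_additive: ["cas_number", "hazard_class", "recommended_concentration"],
};

// Project a raw product record down to the fields relevant for its category;
// unknown categories fall back to returning everything.
function relevantFields(
  category: string,
  record: Record<string, unknown>
): Record<string, unknown> {
  const fields = categoryFields[category] ?? Object.keys(record);
  return Object.fromEntries(
    fields.filter((f) => f in record).map((f) => [f, record[f]])
  );
}
```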

Product Queries Are Intent-Driven

As we've covered in our analysis of query intent classification, the queries people ask about products come in distinct types: comparison, compatibility, specification lookup, application guidance, and availability checks. An MCP server can expose specialized tools for each intent type rather than forcing every query through a generic search endpoint.

This matters because agents can choose the right tool for the job. A question like "is the V402 compatible with 3/4-inch NPT fittings?" is a compatibility check — the agent calls check_compatibility, not search_products. The result is more precise and the context window usage is more efficient.
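
In practice the model itself does this tool selection, guided by your tool descriptions. But as a server-side fallback or sanity check, intent routing can be sketched with simple heuristics — a production system would use a trained classifier, as our intent article discusses; the patterns below are illustrative only:

```typescript
// A minimal, heuristic intent router (illustrative; not a substitute
// for a trained intent classifier).
type ToolName = "check_compatibility" | "compare_products" | "search_products";

function routeIntent(query: string): ToolName {
  const q = query.toLowerCase();
  // Compatibility phrasing: "compatible with", "work with", "fits on" …
  if (/\bcompatib|work with|fit(s)? (with|on)\b/.test(q)) return "check_compatibility";
  // Comparison phrasing: "vs", "versus", "compare", "difference between"
  if (/\b(vs\.?|versus|compare|difference between)\b/.test(q)) return "compare_products";
  // Default: generic hybrid search
  return "search_products";
}
```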

Product Data Changes Frequently

Prices update. Products get discontinued. Specifications get revised. Stock levels change daily. An MCP server that wraps a live product knowledge system naturally handles freshness — every query hits your current data, not a static export. We covered the challenges of catalog sync and RAG freshness in detail; MCP makes the answer simple: the agent asks the server at query time, and the server answers from live data.


The Architecture: Product AI as an MCP Server

Here's what a production MCP server for a B2B product catalog looks like architecturally:

AI Agent (Claude, GPT-4o, etc.)
        │
        │  MCP protocol (JSON-RPC over HTTP/SSE)
        ▼
┌──────────────────────────────────────────┐
│           MCP Product Server             │
│                                          │
│  ┌──────────────────────────────────┐    │
│  │         Tool Handlers            │    │
│  │  search_products                 │    │
│  │  get_product_detail              │    │
│  │  check_compatibility             │    │
│  │  compare_products                │    │
│  │  find_alternatives               │    │
│  │  get_documentation               │    │
│  └──────────────────────────────────┘    │
│                  │                       │
│                  ▼                       │
│  ┌──────────────────────────────────┐    │
│  │      Retrieval Engine            │    │
│  │  Hybrid search (BM25 + vectors)  │    │
│  │  Metadata filtering              │    │
│  │  Cross-encoder reranking         │    │
│  └──────────────────────────────────┘    │
│                  │                       │
│                  ▼                       │
│  ┌──────────────────────────────────┐    │
│  │      Knowledge Store             │    │
│  │  Vector index (product chunks)   │    │
│  │  BM25 index                      │    │
│  │  Product catalog DB              │    │
│  └──────────────────────────────────┘    │
└──────────────────────────────────────────┘
        │
        ▼
   Live catalog
   (PIM / ERP / e-commerce platform)

The MCP layer is relatively thin — its job is to translate MCP tool calls into retrieval operations and format the results in a way the agent can use. The heavy lifting remains in the retrieval engine, exactly as we've described in our coverage of hybrid search and reranking.
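
That thinness can be seen in code: the MCP layer is essentially a dispatch table from tool name to handler. The handler and retrieval function names below are hypothetical stubs standing in for the real retrieval engine:

```typescript
// Sketch of the thin MCP layer: a dispatch table from tool name to handler.
// The retrieval functions are stubs; the real work happens below this layer.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const handlers: Record<string, ToolHandler> = {
  search_products: async (args) => runHybridSearch(String(args.query)),
  get_product_detail: async (args) => lookupProduct(String(args.identifier)),
};

async function dispatch(tool: string, args: Record<string, unknown>): Promise<string> {
  const handler = handlers[tool];
  if (!handler) throw new Error(`Unknown tool: ${tool}`);
  return handler(args);
}

// Stub retrieval functions so the sketch is self-contained.
async function runHybridSearch(query: string): Promise<string> {
  return `results for: ${query}`;
}
async function lookupProduct(id: string): Promise<string> {
  return `detail for: ${id}`;
}
```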


Defining Your Tool Set

The most consequential design decision is which tools to expose. Too few, and the agent is forced into awkward workarounds. Too many, and you create noise that confuses the model's tool selection.

For a typical B2B product catalog, a well-designed MCP server exposes five to eight tools:

search_products

The workhorse. Handles natural language queries and returns a ranked list of matching products with key attributes.

{
  name: "search_products",
  description: `Search the product catalog using natural language.
    Returns ranked results with product names, SKUs, and key specifications.
    Use for exploratory queries, application-based searches, and when
    you don't have an exact part number.`,
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Natural language product query"
      },
      category: {
        type: "string",
        description: "Optional category filter (e.g., 'pumps', 'connectors')"
      },
      limit: {
        type: "number",
        description: "Number of results to return (default 5, max 20)",
        default: 5
      }
    },
    required: ["query"]
  }
}

get_product_detail

Retrieves the full specification, documentation links, and metadata for a specific product by SKU or internal ID.

{
  name: "get_product_detail",
  description: `Retrieve complete product information for a specific SKU or product ID.
    Returns full specifications, technical parameters, documentation links,
    and related product suggestions. Use when you have an exact identifier.`,
  inputSchema: {
    type: "object",
    properties: {
      identifier: {
        type: "string",
        description: "SKU, part number, or internal product ID"
      }
    },
    required: ["identifier"]
  }
}

check_compatibility

A specialized tool for compatibility queries — by far the most common failure mode for generic product search. When a buyer asks "will the FX-200 work with my existing 22mm push-fit fittings?", a compatibility check tool can reason explicitly about connection standards, material compatibility, and dimensional fit.

{
  name: "check_compatibility",
  description: `Check whether two products are compatible with each other,
    or whether a product is compatible with a specified standard, fitting type,
    or application requirement. Returns compatibility verdict with explanation.`,
  inputSchema: {
    type: "object",
    properties: {
      product_id: {
        type: "string",
        description: "SKU or ID of the product to check"
      },
      compatible_with: {
        type: "string",
        description: "What to check compatibility against: another SKU, a standard (e.g. 'DIN EN 10226'), or a specification"
      }
    },
    required: ["product_id", "compatible_with"]
  }
}

compare_products

Retrieves structured, side-by-side comparison data for two or more products. More useful than asking the agent to synthesize a comparison from multiple get_product_detail calls — you can control which attributes appear in the comparison and ensure the agent sees a clean, aligned view.
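
The alignment logic itself is simple; the value is in controlling it server-side. A minimal sketch, assuming product records carry a flat attribute map (the `Product` shape here is illustrative):

```typescript
// Hypothetical sketch: build an aligned comparison table from two product
// records, keeping only attributes present on both so rows line up.
type Product = { sku: string; attrs: Record<string, string> };

function compareProducts(a: Product, b: Product): Array<[string, string, string]> {
  const shared = Object.keys(a.attrs).filter((k) => k in b.attrs);
  return shared.map((k) => [k, a.attrs[k], b.attrs[k]]);
}
```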

find_alternatives

Given a product that's out of stock, discontinued, or outside budget, returns functionally equivalent alternatives. This requires your knowledge base to have encoded product relationships — something that emerges naturally from a well-structured product knowledge ingestion pipeline.
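
"Encoded product relationships" can be as simple as explicit replacement/equivalence edges, resolved at query time rather than inferred from embedding similarity alone. The relation data below is invented for illustration:

```typescript
// Illustrative: alternatives resolved from explicitly encoded relationships
// rather than similarity search alone. SKUs and relations are hypothetical.
type Relation = { from: string; to: string; kind: "replacement" | "equivalent" };

const relations: Relation[] = [
  { from: "P-100", to: "P-100B", kind: "replacement" },
  { from: "P-100", to: "Q-220", kind: "equivalent" },
];

function findAlternatives(sku: string): string[] {
  return relations.filter((r) => r.from === sku).map((r) => r.to);
}
```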

get_documentation

Returns technical documentation, installation guides, safety data sheets, or application notes for a product. Especially valuable for complex products where the agent needs to reason about installation or usage, not just specifications.


What This Unlocks: The Agent Distribution Model

Here's the strategic implication that often gets missed in technical discussions of MCP: it decouples your product knowledge from any specific AI front-end.

Before MCP, if you wanted your product data accessible to an AI assistant, you had one real option: build a chat widget, integrate it with your knowledge base, embed it in your website. The AI lived inside your product experience.

With MCP, your product knowledge becomes a service that AI agents consume. A sales rep using an enterprise AI copilot can access your catalog through their existing assistant — without switching tabs, without learning a new UI, without installing your widget. A procurement system can pull compatibility checks during automated RFQ processing. A technical support agent can query your specs to answer warranty and compatibility questions. An ERP integration can use your product knowledge to suggest substitutions during purchase order entry.

The distribution surface explodes — and it's all going through your controlled, authenticated MCP server. You see what's being queried. You enforce your access policies. You maintain the freshness and accuracy of the underlying data.


Security and Access Control

Opening your product knowledge to external AI agents introduces security considerations that a website chat widget doesn't face.

Authentication: MCP servers authenticate clients using standard mechanisms — API keys, OAuth 2.0, or mutual TLS for enterprise deployments. Every MCP connection should be authenticated; unauthenticated access is appropriate only if your entire catalog is public.

Authorization: Not all agents should see all products. A distributor's procurement system should see wholesale pricing; a retail customer's assistant should see retail pricing. A partner agent should see your full catalog; a prospect should see only the publicly listed SKUs. MCP servers handle this through session-scoped context — the authenticated identity determines which resources and tools are available.

async function handleSearchProducts(
  params: SearchParams,
  session: MCPSession
): Promise<MCPToolResult> {
  const accessLevel = session.claims.accessLevel  // 'public' | 'partner' | 'distributor'
  const results = await searchProducts({
    query: params.query,
    filters: {
      ...buildAccessFilter(accessLevel),
      category: params.category,
    },
    limit: params.limit ?? 5,
  })
  return formatSearchResults(results, accessLevel)
}
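
The handler above leaves `buildAccessFilter` undefined. One possible shape, assuming each catalog record carries a `visibility` field (an assumption — adapt to your own schema):

```typescript
// One possible shape for buildAccessFilter, assuming catalog records
// carry a `visibility` field. Higher tiers see everything below them.
type AccessLevel = "public" | "partner" | "distributor";

function buildAccessFilter(level: AccessLevel): { visibility: AccessLevel[] } {
  const visible: Record<AccessLevel, AccessLevel[]> = {
    public: ["public"],
    partner: ["public", "partner"],
    distributor: ["public", "partner", "distributor"],
  };
  return { visibility: visible[level] };
}
```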

Rate limiting and cost control: AI agents can be chatty. A single agentic workflow might call your MCP server dozens of times. Rate limit by client identity, log all calls, and set per-session token budgets if you're generating embeddings on the fly. Your retrieval costs need to be predictable.
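
A per-client limit can start as simply as a fixed-window counter keyed by identity. This is a sketch — a production deployment would back it with a shared store such as Redis rather than in-process memory:

```typescript
// A minimal fixed-window rate limiter keyed by client identity (sketch;
// production deployments would use a shared store across server instances).
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(private maxPerWindow: number, private windowMs: number) {}

  allow(clientId: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(clientId);
    // First call, or the previous window has expired: start a new window.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(clientId, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.maxPerWindow) return false;
    entry.count += 1;
    return true;
  }
}
```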

Input validation: The AI agent constructs the tool call parameters, but a malicious prompt injection could attempt to craft parameters that exfiltrate data outside your intended access control. Validate and sanitize all inputs server-side; never trust parameters that arrived from a language model.
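
What server-side validation looks like in practice — type checks, length caps, and clamped numeric ranges before anything reaches the retrieval engine. The specific limits here (500-character queries, max 20 results) are illustrative defaults, not requirements:

```typescript
// Server-side validation of tool parameters: never trust what the model
// sent. The limits below are illustrative.
function validateSearchParams(raw: unknown): { query: string; limit: number } {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("params must be an object");
  }
  const p = raw as Record<string, unknown>;
  if (typeof p.query !== "string" || p.query.length === 0 || p.query.length > 500) {
    throw new Error("query must be a non-empty string under 500 characters");
  }
  // Clamp limit into [1, 20]; default to 5 when absent or non-numeric.
  const limit =
    typeof p.limit === "number" ? Math.min(Math.max(1, Math.floor(p.limit)), 20) : 5;
  return { query: p.query, limit };
}
```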


Practical Implementation: Standing Up an MCP Server

If you're building this from scratch, the ecosystem tooling has matured substantially. Anthropic's MCP SDK provides TypeScript and Python implementations with type-safe tool registration.

A minimal product search server looks like this:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"
import { z } from "zod"
 
const server = new McpServer({
  name: "product-knowledge",
  version: "1.0.0",
})
 
server.tool(
  "search_products",
  "Search the product catalog using natural language",
  {
    query: z.string().describe("Natural language product query"),
    category: z.string().optional().describe("Optional category filter"),
    limit: z.number().min(1).max(20).default(5),
  },
  async ({ query, category, limit }) => {
    const results = await hybridProductSearch({ query, category, limit })
    return {
      content: [
        {
          type: "text",
          text: formatProductResults(results),
        },
      ],
    }
  }
)
 
server.tool(
  "get_product_detail",
  "Retrieve complete product information by SKU",
  {
    identifier: z.string().describe("SKU or part number"),
  },
  async ({ identifier }) => {
    const product = await getProductBySKU(identifier)
    if (!product) {
      return {
        content: [{ type: "text", text: `No product found for identifier: ${identifier}` }],
        isError: true,
      }
    }
    return {
      content: [{ type: "text", text: formatProductDetail(product) }],
    }
  }
)
 
const transport = new StdioServerTransport()
await server.connect(transport)

For production deployments, you'll swap StdioServerTransport for an HTTP/SSE transport to support concurrent connections, and add the authentication layer described above.


The Evaluation Question

One thing MCP makes easier that often gets overlooked: evaluating your retrieval quality at scale.

When every query goes through a structured MCP interface, you have a clean log of every tool call, its parameters, and the results returned. That log is a goldmine for retrieval evaluation — you can sample queries, manually annotate the quality of results, and build automated regression tests that run against your retrieval stack on every catalog update.

This contrasts with the traditional chat widget model, where conversations are long-form and hard to decompose into measurable retrieval signals. The structured tool-call model gives you labeled input/output pairs automatically.
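
A sketch of what those labeled pairs look like in a tool-call log, plus a deterministic sampler for pulling records into a manual annotation queue. The field names are illustrative:

```typescript
// Sketch: each MCP tool call yields a structured record suitable for
// retrieval evaluation. Field names are illustrative.
type ToolCallLog = {
  tool: string;
  params: Record<string, unknown>;
  resultIds: string[]; // SKUs returned by the retrieval engine
  latencyMs: number;
  timestamp: number;
};

// Deterministically sample every nth call for manual annotation.
function sampleForAnnotation(logs: ToolCallLog[], everyNth: number): ToolCallLog[] {
  return logs.filter((_, i) => i % everyNth === 0);
}
```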

Pair this with an LLM-as-judge approach — using a capable model to score retrieval quality against a rubric — and you have a continuous evaluation pipeline that catches retrieval regressions before they affect real users. This is the operational maturity level that enterprise B2B deployments increasingly require.


Agentic Workflows That MCP Enables

Beyond simple query-response interactions, MCP unlocks multi-step agentic workflows that weren't practical before:

Automated RFQ processing: An agent receives a request for quotation email, parses the line items, calls check_compatibility for each item pair, calls find_alternatives for any out-of-stock items, and drafts a response with pricing and lead times — all without human intervention for standard requests.

Proactive substitution suggestion: An ERP system triggers an agent when a purchase order line item shows insufficient stock. The agent calls find_alternatives, filters by the original spec constraints, and creates a substitution proposal in the ERP — before the buyer even notices the shortage.

Technical pre-sales assistance: A sales rep describes a customer application in natural language to their AI copilot. The copilot runs search_products with application context, calls check_compatibility against the customer's existing equipment (retrieved from CRM), and returns a shortlist of qualified products with compatibility notes — in seconds, without the rep needing to know your full catalog.
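
The pre-sales workflow above can be sketched as a tool-call chain. The tool functions here are local stubs with made-up data — in a real agent each call would be an MCP tools/call request against your server:

```typescript
// Sketch of the pre-sales workflow as a tool-call chain. Tool functions
// are stubs; a real agent would issue MCP tools/call requests instead.
async function preSalesShortlist(
  application: string,
  existingEquipment: string
): Promise<string[]> {
  const candidates = await searchProducts(application);          // step 1: search by application
  const shortlist: string[] = [];
  for (const sku of candidates) {
    const ok = await checkCompatibility(sku, existingEquipment); // step 2: compatibility check
    if (ok) shortlist.push(sku);                                 // step 3: keep qualified SKUs
  }
  return shortlist;
}

// Stubs with invented data so the sketch runs standalone.
async function searchProducts(q: string): Promise<string[]> {
  return ["A-1", "B-2"];
}
async function checkCompatibility(sku: string, target: string): Promise<boolean> {
  return sku !== "B-2"; // pretend B-2 fails the compatibility check
}
```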

These workflows aren't hypothetical — they're happening now in early adopter B2B environments. The common thread: they require structured, reliable, authenticated access to product knowledge. MCP is the interface that makes product AI a first-class integration target rather than a siloed widget.


Where This Is Heading

MCP is a symptom of a larger shift: AI is moving from monolithic applications to composable systems. Instead of one AI that knows everything, we're moving toward orchestration layers that route queries to specialized knowledge services — each good at its domain, exposing a clean interface, maintainable independently.

Product knowledge is one of the most valuable domain services a B2B company can expose. Your catalog, your specifications, your compatibility data — these aren't just marketing assets. They're the information that closes deals, prevents misorders, and reduces support costs. Making that information accessible to any AI agent your customers and sales teams use is the next logical step after building the product AI itself.

The companies that expose their product knowledge as well-designed MCP services in 2026 are the ones whose products will be recommended by AI copilots in 2027. Distribution through AI agents is the new SEO — and the foundations are being laid right now.


Build on a Retrieval Stack Designed for MCP

Axoverna's product AI is architected around the principles described in this article — hybrid retrieval, intelligent chunking, reranking, and live catalog sync. Our MCP server integration exposes your product knowledge to any MCP-compatible AI agent with authentication, access control, and the retrieval quality you'd expect from a purpose-built product AI system.

Book a demo to see how your product catalog looks from an MCP tool call, or start a free trial and connect your first AI agent to your product knowledge in under an hour.

Ready to get started?

Turn your product catalog into an AI knowledge base

Axoverna ingests your product data, builds a semantic search index, and gives you an embeddable chat widget — in minutes, not months.