Architecture · 2026-02-17 · 5 min

The Architecture Behind EKOM's AI Search Agent

A structural walkthrough of how EKOM's system actually works — from catalog sync to deployed patch. Four layers, two agents, one rule: agents propose, engines execute.

EKOM has two agents, a canonical engine, and a strict boundary between them. This post walks through the architecture — what each layer does, what flows between them, and why the separation exists.

## The Pipeline

At the highest level, EKOM follows a five-step pipeline, with a code sketch of the Detect step after the walkthrough:

Read — Sync product data from the source catalog (Shopify, CSV, feed) into EKOM's canonical schema. Every product is normalized into a single typed object with defined attributes, validation rules, and Schema.org mappings.

Detect — Scan the normalized catalog for gaps. Missing attributes, weak descriptions, incomplete structured data, Schema.org coverage issues. Each gap is classified by type, severity, and the effort required to fix it.

Patch — Generate structured fixes for detected gaps. Each fix is a Patch object: a versioned diff with before/after values, a confidence score, a reason code (schema_fix, missing_attr, clarity, compliance), and a risk level.

Approve — Present patches for human review. Individual or batch approval. Low-risk changes can be auto-approved by policy. High-risk changes require manual review. Every approval decision is logged.

Deploy — Write approved patches to the catalog. Every deployment is versioned with a rollback pointer. The previous values are preserved. The audit trail records the complete lifecycle.
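
To make the Read-to-Detect handoff concrete, here's a minimal TypeScript sketch. Every name in it is an assumption for illustration (EKOM hasn't published its schema); the point is that detection runs over normalized, typed objects, not raw feed rows.

```ts
// Illustrative shapes only; these names are assumptions, not EKOM's real API.
type CanonicalProduct = {
  id: string;
  // Normalized against the canonical schema; null marks a missing attribute.
  attributes: Record<string, string | null>;
};

type Gap = {
  productId: string;
  attribute: string;
  reason: "schema_fix" | "missing_attr" | "clarity" | "compliance";
  severity: "low" | "medium" | "high";
};

// Detect: scan the normalized catalog and classify what's missing.
function detectGaps(catalog: CanonicalProduct[]): Gap[] {
  return catalog.flatMap((product) =>
    Object.entries(product.attributes)
      .filter(([, value]) => value === null)
      .map(([attribute]) => ({
        productId: product.id,
        attribute,
        reason: "missing_attr" as const,
        severity: "medium" as const,
      }))
  );
}
```

In a real classifier, reason and severity would vary per gap; the sketch pins them to keep the code short.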

## The Two Agents

EKOM's intelligence is split across two agents, each with a distinct scope:

The Enrichment Agent reads your catalog, detects attribute gaps, and generates patches. It operates within the canonical schema — it can only propose changes to known, validated attributes. It can't invent fields. It can't touch protected attributes. It can't bypass scope filters.

The Enrichment Agent is the workhorse. When a merchant says "fill in missing materials for my apparel line," the Enrichment Agent:

1. Resolves the scope (apparel products with missing "material" attribute)
2. Reads existing product data for context
3. Generates proposed values with confidence scores
4. Wraps each proposal in a Patch object
5. Queues the patches for approval

It never writes to the catalog. It writes to the patch queue.
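
In code, that contract might look like the sketch below. The Patch fields come straight from the pipeline description above; the function names and the inferMaterial stub are hypothetical.

```ts
// A Patch carries the fields described above; the shape itself is a sketch.
type Patch = {
  productId: string;
  attribute: string;
  before: string | null;
  after: string;
  confidence: number; // 0..1, attached at generation time
  reason: "schema_fix" | "missing_attr" | "clarity" | "compliance";
  risk: "low" | "medium" | "high";
  status: "proposed" | "approved" | "deployed" | "rejected" | "rolled_back";
};

// Stub standing in for the model-backed generation step.
function inferMaterial(description: string): string {
  return /cotton/i.test(description) ? "cotton" : "unknown";
}

// The agent's output is a queue of proposals, never a catalog write.
function proposeMaterialPatches(
  products: { id: string; material: string | null; description: string }[]
): Patch[] {
  return products
    .filter((p) => p.material === null)
    .map((p) => ({
      productId: p.id,
      attribute: "material",
      before: null,
      after: inferMaterial(p.description),
      confidence: 0.9, // placeholder; real scores come from the model
      reason: "missing_attr" as const,
      risk: "low" as const,
      status: "proposed" as const,
    }));
}
```

Note what's absent: nothing in the return type can reach the catalog. The agent's whole output surface is Patch[].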

The Visibility Agent observes how AI search platforms treat your products. It runs synthetic queries against ChatGPT, Gemini, Perplexity, and other AI platforms to see whether your products appear in responses. These signals are directional, not absolute — there is no analytics API for LLM citations. But directional signal is a step change from zero signal.

The Visibility Agent is read-only. It observes. It doesn't modify catalog data. Its output is synthetic snapshots and opportunity extraction — not traffic metrics. It tells you where you're invisible, not how many people saw you.
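
A snapshot could be shaped like the sketch below. The field names and the platform union are assumptions; what matters is that the record stores presence and absence per synthetic query, and nothing that looks like an analytics metric.

```ts
// Hypothetical snapshot shape; records presence/absence, not analytics.
type VisibilitySnapshot = {
  query: string;              // synthetic query sent to the platform
  platform: "chatgpt" | "gemini" | "perplexity";
  capturedAt: string;         // ISO-8601 timestamp
  productsSeen: string[];     // in-scope products cited in the response
  productsMissing: string[];  // in-scope products that never appeared
};

// Opportunity extraction: products that appear in no snapshot at all.
function invisibleProducts(snapshots: VisibilitySnapshot[]): string[] {
  const seen = new Set(snapshots.flatMap((s) => s.productsSeen));
  return Array.from(
    new Set(snapshots.flatMap((s) => s.productsMissing))
  ).filter((id) => !seen.has(id));
}
```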

## The Canonical Engine

Between the agents and the catalog sits the engine — the validation and execution layer.

The engine does three things, sketched in code after the list:

1. Schema enforcement. Every product in EKOM is normalized against a canonical schema with 24+ attribute rules across core and vertical modules. When an agent proposes a patch, the engine checks: does this attribute exist in the schema? Is the proposed value well-formed? Does it pass validation rules?

2. Policy enforcement. Merchants set policies — protected fields, scope restrictions, confidence thresholds, auto-approval rules. Policies are evaluated before generation, not after. If titles are protected, no agent will attempt a title change — the constraint is enforced at the point of creation, not filtered retroactively.

3. Deployment. Only the engine writes to the catalog. It accepts approved patches, applies them to the normalized product, generates the platform-specific output (Shopify metafields, feed columns, JSON-LD), and pushes the update. Every write is versioned. Every write has a rollback pointer.
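
Here's a reduced sketch of that gate, reusing the Patch shape from the Enrichment Agent example. The function names are invented, and GatePolicy is trimmed to the fields this sketch needs.

```ts
// Trimmed policy: just what this gate needs. Real policies carry more.
type GatePolicy = {
  protectedFields: Set<string>;
  confidenceThreshold: number; // floor for auto-approval
};

type Deployment = {
  patch: Patch;
  version: number;           // every write is versioned
  rollbackTo: string | null; // pointer to the previous value
};

// Returns a rejection reason, or null if the patch is admissible.
function enforce(
  patch: Patch,
  schema: Set<string>,
  policy: GatePolicy
): string | null {
  if (!schema.has(patch.attribute)) return "unknown attribute";            // schema enforcement
  if (policy.protectedFields.has(patch.attribute)) return "protected field"; // policy enforcement
  return null;
}

// Auto-approval: policy decides, by risk and confidence, before any human.
function autoApprovable(patch: Patch, policy: GatePolicy): boolean {
  return patch.risk === "low" && patch.confidence >= policy.confidenceThreshold;
}

// Only the engine writes; the previous value rides along for rollback.
function deployPatch(patch: Patch, currentVersion: number): Deployment {
  return {
    patch: { ...patch, status: "deployed" },
    version: currentVersion + 1,
    rollbackTo: patch.before,
  };
}
```

One caveat: per the policy-enforcement point above, the real system evaluates these constraints before generation; the post-hoc check here is just the simplest way to show the rule.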

## What Flows Between Layers

The architecture is defined by the objects that flow between layers; type sketches follow the list:

Job — Created by the conversational layer. Defines scope (which products), intent (what to do), and constraints (policies, thresholds). Jobs are what agents execute against.

Patch — Created by agents. A versioned diff for a single product attribute. Contains before/after values, confidence score, reason code, risk level, and status (proposed, approved, deployed, rejected, rolled back).

Approval — Created by humans or auto-approval policies. Links a patch (or batch of patches) to a decision and a decision-maker. Logged to the audit trail.

Policy — Created by merchants. Defines constraints that agents and the engine must respect. Protected fields, scope filters, confidence thresholds, auto-approval rules. Policies are checked at generation time, not retroactively.
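
The other three objects might look like this in TypeScript. Patch was sketched earlier; these shapes are assumptions too, chosen to match the fields listed above.

```ts
// Hypothetical shapes for the remaining flow objects.
type Job = {
  scope: { collection?: string; missingAttribute?: string }; // which products
  intent: string;                                            // what to do
  constraints: Policy;                                       // how to behave
};

type Policy = {
  protectedFields: string[];   // agents never propose changes to these
  scopeFilters: string[];      // which products are in bounds
  confidenceThreshold: number; // floor for auto-approval
  autoApproveMaxRisk: "low" | "medium"; // highest risk a policy may approve
};

type Approval = {
  patchIds: string[];                // one patch or a batch
  decision: "approved" | "rejected";
  decidedBy: string;                 // reviewer, or an auto-approval policy id
  decidedAt: string;                 // logged to the audit trail
};
```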

## Why the Separation

The question people ask most often: why not just let the agent write directly to the catalog?

Because safety and auditability require separation of concerns.

The agent's job is intelligence — reading data, detecting gaps, generating proposals. It should be creative, flexible, and capable of handling ambiguous input.

The engine's job is integrity — validating proposals, enforcing policies, managing versions, ensuring reversibility. It should be strict, deterministic, and incapable of bypassing constraints.

If you merge these responsibilities, you get a system that is either too creative (unsafe writes) or too constrained (useless intelligence). The separation lets each layer excel at its job.

The agent proposes. The engine disposes. The human approves. The audit trail records. That's the architecture.