The Most Dangerous Shortcut in AI
You can point GPT-4 at your catalog and say "fix it." For ten products, it works. For five thousand, it breaks in ways you won't notice until the damage is done.

The Illusion
The pitch is seductive.
"Just use the LLM. Point it at your catalog. Tell it to fix everything."
For a one-off task (rewriting ten product descriptions), it works fine. The output is good. Sometimes better than what a human would write.
So why build anything else?
The Tension
Ask GPT-4 to "improve" the same product description twice. You'll get two different outputs. Ask it to fill "material" for 200 products and you'll get "100% cotton," "Cotton," "cotton fabric," and "Pure cotton," all for the same material.
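The usual fix for this drift is to canonicalize free-text output against an allowed vocabulary instead of trusting it raw. A minimal sketch, with an invented `canonicalize_material` helper and a toy vocabulary (not any real product's API):

```python
# Hypothetical sketch: collapse free-text LLM output to a canonical value,
# so "100% cotton", "Cotton", and "cotton fabric" all become "cotton".
import re
from typing import Optional

CANONICAL_MATERIALS = {"cotton", "polyester", "wool", "linen"}  # illustrative vocabulary

def canonicalize_material(raw: str) -> Optional[str]:
    """Return the canonical material, or None if ambiguous (flag for review)."""
    tokens = re.findall(r"[a-z]+", raw.lower())
    matches = CANONICAL_MATERIALS.intersection(tokens)
    # Exactly one recognized material -> canonical value; otherwise punt to a human.
    return matches.pop() if len(matches) == 1 else None

for raw in ["100% cotton", "Cotton", "cotton fabric", "Pure cotton"]:
    print(canonicalize_material(raw))  # "cotton" every time
```

Anything the normalizer can't resolve (a blend, an unknown fiber) returns None and gets routed to review rather than written to the catalog.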
No audit trail. If you pipe output directly into your catalog, you have no record of what changed, why, or who approved it. Enterprise brands need change logs. They need rollback. They need to know which products were modified and what the previous values were.
No safety net. An unconstrained LLM might decide to "improve" your product title. Or rewrite a legally required compliance field. Or change pricing-adjacent data. Without boundaries, there's no way to prevent this.
No composability. Direct LLM calls don't compose. You can't say "only fix products over $50" or "don't touch hero SKUs" or "auto-approve low-risk changes but flag high-risk ones."
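Constraints like those compose naturally once they're expressed as code rather than prompt text. A minimal sketch, assuming an invented `Product` record and predicate helpers (names are illustrative, not a real system's API):

```python
# Hypothetical sketch: eligibility rules as composable predicates that gate
# which products an automated model may touch.
from dataclasses import dataclass

@dataclass
class Product:
    sku: str
    price: float
    is_hero: bool  # hero SKUs are off-limits to automation

def over_price(threshold: float):
    """Only products above a price threshold."""
    return lambda p: p.price > threshold

def not_hero(p: Product) -> bool:
    """Never touch hero SKUs."""
    return not p.is_hero

def all_of(*preds):
    """Combine predicates: a product must satisfy every rule."""
    return lambda p: all(pred(p) for pred in preds)

# "Only fix products over $50, and don't touch hero SKUs."
eligible = all_of(over_price(50), not_hero)

catalog = [Product("A1", 80.0, False), Product("A2", 120.0, True), Product("A3", 30.0, False)]
to_fix = [p for p in catalog if eligible(p)]  # only A1 qualifies
```

Because each rule is a plain function, "auto-approve low-risk, flag high-risk" is just another predicate layered on top.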
The Turning Point
The shortcut works until it doesn't. And when it doesn't, the failure is silent.
You won't see the inconsistent values until a customer complains. You won't notice the overwritten compliance field until an audit. You won't catch the hallucinated GTIN until Google Shopping rejects your feed.
The danger isn't that the LLM is wrong. The danger is that it's wrong in ways that look right.
The Risk
Every catalog team that takes the shortcut eventually hits the same wall. The first batch looks great. The second batch introduces drift. By the fifth batch, the catalog has more inconsistencies than it started with, but now they're harder to find because they look intentional.
This is the most dangerous shortcut in AI: the one that works just well enough to delay the reckoning.
The Principle
The LLM provides intelligence. The system provides safety.
In EKOM, the LLM is one component inside a structured pipeline. It generates proposed attribute values, but those proposals are validated against the canonical schema, scored for confidence, wrapped in a Patch object, and queued for approval.
The conversational layer translates intent into structured constraints. The engine executes within those constraints. The audit log records everything.
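The propose-validate-score-queue flow above can be sketched in a few lines. This is an illustration of the pattern, not EKOM's actual implementation; the `Patch` fields, the schema, the protected-field list, and the 0.9 auto-approval threshold are all assumptions:

```python
# Hypothetical sketch: an LLM proposal is validated against the schema,
# blocked on protected fields, wrapped in a Patch, and routed for approval.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

SCHEMA = {"material": {"cotton", "wool"}, "color": {"red", "blue"}}  # canonical values
PROTECTED_FIELDS = {"title", "compliance_statement", "price"}  # never auto-edited

@dataclass
class Patch:
    sku: str
    field_name: str
    old_value: str      # previous value, kept for rollback
    new_value: str
    confidence: float
    status: str = "pending_approval"
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def propose_patch(sku: str, field_name: str, old: str, new: str,
                  confidence: float) -> Optional[Patch]:
    if field_name in PROTECTED_FIELDS:
        return None  # safety net: the model may not touch these fields
    if field_name in SCHEMA and new not in SCHEMA[field_name]:
        return None  # reject values outside the canonical schema
    patch = Patch(sku, field_name, old, new, confidence)
    if confidence >= 0.9:
        patch.status = "auto_approved"  # low-risk changes pass through
    return patch  # everything that survives is logged and queued
```

Nothing reaches the catalog except through a Patch, so every change carries its old value, its confidence, and a timestamp: the audit trail and the rollback path fall out of the architecture for free.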
Intelligence without structure is a liability. Structure without intelligence is a spreadsheet. You need both.
The Future
The question was never "can AI fix my catalog?" It can.
The question is: "Can I trust the system that does it?"
That trust requires more than a model. It requires architecture.