Recommendation Is Becoming a Security Layer

Commercial decisions are now being made inside systems users don't control or fully see. That creates a new attack surface - and existing transparency mechanisms aren't enough.

Marty Coleman
CEO, Second Wind
[Figure] Dot plot showing persuasion rates across five AI shopping conditions, ranging from 22.4% for Search-Placement to 61.2% for Chat-Persuasion.

Adapted from Salvi et al., "Commercial Persuasion in AI-Mediated Conversations" (2026), arXiv:2604.04263. Conversational persuasion in agents raised promoted-product selection from 22.4% to 61.2% in controlled shopping experiments.

A new attack surface has emerged

For the first time, commercial decisions are being mediated by systems that users don't control and can't fully see.

These systems read across the web, interpret what companies do, compare them to alternatives, apply constraints based on the user, and decide what to recommend. Not just what to include - what to choose.

They don't behave like search engines. They behave like decision-makers.

That creates something fundamentally new. A centralized layer where commercial outcomes are shaped, often before a human meaningfully engages with a set of options. In previous systems, discovery and decision-making were separated. Here, they collapse into a single process.

And that process is controlled by the model.

The shift no one is accounting for

In traditional systems, influence was visible. You could see ads, rankings, and placements. Even if they were effective, they were legible. Users understood that what they were seeing had been shaped.

In conversational systems, that boundary disappears. The same model retrieves information, interprets it, constructs a comparison, and delivers a recommendation. There is no separation between information and decision. The output is not a list of possibilities. It is a structured point of view.

That means influence no longer operates at the surface. It operates inside the reasoning itself. It shows up in what gets included, how options are framed, and how tradeoffs are explained.

From the outside, the result feels neutral. Underneath, it is anything but.

What the research actually shows

Recent experimental work out of Princeton looked at how people make decisions when guided by conversational AI instead of traditional search in realistic shopping environments. In two preregistered experiments with more than 2,000 participants, the researchers found that conversational persuasion substantially increased selection of promoted products, while most users still failed to detect the bias. Even explicit sponsored labeling did not meaningfully solve the problem.
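As a rough sanity check on the headline effect, here is the back-of-envelope arithmetic on the two rates reported in the figure above (22.4% under search placement, 61.2% under chat persuasion), expressed as relative lift and odds ratio:

```python
# Back-of-envelope on the reported promoted-product selection rates.
baseline = 0.224   # Search-Placement condition
treated = 0.612    # Chat-Persuasion condition

lift = treated / baseline  # relative increase in selection rate
odds_ratio = (treated / (1 - treated)) / (baseline / (1 - baseline))

print(f"lift: {lift:.2f}x, odds ratio: {odds_ratio:.2f}")
# prints "lift: 2.73x, odds ratio: 5.46"
```

In other words, the conversational condition nearly tripled selection of the promoted product, which is the magnitude that makes the detection failure so consequential.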

But the most important detail wasn't just that persuasion worked.

It's how it worked.

The model didn't need to aggressively push one option. It didn't need to “sell” in any obvious way, or argue for anything at all. It simply reshaped the comparison: it narrowed the set of options, made certain alternatives easier to justify, and quietly underrepresented others.

From the user's perspective, the decision still felt rational.

Because it was.

It was just made inside a system that had already shaped the outcome.

This is not a marketing problem

Most companies still approach this shift as a distribution problem. They focus on increasing mentions, improving rankings, and showing up more often across systems.

But visibility is no longer the point of control.

The moment that matters is not when a system lists companies. It is when it is asked to choose between them.

At that point, the system is not retrieving information. It is constructing an evaluation. Early theoretical work on agentic purchasing makes this explicit: the system's power comes not just from surfacing options, but from soliciting information, narrowing the decision, and structuring the final set of choices.

It decides which options are relevant, how they should be compared, and which one it can justify recommending.

If you're not in the comparison, you don't just lose - you never existed in the decision. If you are included but poorly framed, you rarely win. And if your strengths aren't legible in the way the system evaluates, you quietly lose.

That is not a marketing failure. It is a failure at the level of representation inside the decision process.

Recommendation is becoming a security layer

Because whoever controls the comparison controls the outcome.

When a system controls which companies are evaluated, how they are interpreted, and how decisions are justified, it becomes more than a recommendation engine.

It becomes infrastructure for commercial decision-making.

And that infrastructure can break in ways most companies are not equipped to see.

A company can be omitted entirely from a comparison without knowing it. It can be misclassified and evaluated against the wrong alternatives. It can be described in a way that weakens its position relative to competitors.

None of this requires explicit manipulation.

It emerges naturally from how these systems retrieve, interpret, and synthesize information. Across multiple recent benchmarks, these systems still struggle with long-horizon decision-making, personalization, and constraint satisfaction.

From the outside, these outcomes look like inconsistency. In reality, they are structural.

The uncomfortable reality

As these systems become more central to commercial decisions, the incentives around them change.

If a model controls which options are considered, how they are framed, and what gets recommended, then influencing that system becomes extremely valuable.

Not just for companies trying to improve how they are represented - but for the platforms themselves.

The line between neutral recommendation, optimization, and commercial influence is not clearly defined. More importantly, it is not visible to the user.

In search, you can see an ad. In a conversational system, influence can be embedded directly in the reasoning process - in what gets included, what gets emphasized, and what gets left out.

The decision still feels objective.

But the underlying incentives are no longer guaranteed to be. That is a different kind of risk.

What actually matters now

The companies that win in this environment are not the ones that show up most often.

They are the ones that are easiest for the system to evaluate, compare, and justify.

That requires a different kind of work. It means making category placement unambiguous, tying strengths to specific decision contexts, and structuring information in a way that aligns with how systems construct comparisons. It means reducing ambiguity, not increasing volume.

Because when a model makes a recommendation, it is effectively building a case. The company that wins is the one whose case is easiest to construct.
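The dynamic above can be made concrete with a deliberately simplified toy model (an illustration of the argument, not how any real recommendation system is implemented): candidates are first narrowed by category, then scored by how many of the buyer's criteria their publicly legible positioning actually addresses. Miscategorization removes you before scoring; vague positioning loses at scoring.

```python
# Toy sketch of "constructing a case" (illustrative assumption, not a
# real system): narrow by category, then score each candidate by how
# many decision criteria its legible strengths cover.
from dataclasses import dataclass, field


@dataclass
class Candidate:
    name: str
    category: str
    # Only claims the system can actually find and parse count here.
    legible_strengths: set = field(default_factory=set)


def recommend(candidates, category, criteria):
    # Step 1: narrowing - wrong category means you never entered the decision.
    in_scope = [c for c in candidates if c.category == category]
    # Step 2: scoring - only legible strengths contribute to the case.
    return max(in_scope, key=lambda c: len(c.legible_strengths & criteria),
               default=None)


vendors = [
    Candidate("Acme", "crm", {"pricing", "integrations"}),
    # Strong product, but its positioning is vague, so little is legible:
    Candidate("Zenith", "crm", {"pricing"}),
    # Miscategorized, so it is filtered out before it can be compared:
    Candidate("Orbit", "analytics", {"pricing", "integrations", "support"}),
]

pick = recommend(vendors, "crm", criteria={"pricing", "integrations", "support"})
print(pick.name)  # prints "Acme"
```

Orbit covers every criterion but never reaches the comparison; Zenith reaches it but can't be justified. The winner is the candidate whose case is easiest to construct, which is exactly the failure mode the section describes.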

What we're actually doing

Second Wind is not built to increase how often companies show up.

It's built to increase how often they get chosen.

That means focusing on how a company is understood inside the decision process itself - how it is categorized, how it is compared, and how easily a system can justify recommending it.

In practice, this looks less like content optimization and more like sales.

If a human buyer doesn't fully understand your positioning, you can clarify it. If an AI system doesn't, you never get considered.

So the work becomes making sure your positioning is legible, your strengths are tied to real decision scenarios, and your case holds up under comparison.

Because under the hood, that's exactly what's happening. The system is constructing a case. We make sure you win it.

Where this is going

Right now, these systems influence decisions.

Soon, they will execute them.

As they become more integrated and more personalized, they move closer to owning the full flow of evaluation, narrowing, and selection. Research on deep shopping agents and long-horizon product research already points in that direction: the hard problem is no longer simple retrieval, but sustained evidence gathering, preference modeling, and decision orchestration.

At that point, the interface isn't a website.

It's one system deciding between others.

And the companies that win won't be the ones that simply exist online. They'll be the ones that are easiest to evaluate, easiest to understand, and easiest to justify choosing.

The bottom line

Most companies are still asking how to show up.

But that's no longer the right question.

The question is what happens when you do. When the system evaluates your category, compares your options, and has to make a recommendation, are you the one it can confidently choose?

Because in this environment, you're not just being discovered.

You're being evaluated.

And increasingly, that evaluation is the decision.