25. What Breaks When You Put an LLM Inside a Control Loop

Failure Patterns Explained by Structure

tags: [LLM, AI, Design, Mermaid]


🧭 Introduction

In the previous article, we clarified that an LLM is essentially
a “context → probability → generation” transformer.

The next natural question is this:

What happens if you put that LLM inside a control or decision loop?

The answer, bluntly stated, is:

It will break — with high probability.

In this article, we use structure (Mermaid diagrams) to visualize why that happens.


🎯 A Common but Wrong Architecture

Let’s start with a very common setup.

```mermaid
flowchart TD
    S[Sensor / Input] --> L[LLM]
    L --> C[Control / Decision]
    C --> A[Actuator / Output]
    A --> S
```

At a glance, it looks like the system senses, decides, acts, and feeds back in a closed loop.

But structurally, this design is fundamentally broken.
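To make the structure concrete, here is a minimal sketch of that loop in Python. The `llm()` function is a hypothetical stand-in for a model call (a seeded random choice mimics sampling from a distribution); `sensor()` and `actuator()` are placeholders, not a real API.

```python
import random

def llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call: stateless and probabilistic.
    The output depends ONLY on the current prompt, nothing else."""
    rng = random.Random(prompt)  # seeding by prompt makes the stub deterministic
    return rng.choice(["increase", "decrease", "hold"])

def sensor() -> float:
    return 20.0  # placeholder reading

actuator_log = []

def actuator(command: str) -> None:
    actuator_log.append(command)

# The broken loop: Sensor -> LLM -> Control -> Actuator -> (back to) Sensor.
# No state is carried between iterations, nothing checks correctness,
# and the model never sees how much time has passed.
for _ in range(3):
    reading = sensor()
    command = llm(f"Sensor reads {reading}. What should the actuator do?")
    actuator(command)
```

Because the stub (like the real model) is a pure function of its prompt, every iteration with the same reading produces the same command: the loop cannot adapt.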


❌ Where It Fails (Immediate Fatal Points)

① No State Retention

```mermaid
flowchart TD
    X[Current Input] --> L[LLM]
    L --> Y[Output]
```

Each time it is called, the LLM starts fresh: it determines its output using only the current input.

The essential elements required for control (state, history, and continuity) simply do not exist.
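The difference is easiest to see side by side. Below, a hypothetical stateless LLM stub is contrasted with a minimal controller that keeps an integral term; all names are illustrative.

```python
# Hypothetical stateless LLM stub: output is a pure function of the input.
def llm(prompt: str) -> str:
    return "heat" if "cold" in prompt else "hold"

# History has no effect: the second call ignores everything before it.
a = llm("room is cold")
b = llm("room is cold")  # identical input -> identical output, always

# A real controller carries state between steps. Minimal integral term:
class IntegralController:
    def __init__(self, gain: float) -> None:
        self.gain = gain
        self.accumulated_error = 0.0  # the state the LLM loop never has

    def step(self, error: float) -> float:
        self.accumulated_error += error
        return self.gain * self.accumulated_error

ctrl = IntegralController(gain=0.5)
u1 = ctrl.step(1.0)  # 0.5
u2 = ctrl.step(1.0)  # 1.0 -- same input, different output: state matters
```

The controller's second output differs from its first even though the input is identical; the stateless stub can never do that.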


② No Ability to Evaluate Correctness

```mermaid
flowchart TD
    I[Input] --> L[LLM]
    L --> O[Output]
    O --> Q{Is this correct?}
```

That question cannot be answered by the LLM itself.

In short:

Even when it is wrong, it cannot notice that it is wrong.
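The practical consequence: any correctness check must live outside the model. A hedged sketch, where `llm_propose()` is a hypothetical stub and the bounds check is the actual point:

```python
def llm_propose(reading: float) -> float:
    # Pretend the model suggests a new setpoint; here it just adds 10.
    return reading + 10.0

def validate(setpoint: float, low: float = 15.0, high: float = 30.0) -> bool:
    """Hard-coded rule that the LLM cannot apply to its own output."""
    return low <= setpoint <= high

proposal = llm_propose(25.0)  # 35.0 -- out of range
safe = validate(proposal)     # False: caught by the external rule
setpoint = proposal if safe else 22.0  # fall back to a known-safe value
```

Note that the validator knows nothing about the model; it only constrains what the model's output is allowed to do.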


③ No Concept of Time

```mermaid
flowchart TD
    T1[Time t] --> L1[LLM]
    T2[Time t+1] --> L2[LLM]
```

For an LLM, the input at time t and the input at time t+1 are simply two independent inputs.

It cannot internally represent rates of change, delays, convergence, or any other time-dependent control concept.
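Even something as basic as a rate of change requires two time-stamped samples, while each LLM call sees one input in isolation, so the derivative must be computed outside the model. A sketch with illustrative names:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float      # timestamp in seconds
    value: float  # sensor reading at that instant

def rate_of_change(prev: Sample, curr: Sample) -> float:
    """Finite-difference slope: needs memory of the previous sample,
    which a stateless per-call model does not have."""
    return (curr.value - prev.value) / (curr.t - prev.t)

s0 = Sample(t=0.0, value=20.0)
s1 = Sample(t=2.0, value=24.0)
slope = rate_of_change(s0, s1)  # 2.0 units per second
```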


🧨 The “Looks Like It Works” Trap

What makes this dangerous is that
it often appears to work at first.

But once conditions drift outside familiar patterns, or the loop runs long enough, the failure pattern emerges:

• Decisions oscillate
• Outputs become unstable
• Explanations stay confident while results collapse

The most dangerous aspect is this:

It fails confidently.


🧱 A Minimal Non-Breaking Placement

So how should an LLM be placed?

```mermaid
flowchart TD
    S[Input / Logs] --> L[LLM]
    L --> P[Suggestions / Explanations]

    P --> H[Human or Rules]
    H --> C[Control / Decision]
    C --> A[Output]
```

Key principles:

The LLM:
• does not decide
• does not act

It only suggests and explains; every decision passes through a human or explicit rules.
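In code, this placement looks like the sketch below: the model drafts a suggestion, and a deterministic rule layer makes the actual decision. `suggest()` is a hypothetical stub for a model call; the whitelist and gate are the structural point.

```python
def suggest(logs: list[str]) -> str:
    # Hypothetical model call: turns raw logs into a human-readable proposal.
    return "Consider raising the threshold; three timeouts in a row."

ALLOWED_ACTIONS = {"raise_threshold", "keep"}

def rule_gate(suggestion: str) -> str:
    """Deterministic decision layer. The LLM's text can inform it but
    never bypass it; only whitelisted actions reach the output."""
    action = ("raise_threshold"
              if "raising the threshold" in suggestion
              else "keep")
    assert action in ALLOWED_ACTIONS
    return action

decision = rule_gate(suggest(["timeout", "timeout", "timeout"]))
```

The model's text never touches the actuator directly; whatever it writes, only an action from `ALLOWED_ACTIONS` can come out of the gate.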


🛠 Where LLMs Are Actually Safe and Effective

LLMs are strongest at explaining, organizing, and drafting.

In other words:

Only before the decision stage.

Keeping LLMs in this role
dramatically reduces failure risk.


✅ Summary

Using an LLM for:
• Control ❌
• Judgment ❌
• Final decisions ❌

Using an LLM for:
• Explanation ⭕
• Organization ⭕
• Drafting ⭕

LLMs look intelligent, but
they only perform intelligence-like behavior.

LLMs are not made dangerous by their capability.
They become dangerous when placed incorrectly.


📌 Next Preview

Next, building on this foundation, the placement that does not break will be explained again with a single Mermaid diagram.

If the series continues,
article 26 will be about “structures that do not break.”