LLM Text Generation Simulator

See a simplified step-by-step process of how AI generates text.

How It Works (A Simple Analogy)

A real Large Language Model (LLM) is incredibly complex, but this simulation uses a simple rule-based approach to show the core idea: predicting the next word.

  1. Input as Tokens: Your prompt is split into words, which we can think of as "tokens" (real LLMs actually split text into subword pieces). The model looks at the last token to decide what to write next.
  2. Finding a Probable Next Word: Based on the current word, the model looks up a list of words that could plausibly follow. In this simple demo, it just picks one at random from a predefined list. A real LLM instead computes a probability for every token in its vocabulary, based on patterns it learned from massive amounts of text data.
  3. Appending and Repeating: The chosen word is added to the sequence. It then becomes the new "current word", and the process repeats, generating text one token at a time.
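The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not the simulator's actual source: the `NEXT_WORDS` table and the `generate` function are hypothetical names, and the word lists are made up for the example.

```python
import random

# Hypothetical lookup table: each word maps to a predefined list of
# plausible next words. A real LLM learns probabilities over a huge
# vocabulary instead of using a hand-written table like this.
NEXT_WORDS = {
    "the":       ["cat", "model"],
    "cat":       ["sat", "ran"],
    "sat":       ["down"],
    "model":     ["predicts", "generates"],
    "predicts":  ["the"],
    "generates": ["text"],
}

def generate(prompt, max_tokens=8, seed=None):
    """Generate text one token at a time, starting from the prompt."""
    rng = random.Random(seed)
    tokens = prompt.lower().split()            # 1. Input as tokens
    for _ in range(max_tokens):
        current = tokens[-1]                   # look at the last token
        choices = NEXT_WORDS.get(current)      # 2. Find probable next words
        if not choices:                        # stop if nothing can follow
            break
        tokens.append(rng.choice(choices))     # 3. Append and repeat
    return " ".join(tokens)

print(generate("The cat", seed=0))
```

Because the choice is random, different seeds give different continuations, but every adjacent pair of words in the output always comes from the lookup table.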

This token-by-token generation is why you sometimes see models "typing out" their answers: each new word can only be predicted after the previous one has been appended to the sequence.
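That "typing" effect can be imitated by turning the loop into a generator that yields one token at a time. Again a hypothetical sketch: the `NEXT` table and `stream` function are invented for illustration.

```python
import random
import time

# Tiny hypothetical lookup table of next-word candidates.
NEXT = {"hello": ["world", "there"], "world": ["again"], "there": ["friend"]}

def stream(prompt, max_tokens=3, seed=0):
    """Yield generated tokens one at a time instead of all at once."""
    rng = random.Random(seed)
    tokens = prompt.lower().split()
    for _ in range(max_tokens):
        choices = NEXT.get(tokens[-1])
        if not choices:
            break
        tokens.append(rng.choice(choices))
        yield tokens[-1]                 # hand each new token to the caller

for tok in stream("hello"):
    print(tok, end=" ", flush=True)      # appears word by word, like a chat UI
    time.sleep(0.2)                      # artificial delay to mimic "thinking"
print()
```

A chat interface does essentially this: it renders each token as soon as the model emits it, rather than waiting for the full answer.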