Understanding Semantic Space
"You shall know a word by the company it keeps." — J.R. Firth, 1957
The British linguist John Rupert Firth had an insight that would take half a century to prove: meaning isn't something locked inside a word. It's something that emerges from the words around it. You don't learn what "ocean" means by looking it up in a dictionary once — you learn it from every sentence you've ever read that contained it. Waves, tides, salt, depth, horizon. The company it keeps.
When you teach a machine to read billions of sentences, it arrives at the same conclusion. Not as a theory — as a geometry. The machine discovers that words have positions in a high-dimensional space. Nobody programs this. It emerges from doing exactly what Firth described — tracking which words keep company together, and translating those co-occurrence patterns into vectors.
WordSpace lets you explore that geometry. You're not just playing a guessing game. You're navigating the actual map of meaning that AI has learned from us.
How Words Become Coordinates
The technique behind this is called word embedding. A neural network reads billions of sentences and learns to predict which words appear near which other words. The byproduct: each word gets a vector — a set of 512 floating-point numbers encoding everything the model learned about that word's distributional behavior.
Think of it as GPS coordinates, but for concepts instead of locations.
"telescope" → [0.023, -0.156, 0.891, 0.044, ..., -0.234] (512 numbers)
"microscope" → [0.019, -0.148, 0.877, 0.051, ..., -0.229] (very similar!)
"happiness" → [-0.445, 0.712, -0.033, 0.298, ..., 0.156] (completely different)
Words with similar meanings get similar numbers. This isn't a trick — it's a discovery. The AI found structure in language that linguists like Firth suspected but couldn't quantify.
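Firth's idea can be demonstrated end to end in a few lines. The sketch below uses the classic count-based recipe (a toy co-occurrence matrix compressed with SVD), not the neural network that produces WordSpace's actual vectors, but the principle is the same: words that keep similar company end up with similar vectors.

```python
import numpy as np

# Toy corpus: which words keep company with which.
corpus = [
    "waves crash on the ocean shore",
    "the ocean tide brings salt water",
    "salt water waves at the shore",
    "joy and happiness filled the room",
    "the room filled with laughter and joy",
]

# Count how often each pair of words shares a sentence.
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for v in sent[:i] + sent[i + 1:]:
            counts[index[w], index[v]] += 1

# Compress the raw counts into dense vectors with a truncated SVD.
# (3 dimensions is plenty for a toy corpus; real models use hundreds.)
U, S, _ = np.linalg.svd(counts)
vectors = U[:, :3] * S[:3]

def sim(a, b):
    """Cosine similarity between two words' vectors."""
    va, vb = vectors[index[a]], vectors[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(sim("ocean", "water"))      # words that share company: high
print(sim("ocean", "happiness"))  # words that don't: low
```

Nobody told the code that "ocean" and "water" are related; the relationship falls out of the co-occurrence counts alone.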
The Problem of 512 Dimensions
You can't visualize 512 dimensions. So we project down to 3 — but not with a generic algorithm. WordSpace uses a mystery-aligned projection designed specifically for the game:
- The x-axis maps to how close a word is to the mystery word. Hot words sit near the center, cold words spread outward.
- The other two axes use PCA to preserve semantic clusters, so animals still group together, emotions still cluster, and the space retains meaningful structure.
This means the 3D space encodes game progress and semantic relationships simultaneously. Some distortion is inevitable — two words might appear close in 3D but have only moderate similarity — which is why the similarity score (measured in the full 512-dimensional space) is your ground truth.
For the full technical details on how this projection works, see How We Project Words to 3D.
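In outline, the projection looks something like the sketch below. The exact mapping is described in the linked article; here the first axis is simply one minus the cosine similarity to the mystery word (hot words near zero, cold words spread out), and the other two axes come from PCA, which is an assumption for illustration.

```python
import numpy as np

def project_to_3d(embeddings, mystery_vec):
    """Mystery-aligned 3D projection (illustrative sketch).

    embeddings:  (n_words, d) array of unit-normalized word vectors.
    mystery_vec: (d,) unit vector for the mystery word.
    """
    # Axis 1: distance from the mystery word. Hot words (high cosine
    # similarity) land near 0; cold words spread outward.
    sims = embeddings @ mystery_vec
    x = 1.0 - sims

    # Axes 2-3: PCA over the vocabulary, so semantic clusters survive
    # the projection (animals near animals, emotions near emotions).
    centered = embeddings - embeddings.mean(axis=0)
    _, _, components = np.linalg.svd(centered, full_matrices=False)
    yz = centered @ components[:2].T

    return np.column_stack([x, yz])
```

Note the asymmetry: the first axis depends on this puzzle's mystery word, so the map is recomputed per game, while the PCA axes depend only on the vocabulary.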
Similarity: The Fourth Dimension
The percentage you see isn't calculated from the 3D positions. It's the actual similarity between your guess and the mystery word, measured in the full 512-dimensional space using cosine similarity — which compares the direction two words point in, regardless of magnitude.
Think of two arrows from the same starting point. Cosine similarity measures whether they point the same way. An arrow for "telescope" and an arrow for "microscope" point in nearly the same direction, even though they're different words. An arrow for "telescope" and an arrow for "happiness" point in completely different directions.
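The arrow analogy translates directly into code. A minimal cosine similarity function:

```python
import numpy as np

def cosine_similarity(a, b):
    """Compare the direction of two vectors, ignoring their length."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Arrows pointing the same way score 1, even at different lengths...
print(cosine_similarity([1, 2, 3], [2, 4, 6]))
# ...perpendicular arrows score 0...
print(cosine_similarity([1, 0], [0, 1]))
# ...and opposite arrows score -1.
print(cosine_similarity([1, 2], [-1, -2]))
```

Because only direction matters, the score is bounded between -1 and 1 no matter how the vectors are scaled, which is what makes it a stable "temperature reading" across all 512 dimensions.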
In practice:
- X, Y, Z tell you where a word is (its semantic address)
- Similarity tells you how close it is to the target (your temperature reading)
The colors help:
- Blue means cold — you're far away
- Red means hot — you're getting close
The arrows point toward the mystery word, growing longer as you warm up.
The Related Words: Your Semantic Radar
When you guess a word, we show you nearby words from our vocabulary. Here's the crucial detail:
- Selected by proximity to YOUR GUESS: These are words that keep similar company to what you typed
- Scored by similarity to the MYSTERY WORD: Each one shows how warm that region is
This gives you a radar sweep. If you guess "ocean" and see "sea" (62%), "water" (58%), "wave" (55%), and "beach" (51%), you know this whole family runs from the mid-50s to low-60s. Worth exploring nearby, but probably not the answer.
If you guess "chair" and see "table" (23%), "furniture" (21%), "seat" (19%) — everything's cold. This family isn't it. Move on.
The related words help you think in clusters, not individual guesses. You're mapping the territory.
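The select-by-guess, score-by-mystery split can be sketched as follows. The `radar_sweep` helper and its dict-of-vectors input are hypothetical, for illustration only, not WordSpace's actual implementation.

```python
import numpy as np

def radar_sweep(guess, mystery, vocab_vectors, k=4):
    """Related-words readout (sketch): neighbors are picked around the
    GUESS, but each one is scored against the MYSTERY word.

    vocab_vectors: dict mapping word -> embedding vector.
    """
    words = list(vocab_vectors)
    E = np.array([vocab_vectors[w] for w in words], dtype=float)
    E /= np.linalg.norm(E, axis=1, keepdims=True)
    g = E[words.index(guess)]
    m = E[words.index(mystery)]

    # Select: the k vocabulary words closest to the guess itself.
    to_guess = E @ g
    nearest = [i for i in np.argsort(-to_guess) if words[i] != guess][:k]

    # Score: each neighbor's similarity to the mystery word, revealing
    # how warm that whole semantic region is.
    return [(words[i], float(E[i] @ m)) for i in nearest]
```

One sweep therefore answers two questions at once: "what family did I just land in?" (the selection) and "is that family worth exploring?" (the scores).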
Prior Art
WordSpace builds on a lineage of games and educational tools that have explored the intersection of word embeddings and play.
Semantris (Google, 2018) — The first major semantic word game: an arcade-style word-association game powered by ML. Polished and fun, but built for entertainment rather than exploration of the underlying space.
CMU Word Embedding Demo (Touretzky et al., 2022) — An interactive educational tool designed for K-12 students. Teaches word vectors, analogies, and custom semantic dimensions. Presented at EAAI-22. (paper)
Semantle (David Turner, 2022) — The original semantic Wordle. One dimension: a similarity score. No visualization. You navigate blind.
Pimantle (@pimanrules, 2022) — Adds a 2D visual map to the Semantle concept, with positions computed per puzzle. The first semantic word game to show a spatial representation.
Contexto (Nildo Junior, 2022) — An accessible Semantle variant with a cleaner interface. Still 1D score-based.
What Makes WordSpace Different
- 3D visualization — Pimantle pioneered per-puzzle 2D maps; WordSpace extends this to three dimensions, adding depth and spatial orientation
- Explicit axis semantics — one axis maps directly to similarity with the mystery word, while the other two use PCA to preserve semantic clusters
- Four dimensions — X, Y, Z for spatial orientation, plus the similarity score measuring truth in the full 512D space
- AI as collaborator — ChatGPT sees the same map and reasons about it with you
What This Teaches About AI
WordSpace is a window into how modern AI understands language.
Every chatbot, every search engine, every translation system uses embeddings like these. When you ask ChatGPT a question, it converts your words into coordinates and navigates a space much like this one.
Playing WordSpace gives you intuition for:
Why AI sometimes makes strange mistakes. A word embedding gives "bank" a single vector, so its financial and riverside senses collapse into one point whose neighborhood mixes both families. You can see how a word's multiple family memberships create ambiguity.
Why context matters so much. A word's meaning isn't fixed — it depends on its neighbors. Contextual embeddings (the kind modern chatbots use) capture this, which is why "cold beer" and "cold personality" activate different regions of meaning.
Why AI is both impressive and limited. The geometry of meaning is real, and the machine found it. But the map is compressed, approximate, and sometimes misleading — an antonym can look like a synonym. Understanding this helps you understand AI's capabilities and its failures.
The Shared Map Between You and AI
Here's something unique about WordSpace: you don't play alone. ChatGPT is your advisor. It sees the same 3D widget you do — all your guesses with their positions and percentiles — and reasons about semantic neighborhoods to suggest your next move.
This makes the projection doubly important. It's not just a visualization for you — it's the shared representation between human and AI. Because the coordinates are semantically meaningful (distance from center = game relevance, spread = semantic variety), ChatGPT can reason about the space coherently. It can see that your guesses are clustering in one family and suggest exploring a different one.
The game becomes a collaboration: you bring human intuition about word meaning, and the AI brings its map of the full 512-dimensional space. The 3D visualization is where those two perspectives meet. This is the feature that distinguishes WordSpace from every other semantic word game — not just showing you the space, but giving you an AI partner who can see it too.
Playing as Learning
The best way to understand semantic space is to get lost in it.
Guess wildly at first. See where things land. Notice which words neighbor each other and which sit far apart. Pay attention to surprises — the words that land somewhere unexpected may teach you the most.
Over time, you'll develop intuition. You'll start thinking in families: "I'm in the animal neighborhood, but I need something more domestic." You'll learn to triangulate: "These three guesses are all 99.4% - 99.7% — the answer must be somewhere between them."
This intuition transfers. Next time you use a search engine and it returns strange results, you'll have a sense of why. Next time an AI translation seems off, you'll understand the ambiguity it faced.
WordSpace is a puzzle game. It's also a tutorial in how machines understand meaning.
Welcome to semantic space.
Going Deeper
Want to understand the technical details?
- How We Project Words to 3D — Our mystery-aligned projection
- Why 512 Dimensions? — Embedding dimension research
- Vocabulary Design — How we selected and organized words