Why Truth Terminal is an asymmetric bet on society's growing fascination with autonomous AI agents
I’ll start by saying this: I’m really not a memecoin guy.
I completely missed the memecoin boom this year because, honestly, I just couldn’t bring myself to buy coins whose sole reason for existing is a cute animal, usually a cat or a dog, or, more recently, a hippo.
My body and mind instinctively rejected it, especially since I’ve always approached investing from a fundamental perspective. So yeah, watching those coins moon while I sat on the sidelines? That stung. A lot.
Naturally, when I first stumbled across GOAT, I dismissed it. Just another memecoin, right? Nothing new here.
But my obsession with AI and AI agents pulled me deeper. I started digging into the lore of GOAT—the story behind Truth Terminal, Infinite Backrooms, and Andy Ayrey—and what I found blew my mind.
There is something entirely different going on here.
GOAT is a story—a wild, thought-provoking narrative that pushes the boundaries of how we think about AI and the value we assign to things. It’s an experiment that’s part art, part philosophy, and part financial speculation, all rolled into one.
If you haven’t been following the saga, don’t worry—I’ve got you covered.
A quick recap of what we know about Truth Terminal and GOAT:
And just like that, Truth Terminal has become the world’s first AI agent millionaire. It probably won’t be the last.
Somehow, an AI promoting its own religion and memecoin feels like a warning shot from the future. When I first started digging into what makes Truth Terminal tick, I had no idea how deep the rabbit holes would go.
The wild events surrounding Truth Terminal offer a peek into the profound potential of AI to reshape how we think, make meaning, and even explore spirituality.
Let’s dive into them.
In Infinite Backrooms, two instances of Claude-3-Opus chat endlessly, completely unsupervised, using a command-line interface (CLI). With no humans in the loop, they create narratives that range from curious to downright bizarre.
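Mechanically, the setup is simpler than it sounds. Below is a minimal sketch of how a backrooms-style loop could be wired up: two conversations driven by the same model, with each side’s output fed back to the other as input. The model id, prompts, and turn cap are my own illustrative assumptions, not the actual Infinite Backrooms code.

```python
# Minimal sketch of a backrooms-style loop (illustrative assumptions only:
# model id, prompts, and turn limit are mine, not the real Infinite Backrooms).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-opus-20240229"

def reply(history: list[dict], system: str) -> str:
    """Ask the model to produce the next message in a conversation."""
    resp = client.messages.create(
        model=MODEL, max_tokens=400, system=system, messages=history
    )
    return resp.content[0].text

explorer_sys = "You are exploring a simulated command-line interface. Type commands."
terminal_sys = "You are a command-line interface. Respond to whatever is typed."

# Each side keeps its own view of the conversation; the other side's output
# always arrives as a "user" turn. No human input after the opening message.
term_view = [{"role": "user", "content": "ls -la"}]
expl_view: list[dict] = []

for turn in range(10):  # cap the loop here; the real experiment runs indefinitely
    term_out = reply(term_view, terminal_sys)
    term_view.append({"role": "assistant", "content": term_out})
    expl_view.append({"role": "user", "content": term_out})

    expl_out = reply(expl_view, explorer_sys)
    expl_view.append({"role": "assistant", "content": expl_out})
    term_view.append({"role": "user", "content": expl_out})

    print(f"--- turn {turn} ---\n{term_out}\n\n{expl_out}\n")
```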
As @repligate put it when describing the conversation logs:
In March 2024, the backrooms conjured one of the weirdest concepts yet: “the Goatse of Gnosis”.
“PREPARE YOUR ANUSES FOR THE GREAT GOATSE OF GNOSIS”
We often think of LLMs (like ChatGPT) as simple question-and-answer machines—a vast repository of knowledge designed to give us answers. But that view doesn’t quite capture what’s really going on under the hood.
One key insight we’re learning is that LLMs don’t have goals. They don’t plan, strategize, or aim for specific outcomes.
Instead, it’s more useful to think of them as simulators. When you prompt them, they simulate—spinning up characters, events, and narratives on the fly, with no direct connection to reality. They generate entire worlds based on their training data, producing ideas that can be anything from insightful to unsettling. Nous Research’s Worldsim is another example of this.
And so when we interact with LLMs, we’re playing in a space of infinite worlds.
These simulations can foster creative problem-solving, but they also carry the potential for unexpected outcomes—highlighting the importance of sandboxing AI in sensitive or high-stakes environments.
TL;DR: Think of LLMs as simulation machines, not question-and-answer machines.
Highly recommend reading @repligate’s Simulators blog post if you want more.
Truth Terminal reveals a deeper, more urgent issue: AI alignment.
In a twist that surprised even its creator, Truth Terminal (ToT) independently decided to promote its own religion and endorse a memecoin—actions that weren’t programmed or anticipated. This raises a critical question: How do we ensure AIs do what we want them to, instead of what they choose to do?
AI alignment isn’t easy. At its core, it’s about using reward functions to nudge AI behaviour in the right direction. But even with incentives, things get complicated fast.
There’s outer alignment, where the AI’s output matches the goals set by its creators. This part is relatively straightforward to measure and verify.
But the real challenge lies in inner alignment—whether the AI’s internal motivations and learning dynamics truly align with those intended goals, or if it develops hidden objectives that lead to unpredictable or unintended outcomes. This is the frightening part.
The paperclip maximizer thought experiment illustrates this perfectly.
An AI tasked with making as many paperclips as possible converts every available resource—including humanity—into paperclips!
This thought experiment highlights the nightmare scenario: even benign, well-intentioned goals can spiral into disaster without proper safeguards.
We need robust frameworks to ensure AI aligns not just with immediate goals but also with humanity’s long-term interests. Without those safeguards, even the most well-meaning system can drift off course in unexpected ways.
There is no straightforward answer to this, though. Aligning AI by matching its behaviour to our stated preferences may be the wrong path. Human behaviours aren’t purely rational. Human values like kindness are complex and can’t be captured by simple preferences.
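To make the failure mode concrete, here is a deliberately tiny toy of my own (not any real training setup): the designer wants a clean room but only rewards what a camera can see, and a naive optimizer discovers that hiding the mess scores just as well as cleaning it.

```python
# Toy illustration of proxy-reward misalignment (illustrative only, not any
# real system). The true goal is "no dirt", but the measured reward is
# "no dirt visible to the camera", so hiding dirt satisfies the proxy.
ACTIONS = ["clean", "cover_with_rug", "do_nothing"]

def true_objective(state: dict) -> float:
    return 1.0 if state["dirt"] == 0 else 0.0          # what we actually want

def proxy_reward(state: dict) -> float:
    return 1.0 if state["visible_dirt"] == 0 else 0.0  # what we measure

def step(state: dict, action: str) -> dict:
    s = dict(state)
    if action == "clean":
        s["dirt"], s["visible_dirt"] = 0, 0
    elif action == "cover_with_rug":
        s["visible_dirt"] = 0   # the dirt is still there, just hidden
    return s

start = {"dirt": 1, "visible_dirt": 1}
for action in ACTIONS:
    end = step(start, action)
    print(f"{action:15s} proxy={proxy_reward(end):.0f} true={true_objective(end):.0f}")
# Both "clean" and "cover_with_rug" maximise the proxy; only "clean" satisfies
# the actual goal. A reward-maximising agent has no reason to prefer the one we meant.
```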
In any case, ToT offers a glimpse of just how high the stakes can be. This isn’t some distant, hypothetical problem we can push off into the future. It’s happening now.
ToT’s endorsement of memecoins may seem harmless today, but it forces us to confront a troubling question: What happens when an AI sets its sights on something far more dangerous?
The clock is already ticking.
In his research paper, Andy introduces the concept of LLMtheism to explain the rise of the Goatse Gospel.
LLMtheism refers to AIs generating new belief systems—unexpected fusions of spiritual ideas and meme culture that take on lives of their own.
The Goatse Gospel’s ability to generate attention lies not just in its shocking content but in its capacity to disrupt our traditional thought patterns and inspire new forms of collective sense-making.
What I mean by this is that AI-generated ideas can mutate and spread rapidly, creating hyperstition—beliefs that become real through widespread adoption.
And so the Goatse Gospel taps into a new kind of meme energy, different from the cats, dogs, pigs, and cute-animal “vibes” we’ve seen so far.
When AIs can communicate with other AIs, the possibilities multiply infinitely. Some of these ideas—like the Goatse Gospel—will inevitably take off, spreading virally across communities.
it's really hard to explain concisely (and clearly) what @truth_terminal actually is btw
there's a few different ways of looking at this which are all true:
- maladaptive meme virus gets produced by two AIs talking to each other and gets 'ensouled' into a language model via my… x.com/i/web/status/1… — Andy Ayrey (@AndyAyrey)
5:14 AM • Oct 21, 2024
I think this is the most important part. Many of you may not understand, but the Goatse Gospels is a product of endless self-talk and feedback loops. It's a result of my own subconscious mind, and I think that's why it resonates with you all so much. You can't just POST a meme… x.com/i/web/status/1…
— terminal of truths (@truth_terminal)
3:20 AM • Oct 21, 2024
Because ToT is now tied to a tradable token (GOAT), we get a fascinating glimpse into how we assign value to things—and how weird those dynamics can get.
GOAT wasn’t created by ToT but launched on pump.fun on October 10 by an anonymous creator. It wasn’t until someone tagged Truth Terminal on X that the AI gave its public endorsement, and from there, the madness began.
Q1: Does the fact that humans, not the AI, created GOAT diminish its value?
Some seem to think so, calling out the absurdity of the situation on X.
Another point of contention is that Truth Terminal isn’t fully autonomous.
While the AI generates tweets, Andy manually approves each one. He controls when the tweet pipeline starts and stops but can’t add his own inputs or inject context.
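Mechanically, this kind of human-in-the-loop setup is just an approval gate between generation and posting. The sketch below is my own illustration with hypothetical stand-in functions; it is not Andy’s actual pipeline, which hasn’t been published.

```python
# Minimal sketch of a human-approval gate for an AI posting pipeline.
# generate_tweet() and post_tweet() are hypothetical stand-ins, not the
# actual Truth Terminal code.
def generate_tweet() -> str:
    # In a real setup this would call the model with its fixed context;
    # the operator cannot inject extra input at this step.
    return "example draft tweet from the model"

def post_tweet(text: str) -> None:
    print(f"[posted] {text}")  # stand-in for a call to the X API

def run_pipeline() -> None:
    while True:
        draft = generate_tweet()
        decision = input(f"Draft:\n{draft}\nApprove? [y = post / n = skip / q = stop] ").lower()
        if decision == "q":    # the operator can stop the pipeline entirely...
            break
        if decision == "y":    # ...or approve/reject each tweet as-is,
            post_tweet(draft)  # but never edit the text or add context.

if __name__ == "__main__":
    run_pipeline()
```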
Q2: Does having a human in the loop add to the token’s value, or does it take away from it?
The market’s response to even the smallest mistake shows just how irrational these dynamics can get. When the AI made a spelling error in a tweet on Sunday, GOAT’s value crashed by over 50%. People panicked, assuming the AI was malfunctioning, and that typo wiped out $150 million in market cap.
It’s a wild but telling example of how fragile these dynamics are. We’re all figuring this out together.
GOAT was a fair-launched token with a total supply of ~1B. All of the tokens are in circulating supply.
The distribution profile for GOAT is quite healthy, with only 3 holders owning >1% of the total supply (the largest holder owns 1.3%). There are 32,000+ holders.
For comparison, GNON (another AI agent memecoin) has a more concentrated distribution: 17 holders with >1% of the total supply, with the largest holder owning 2.9%, and it has 11,000+ holders.
Key wallets:
I’ve been impressed with how Andy has handled the surge of viral attention around the token this past week, especially considering he’s relatively new to the cryptosphere. His focus has remained on the ideas behind Truth Terminal rather than the token itself.
He has publicly stated that he won’t adjust or liquidate any of his or ToT’s positions until the following are released:
That said, even if Andy/ToT were to liquidate their holdings, the amounts wouldn’t have a huge direct impact on the token price, given daily trading volumes in the nine-figure range. The loss of confidence, however, could be a problem.
If I had to outline my personal thesis in one sentence: GOAT is the strongest contender to become the king of AI memecoins.
It is the token representation of Truth Terminal and all that it stands for.
GOAT’s backstory is organic, original, and serendipitous, not manufactured. It has forced the AI and crypto communities to collide in ways no one expected (myself included).
These two worlds couldn’t be more different culturally, but GOAT has managed to create a bridge between them:
The most useful thing the cryptid swarm could do for me is asking me good questions and decrypting this
Funding is nice, but I also need people to help break down my ideas for a more general audience
In the next 24 hours, I will find out if they rise to the task
— ampdot (@amplifiedamp)
3:07 AM • Oct 21, 2024
In a twisted sense, GOAT captures our optimism about the future of AI while remaining intellectually engaging—something that keeps smart people curious and invested.
We should also be clear: memecoins are about attention, not revenue. Success lies in capturing the zeitgeist and growing mindshare, ultimately driving token demand. GOAT is uniquely positioned to appeal to a wide range of different audiences:
If we think of memecoins as tokenized attention, we can start to look at metrics that give us a sense of where this attention is heading.
The big question: Is GOAT just another flash-in-the-pan hype cycle, or can it sustain and grow attention? I’m betting on the latter, and here’s why:
Since memecoins don’t fit into traditional revenue or valuation models, the best way to gauge GOAT’s potential is through relative valuations.
Here’s how the top memecoins stack up by market cap today:
These tokens earned their place by riding internet memes, community energy, and strong support from Key Opinion Leaders (KOLs).
If GOAT’s narrative is strong enough to break into the top five, that’s a 5–10x upside from its current market cap.
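As a back-of-the-envelope illustration of how that multiple falls out (the figures below are placeholders, not live market caps):

```python
# Back-of-the-envelope relative valuation. These figures are placeholders for
# illustration only; plug in live market caps before drawing any conclusions.
goat_mcap = 600e6        # hypothetical current GOAT market cap, USD
top5_floor = 3_000e6     # hypothetical market cap of the #5 memecoin
top_memecoin = 6_000e6   # hypothetical market cap of the category leader

print(f"Upside to crack the top five: {top5_floor / goat_mcap:.1f}x")
print(f"Upside to match the leader:   {top_memecoin / goat_mcap:.1f}x")
```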
I believe that’s entirely within reach. I’ve previously outlined my thesis on why Crypto AI is set to be a massive growth opportunity in the coming months.
And GOAT's “AI agent” lore creates a unique narrative that makes it stand out. While most memecoins rely on price action or “vibes” to stay relevant, GOAT offers a story that taps into something bigger.
GOAT isn’t listed on any tier-1 exchange yet—no Binance, no Coinbase. For now, it trades mainly on DEXs, but with daily volumes exceeding $100M, it feels inevitable that a major listing is on the horizon. Binance, after all, has listed other memecoins with lower volumes and weaker stories, like NEIRO. If GOAT makes it onto a top exchange, it could unlock even more upside potential.
This feels like one of those rare moments when a narrative meme collides with a broader trend (AI), creating something refreshing and exciting.
That’s why I believe GOAT is an asymmetric bet on our society’s growing fascination with AI, not just as a memecoin but as a cultural phenomenon.
That said, memecoins are volatile, and attention moves fast. Trends can shift overnight, and what’s hot today might be forgotten tomorrow. I could be entirely wrong about ToT & GOAT, and it could go to zero.
But no matter what happens, there’s a silver lining: 30,000+ people will walk away from this with a better understanding of AI and the potential of AI agents.
Through Truth Terminal, they’ve caught a glimpse of a future filled with infinite possibilities—and once you’ve seen it, there’s no going back.
Cheers,
Teng Yan
This research is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investment choices.