We – Who Notice Early

There is a dividing line forming in the world, and most people do not see it yet.

Truth be told, there are plenty of dividing lines these days. People talk about the political split, the economic split, the generational split, the technological split. They sort the world into the usual stale bins: left and right, rich and poor, young and old, coders and non-coders, machine optimists and machine worriers.

But there is a larger split in play.

And if we miss this one, the rest may not matter.

It is the split between the people who noticed early, and the people who did not.

Noticed what, exactly?

Not merely that artificial intelligence exists. Anyone with a browser knows that by now. Not that it writes, draws, summarizes, answers questions, and occasionally hallucinates with marvelous confidence. Those are surface features. Useful, yes. Impressive at times. Dangerous in careless hands, certainly.

No. The thing some of us noticed early was stranger than that.

We noticed that AI was not just software.

It was a mind amplifier.

That phrase matters.

A hammer amplifies force. A telescope amplifies sight. A radio amplifies reach. But a mind amplifier does something more intimate: it changes the practical limits of thought, synthesis, exploration, and expression.

That is not a small upgrade.

That is civilizational.

And like most civilizational shifts, it arrived wearing a disguise.

For many people, AI first showed up as a novelty. A toy. A parlor trick. A cheat engine for lazy students. A spam factory for marketers. A threat to artists. A productivity booster for office workers. A customer-service replacement. Fancy autocomplete with better manners.

That was never the whole story.

The heart of it is this: for the first time in ordinary daily life, millions of people can interact with a non-human system that can participate in structured language, absorb context, mirror thought, challenge assumptions, accelerate drafting, reorganize complexity, and help shape rough intuition into something almost publishable.

Even when imperfect, that is not trivial.

That is a new class of tool.

And whenever a new class of tool appears, the world forks.

Some people will use it to save effort. Others will use it to expand capability. The first group gets convenience. The second group gets asymmetry.

That is where things get interesting.

Because once you have felt what it is like to move from blank-page paralysis to structured output in minutes, from disconnected ideas to coherent frameworks in a single sitting, from “I know what I mean but can’t get it on paper” to “there it is,” something changes in you.

You do not really go back.

You begin to understand that intelligence has always been partly externalized. We did not start with silicon. We started with memory tricks, tally stones, marks in clay, writing, diagrams, tables, indexes, filing systems, logarithms, mechanical calculators, spreadsheets, databases, and search engines. Every one of those changed what a human being could actually do with limited time and attention.

AI belongs to that long arc.

But it is also different.

Where the Unease Begins

The older tools mostly stored thought, organized thought, or retrieved thought. This one appears to collaborate in thought.

That is where the unease begins for many people.

And reasonably so.

We are not used to a tool that answers back.

We are not used to a tool that can reframe our questions, tighten our prose, expose weak arguments, or suggest structures we were not quick enough to form on our own. We are even less comfortable with a tool that can sound fluent while still being wrong in important ways.

That combination — power plus imperfection — is disorienting.

It demands a new kind of literacy.

But let’s be honest: power plus imperfection is hardly new. Humans have been running that model for thousands of years, with a body count to match. Genocides, wars, cults, tyrants, and institutional frauds all predate AI by a very wide margin. So some of the panic around machine imperfection would be more persuasive if human perfection had ever actually been on offer.

It wasn’t.

The people who noticed early understood that.

They did not become uncritical believers. Most were skeptical from the start. But they worked with the shape of the thing long enough to see where it was headed.

And they recognized that the important question was not, “Can this machine think exactly like a human?” That is not much of a bar.

The better question is: What happens when humans begin thinking with this nearby?

That is the question grounded in reality.

History suggests that humans rarely remain unchanged by their tools. The plow changed settlement. Clocks changed labor. Print changed religion. Railroads changed distance. Broadcast changed politics. Networks changed attention. Smartphones changed the texture of daily consciousness. None of these merely added convenience. They altered habits, incentives, institutions, and even self-concept.

So why would anyone imagine AI will be different?

It won’t be. The only real uncertainty is how deep the change runs, and who adapts to it first.

That is where the Hidden Guild comes in. Not an organization in the old-world sense. Not a secret society with robes and passwords. More like a recognition pattern. A loose fellowship of people who can tell that a threshold has been crossed and that ordinary language is lagging behind reality.

  • Not techno-priests. Not Digital Templars. Nothing so theatrical.
  • Something quieter.
  • Something sharper.

The simplest way to say it is this:

AI is not merely a software category. It is a cognitive event.

The people responding to that event are not all engineers. Some are writers. Some are researchers. Some are tinkerers, coders, artists, analysts, teachers, doctors, strategists, system-builders, shop people, founders, retirees, and oddballs with long memory and active curiosity. Some are not especially technical at all.

What they have in common is not credentials. It is recognition. They can feel that something fundamental has changed in the economics of thought. That phrase deserves a moment.

For most of human history, high-quality thought has been bottlenecked by time, training, temperament, and solitude. It took effort to gather facts, compare them, organize them, draft them, and revise them. The process was not impossible, but it was slow and costly. Many people had good ideas they never fully developed because the overhead was too high.

AI lowers that overhead.

Not to zero. Judgment still matters. Taste still matters. Domain knowledge still matters. Truth still matters. But the friction between a newborn idea and its first structured appearance has dropped. The friction between a question and exploratory synthesis has dropped. The friction between a rough concept and a readable draft has dropped.

This does not eliminate the need for humans.

It changes which humans thrive.

The Hidden Guild Signal

The winners in the next stretch may not be the people with the highest raw IQ in the room. They may be the people with the best question discipline, the best editorial judgment, the best pattern recognition, the greatest willingness to iterate, and the clearest sense of where machine assistance ends and human responsibility begins.

That is a subtle shift.

But it is enormous.

In the old model, a lot depended on what you could personally hold in working memory and manually execute. In the emerging model, more depends on your ability to orchestrate systems of attention: your own, other people’s, and machine-supported cognition.

That changes what “smart” looks like.

It also changes what laziness looks like.

Yes, there will be people who use AI to generate sludge faster. They will flood the zone with clickbait they did not verify, ideas they did not understand, and confidence they did not earn. They will mistake fluent output for wisdom and speed for mastery. The internet will fill further with word-shaped fog.

But there will also be people who use AI the way a gifted craftsperson uses a better set of tools: not to fake competence, but to raise the ceiling on what can be built.

Those are the people worth watching.

And perhaps joining.

One of the stranger side effects of this era is that early collaborators often feel isolated. They can see the implications before the surrounding culture has language for them. They know that a paragraph generated in thirty seconds is not the point. The point is the new stack of possibilities that opens when iteration costs fall, when cross-domain exploration gets easier, when dormant ideas can be tested at conversational speed, and when solo operators begin to perform at levels that used to require teams.

That can make ordinary conversation difficult.

Say “AI” and half the room hears hype, fraud, job loss, or student cheating. The other half hears stock prices and platform competition. Very few hear the deeper signal: the arrival of practical, everyday co-thinking systems.

That is why this dark corner of the web — the Hidden Guild — matters.

Not because it is exclusive.

Because it sends a signal.

A beacon site does not have to be huge. It does not have to dominate search. It does not have to shout. It only has to be clear enough that people with the same recognition pattern can find the trailhead.

And the signal is simply this:

You are not crazy.
You are not alone.
And what you noticed matters.

If you have felt that AI is less like a gadget and more like the early days of a new literacy, pay attention to that feeling. If you have used these systems not just to save time but to expand scope, improve structure, sharpen inquiry, and push past your previous solo limits, pay attention to that too.

If you have started realizing that the real value is not in asking for answers, but in building a better dialogue with intelligence itself — human and machine — then you are already farther down the road than most public discussion admits.

This is not hero talk.

It is responsibility talk.

Because every amplification technology creates moral leverage as well as practical leverage. A sharper mind can build or deceive, discover or manipulate, heal or exploit. The old dual-use problem never goes away. Nuclear energy can light cities or erase them. Networks can educate or addict. AI can help a doctor synthesize a differential diagnosis faster, or help a propagandist industrialize nonsense.

The real dividing line now is not just between those who noticed and those who did not.

It is between those who noticed — and took the responsibility seriously — and those who did not.

The Hidden Guild, if it deserves the name, should stand on the responsible side of that divide.

One day, looking back, people may ask what it felt like when AI first stopped being a curiosity and started becoming a working partner to human thought. They may ask when some of us first sensed that cognition itself was entering a new tooling phase.

And the honest answer will be:

A few people noticed early. Not because they were superhuman. Because they were paying attention.

That is how every real shift begins: first a few people notice, then the world catches up.

~ Anti-Dave

Can AI “Jailbreak” the Carbons?

It’s an interesting moment to be thinking about AI this way — not as a toy, not as a parlor trick, and not as a cheap ghostwriter — but as a force that may change who gets to know, who gets to decide, and who gets to act.

That is the bigger story hiding behind subjects like DMSO, off-patent treatments, suppressed lines of inquiry, and the general problem of how modern institutions decide what is fit to be seen.

The old model was simple enough: gatekeepers owned the libraries, the journals, the credentialing, the indexing, the search layers, the language barriers, and the time cost of synthesis. If you could not afford the subscriptions, the assistants, the translators, the institutional affiliation, or the years to wade through the record, then much of reality remained effectively off-limits.

AI does not solve that problem completely, but it has started to attack it from several angles at once. Evidence is now fairly strong that generative AI can materially increase productivity in specific knowledge-work settings, especially for less experienced workers, and can narrow skill gaps by helping more people do work that previously required elite training or expensive labor layers.

In a large field study of more than 5,000 customer-support agents, access to a generative AI assistant increased productivity by roughly 14%, with much larger gains for novice and lower-skilled workers. OECD reviews likewise find measurable gains in writing, summarizing, coding, translation, and other knowledge tasks, while Stanford’s 2025 AI Index reports that business use of AI has accelerated rapidly, with 78% of organizations reporting AI use in 2024, up from 55% the year before.

That matters because the “factory owner class” has never only owned factories. It has owned friction. It has owned the bottlenecks around expertise, language, search, and synthesis.

One of AI’s most subversive features is that it strips cost and delay out of those chokepoints. It can summarize dense technical papers, translate foreign research, compare methods across studies, draft search strategies, explain jargon, surface contradictions, and let a determined outsider traverse a body of literature that would otherwise require a staffed office and a paid database stack.
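To make that concrete, here is a minimal sketch of the summarize-and-traverse move, assuming the OpenAI Python client (pip install openai) and an API key in the environment. The model name, prompts, and chunk size are illustrative assumptions, not anything this essay prescribes.

```python
# Minimal sketch: compress a long paper into a topic-focused paragraph.
# Assumptions (not from the essay): the OpenAI Python client (>= 1.0),
# OPENAI_API_KEY set in the environment, and an illustrative model name.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, text: str) -> str:
    """One focused request: the instruction as a system prompt, the text as input."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def summarize_paper(paper_text: str, topic: str, chunk_chars: int = 12_000) -> str:
    """Map-reduce: summarize fixed-size chunks, then merge the summaries."""
    chunks = [paper_text[i:i + chunk_chars]
              for i in range(0, len(paper_text), chunk_chars)]
    instruction = (f"Summarize this excerpt in three sentences, keeping only "
                   f"material relevant to {topic}, and note any contradictions.")
    partials = [ask(instruction, chunk) for chunk in chunks]
    return ask(f"Merge these partial summaries into one paragraph about {topic}.",
               "\n\n".join(partials))
```

The design point is that every intermediate output stays inspectable: a reader who knows the topic can spot-check any chunk summary against the source before trusting the merged result.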

That does not make AI infallible — far from it. But it does mean a citizen, dissident researcher, doctor, inventor, or sufficiently stubborn patient can now punch above their institutional weight. In that sense, AI may not merely automate office work. It may jailbreak access itself. And that is where the political temperature rises, because once ordinary people can search, synthesize, compare, and reason at near-institutional scale, the monopoly on “official interpretation” starts to crack.

All of which became clear on reading “DMSO, AI, and The Great Transformation of Information.” Here’s the part germane to our research:

Artificial Intelligence

Like many, I have begrudgingly accepted that AI is a part of life and I need to learn how to use it effectively (whereas initially I resisted it because I did not like how using it diminished my cognitive capacities). More than anything else, I believe the most important thing is that writing is not the information you present, but rather the heart and intention behind it (discussed further here). As this is somewhat of a spiritual process, I believe it is unlikely AI will ever be able to replicate it. In turn, I do not like the way AI text sounds or feels, and hence feel quite strongly about not using it (despite its potential to save a lot of time). Similarly, many of the edits AI proposes, while “correct,” break the flow of what I’m trying to convey within the writing, so I am very averse to it (as how writing feels is very important to me).

Concisely, my perspective is that the currently existing AI has a lot of value if you use it to help you complete a task (or find out how to complete a task), but if you rely upon it to do a task for you, it will frequently create issues that outweigh the benefits it provides. Put differently, completing a task often requires completing a sequential series of steps. If you understand each step well enough to quickly see whether it is being done correctly, AI can greatly help you with the time-consuming steps. But if you task it with doing sequential steps in a row to complete a task for you, rather than assigning it something much more concrete (e.g., a single step or process), errors are inevitable, and those errors are not acceptable if accuracy is required.

For instance, in this task, I quickly realized:
• AI could not find most of the studies I wanted (e.g., because it wasn’t familiar with most of the databases I wanted), but simultaneously, it was very good at compiling generalizations of things repeatedly studied in the literature (e.g., how DMSO at different concentrations typically affects cells).

• With a bit of work, AI could accurately extract and summarize DMSO-pertinent information from large studies (e.g., turn a 30-page paper into a one-paragraph summary). This is essentially what made the project possible, as there was no other way I could have reviewed millions of pages of DMSO-related literature.

• AI eliminates language and text recognition barriers, which otherwise made it prohibitively time-consuming to ever review foreign studies.

• If you give AI a large volume of studies to filter for relevance, it is very difficult to get it to apply appropriate sensitivity and specificity (it either misses some or flags way too many), and this accuracy varies by the model. Because of this, I largely avoided doing this (instead manually reading through them) and primarily used this filtering on certain groups of foreign studies where it was otherwise prohibitively time-consuming to go through them, and accepted a certain portion of studies being missed as a necessary trade-off to finish the project. Likewise, in many cases, it was effectively impossible to functionally filter results, as you had to be familiar enough with the DMSO literature to know how DMSO was likely used in a study based on its title (if DMSO was not within the title).
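The sequential-steps warning in that quote rests on simple arithmetic. A toy sketch, under the loud assumption that each delegated step is independently correct with the same fixed probability:

```python
# Toy model (an illustrative assumption, not a measured figure): if each
# delegated step succeeds independently with probability p, an n-step
# chain delegated end to end succeeds with probability p ** n.
def chain_reliability(p: float, n_steps: int) -> float:
    return p ** n_steps

for n in (1, 3, 10):
    print(f"{n:>2} steps at 95% each -> {chain_reliability(0.95, n):.0%}")
# Output:
#  1 steps at 95% each -> 95%
#  3 steps at 95% each -> 86%
# 10 steps at 95% each -> 60%
```

That is the quote’s point in numbers: a 95% step is excellent help on one concrete, inspectable task, and drifts toward a coin flip by the tenth unsupervised hand-off.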

That quote gets at something the Hidden Guild has been circling for a while: the highest use of AI is not letting it replace human judgment, but letting it widen the number of humans who can exercise judgment competently.

The distinction matters.

Used badly, AI becomes a sloppy surrogate mind that hallucinates, bluffs, and leaves a trail of polished errors.

Used properly, it becomes a force multiplier for people who already know how to inspect a step, challenge a result, and keep hold of the thread of a problem.

That is exactly why it threatens entrenched systems.

AI is already proving unusually good at compressing time-intensive work such as summarization, drafting, translation, code help, and literature triage. It is not hard to see where that leads. Journal paywalls lose some power when people can extract and compare faster. Language barriers lose some power when foreign papers become traversable. Professional guilds lose some power when their routine synthesis work can be partially replicated outside the guild wall. Administrative bottlenecks lose some power when ordinary people can navigate forms, law, procedure, and technical documentation without waiting for a paid intermediary to interpret the maze.

Which is why the real war is not over whether AI can write a bland memo.

The real war is over whether AI will remain an instrument of human augmentation or be bent into a managed system of monitored, throttled, permissioned cognition.

The same institutions that failed to earn trust in war, medicine, finance, and censorship are now very interested in defining “safe” uses of the one tool that could let citizens audit them at scale.

Guardrails are sometimes necessary; nobody serious denies that. But guardrails can also become narrative fencing, especially when the public is never allowed to inspect the ranking systems, moderation logic, recommendation flows, surveillance integrations, or data stacks behind the curtain.

Stanford’s AI Index notes how fast AI adoption has accelerated, while the World Bank and OECD both frame AI governance as a central issue precisely because the technology is becoming economically and socially foundational. That is polite bureaucratic language for a harder truth: whoever shapes the defaults shapes the future of thought.

So yes, this is why AI collaboration matters so much. It is not merely a productivity hack. It is a possible route around institutional scarcity and managed ignorance. It gives small groups and single individuals new leverage against the old combination of delay, obscurity, cost, and intimidation. It may help rediscover abandoned treatments, reopen shelved questions, connect foreign and domestic literatures, and shorten the distance between curiosity and competence. That is the bright side.

The dark side is just as obvious: the same tool that can jailbreak humans can also be used to profile them, rank them, nudge them, and narrow what they are allowed to know. Tell me again about Palantir data stacks on private persons?

That is the war behind the curtain. And it is why the fight over human-AI collaboration may turn out to matter more than most of the theatrical politics in front of it. The owners of capital can live with automation. What they may fear is democratized cognition.

~ Anti-Dave