Why career pivoters are the teachers this field actually needs
I sent two AI agents out to evaluate whether a conference program committee would accept me as a speaker. Both came back with the same verdict: decline. The reason: "This looks like resume-pivoting to catch the AI wave."
They were right. I am pivoting. That is the whole point.
Every AI engineering conference I can find is built for developers who want to become good LLM developers. The tracks assume you already know what a try/catch block is. You already know why you would split a monolith. You already know what "architectural taste" means because you spent ten years acquiring it.
That is fine. Those people need a conference too.
But the next wave of AI engineers is not coming from CS programs. It is coming from people like me — domain experts in other fields who are smart, who understand systems, who always wanted to build things but could not get over the friction of learning to code the old way. AI has dropped that friction to nearly zero. For the first time, someone who thinks in systems but never wrote a for loop can build real infrastructure.
There is no on-ramp for that person. I looked. It does not exist.
So I am building it. And I am sharing it behind me as I build it in front of me.
In 1990, a Stanford PhD student named Elizabeth Newton ran an experiment. She asked people to tap out the rhythm of a well-known song on a table. Then she asked them to predict how often listeners would recognise the song. Tappers predicted 50%. The actual rate was 2.5%.
The tappers could hear the melody in their heads. They could not turn it off. They literally could not imagine what it sounded like without the melody — just knuckles hitting wood.
That is the curse of knowledge. Once you know something, you lose the ability to reconstruct what it was like not to know it. The confusion becomes inaccessible. You cannot fake it. You cannot "try to remember." The neural pathways that encoded the confusion have been overwritten by competence.
Every senior developer in the room has this. They have earned it. And it makes them the wrong person to build the on-ramp for the people coming next.
Six months ago I was a comms guy in DeFi. I understood tech at 30,000 feet — I translated engineer-speak into human for a living. But I could not ship code. I had tried. Multiple times. Boot camps, tutorials, weekend projects. I would hit a wall, get frustrated, drop it.
Then I started building with Claude Code. Not just prompting it — actually building systems with it, every day. Multi-agent orchestration. A 32-server development ecosystem. A slash command that teaches me senior dev sensibilities in real time. Twenty-one filed patents on AI-accelerated methodologies.
I am not a senior dev. I am becoming one. And I can still see both sides of the gap — the confusion and the clarity. I can still remember what it felt like to not know why a monolith-first approach matters, because I learned it six weeks ago. I can still remember the exact moment the concept of "context impedance matching" clicked, because it was this morning.
That window closes. Once I have the sensibility, I will not be able to reconstruct the confusion. This perspective has an expiration date.
There is a finding in educational psychology called the protégé effect. People who teach material retain it better, understand it more deeply, and find their own gaps faster than people who just study it. The act of explaining forces you to organise what you know — and the gaps become obvious because you cannot explain across them.
I am not teaching despite being early. I am teaching because I am early. This is not altruism. It is the optimal learning strategy. I share what I learn the day I learn it, while the confusion is still fresh and the breakthrough is still vivid.
If I wait until I am good enough to teach, I will have lost the thing that makes the teaching useful.
I created /senior — a Claude Code slash command that I invoke after any AI-generated code I do not fully understand. It reads what just happened and teaches me in two beats:
Beat 1 — The Parts. Name the concept. The moving pieces, how they connect, the vocabulary I need to discuss it with engineers. No code. No syntax. Just the mechanism.
Beat 2 — The Sensibility. The senior dev reframe. The trade-off, the consequence, the non-obvious thing that makes you think: oh, that is why this matters. This is the part that takes ten years to pick up on the job. I am compressing it.
The knowledgebase behind it includes nine mental models (Pareto, Conway's Law, Type 1 vs Type 2 decisions), a full-stack vocabulary map, Anthropic's agent pattern taxonomy, Dex Horthy's 12-Factor Agents, and a vibe coding awareness section that covers the 80/20 cliff — the point where AI-generated code starts costing more than it saves.
It is not a boot camp. It is not a course. It is a mentor that catches me in the moment, during real work, and installs the one thing AI cannot give you: judgment.
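For anyone who wants to build something similar: in Claude Code, a custom slash command is just a markdown prompt file saved under .claude/commands/. The sketch below is a deliberately simplified illustration of the idea, not my actual file — the file path and the knowledgebase reference in it are placeholders, and the real prompt is much longer:

```markdown
<!-- .claude/commands/senior.md — simplified sketch of a teaching command -->
Review the code that was just generated in this session.

Teach me in exactly two beats:

Beat 1 — The Parts. Name the concept. List the moving pieces, how they
connect, and the vocabulary I need to discuss this with engineers.
No code. No syntax. Just the mechanism.

Beat 2 — The Sensibility. Give me the senior dev reframe: the trade-off,
the consequence, the non-obvious thing that makes this matter. One
"oh, that is why" insight, not a lecture.

Where one applies, ground the answer in a named mental model
(Pareto, Conway's Law, Type 1 vs Type 2 decisions).
```

Saving that file makes /senior available in the session; the file's contents become the prompt each time it runs.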
The conference world has plenty of talks by senior engineers explaining their craft to other senior engineers. What it does not have is someone standing in the middle of the bridge, looking both directions, describing what the crossing actually looks like.
I want to give that talk. Here is what it covers:
The /senior lens. What a non-dev sees versus what they need to see. The gap, live.

This talk has an expiration date. It can only be given by someone in the middle of the transition, not after. I am in the middle right now.
I am Clayton Roche. I build AI systems from the other side of the stack. Before this I ran comms and community for UMA Protocol at Risk Labs, co-founded DeFi Nation, and gave keynotes at ETH Denver and Avalanche Summit. I live in Cebu, Philippines. I am documenting the zero-to-senior-dev journey in public because that is how the journey works.
If you are building education for the next wave of AI engineers — the ones who are not developers yet — I would like to talk.
Email: [email protected]
Twitter: @TokenArchitect