If AI handles the entry-level work, how do beginners learn?
· by Michael Doornbos
Here’s a question that’s been bugging me: what happens to junior tech workers when AI handles all the entry-level tasks?
I don’t mean the juniors working today. They’ll figure it out. I’m talking about the people who haven’t started yet. The ones who would have spent their first year or two doing precisely the kind of work that AI now handles in seconds.
The struggle was the point
This applies across tech, not just development.
When I learned to code, I wrote terrible code. Embarrassingly bad. I spent hours debugging things that turned out to be missing semicolons. Off-by-one errors. Nested loops that made senior devs wince. I copied stuff from forums without understanding it, then burned days figuring out why it didn’t work.
When I learned ops, I broke things. Took down services with bad configs. Learned to read logs because the system wasn’t going to tell me what went wrong in plain English. Woke up to alerts at 3am and had to figure it out with no one to ask.
When I learned security, I spent hours tracing why a firewall rule wasn’t working. Manually combed through logs looking for patterns. Ran Wireshark captures until I could read packet headers in my sleep.
Frustrating? Absolutely. But here’s the thing: that was the learning.
And security is where this gets scary. AI can generate plausible-looking firewall rules, security group configs, IAM policies. They’ll pass a quick review. They might even be mostly right. But “mostly right” in security means “exploitable.” A junior who’s never manually traced an attack path, never watched a penetration test unfold, won’t catch the subtle gaps. And attackers aren’t using AI to write starter code - they’re using it to find the holes left by people who did.
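To make that concrete, here's a made-up example - not from any real incident - of the kind of security group an AI will happily produce. CloudFormation-style YAML, names invented. It looks reasonable and it works, which is exactly the problem:

```yaml
# Hypothetical AI-generated security group. Plausible, functional, exploitable.
WebSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow web traffic to app servers
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0    # fine: public HTTPS
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0    # the gap: SSH open to the entire internet
                             # instead of the bastion subnet. Passes a quick review.
```

A junior who's never watched an SSH brute-force attempt scroll through the auth logs has no reason to stop on that second rule.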
Every outage taught me how systems actually fail. Every security incident I investigated built intuition that no playbook can replace. Every hour staring at logs built the mental models I still use today.
The shortcut that isn’t
AI is really good at the tasks we used to give juniors.
Generate starter code? Done. Write a Terraform config for a standard setup? Fifteen seconds. Triage alerts and suggest root causes? Trivial. Draft security policies? Write incident reports? Explain what a log entry means? All handled.
For experienced people, this is great. We know what we’re asking for. We can evaluate the output. We understand the bigger system it plugs into.
But what if you don’t know what you’re asking for? What if you can’t evaluate the output because you’ve never done that work yourself?
You’re moving fast, but you’re not learning. You’re assembling outputs you don’t understand. When something breaks - and it will break - you’re stuck.
I’ve already seen this play out. Junior ops engineer asks AI for a Kubernetes config. AI spits out something that looks right. Works in dev. Gets deployed. Two weeks later, the cluster’s behaving strangely under load. The config had resource limits that made no sense for the workload, but nobody caught it because nobody understood what they were deploying. The senior who got paged had to explain not just how to fix it, but what every field in the YAML actually meant. That used to be month-one learning.
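For illustration, here's a sketch of what that looked like. The names and numbers are invented, but the pattern is real: limits that are fine in dev and nonsense under production load.

```yaml
# Hypothetical AI-generated Deployment snippet - invented names and numbers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4   # placeholder
          resources:
            requests:
              cpu: 100m       # the scheduler packs pods onto nodes based on this
              memory: 128Mi
            limits:
              cpu: 250m       # hard throttle - invisible in dev, crippling at peak
              memory: 256Mi   # below the app's real working set: OOMKilled under load
```

Every line is valid. Dev traffic never hits the limits, so nothing looks wrong until production traffic does.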
Pilots have this problem too
Aviation figured this out decades ago. Modern planes are so automated that pilots struggle when they need to fly manually. The automation is reliable enough that manual skills atrophy. Then automation fails, and pilots don’t have the skills to recover.
I was flying as a safety pilot with a friend in an SR22 while he practiced instrument approaches. As soon as he turned the autopilot off, we were all over the place. I don’t get scared in the cockpit often, but seeing how far his hand-flying had eroded got me thinking.
The industry’s response? Require manual flying practice. Design automation that keeps pilots engaged rather than passive.
Tech is heading toward the same problem. We just haven’t noticed yet.
What juniors actually learn
The traditional path looks something like this:
Development: Write bad code under supervision. Get reviewed, learn why it’s bad. Fix bugs, understand the codebase. Gradually take on more complex tasks.
Operations: Handle alerts. Investigate outages. Learn how systems fail by watching them fail. Build runbooks from experience. Eventually, design the systems rather than just keep them running.
Security: Monitor logs. Investigate incidents. Learn what normal looks like so you can spot abnormal. Run vulnerability scans, then learn to interpret them. Eventually, design the controls rather than enforce them.
Each step builds on the previous. You can’t respond to incidents well if you’ve never been woken up by one. You can’t architect systems if you’ve never watched small decisions compound into big problems. You can’t assess risk if you’ve never seen a breach unfold.
Skip to AI-assisted everything, and you miss all of it. Five years later, you’re expected to be senior, but you’ve never actually done the work you’re supposed to understand.
It’s not all bad
Look, I’m not saying we should ditch AI tools. That ship sailed. And there’s plenty to like: tedious tasks are faster, documentation lookups are instant, and you can get unstuck quicker.
Maybe I’m wrong about this. Maybe this is how every generation feels about the next. “Kids these days don’t know how to write assembly.” “Kids these days don’t know how to manage memory.” Maybe the new path is just different, not worse. Maybe juniors will learn different skills that matter more - prompt engineering, system integration, AI oversight.
But I don’t think so. The fundamentals haven’t changed. Systems still fail. Attacks still happen. Code still has bugs. When the abstraction leaks - and it always leaks - someone has to understand what’s underneath. If nobody coming up has ever been underneath, we have a problem.
The question is how to use these tools without short-circuiting the learning.
Some ideas
Start without the crutch. First few months in a new domain? Do it manually. Build the mental models. Experience the struggle. AI will still be there later.
Use AI as a teacher, not a doer. Ask it to explain concepts, not do the work. Ask why one approach beats another. Use it to understand what you’re looking at, not to skip past it.
Review AI output like any output. Read it line by line. Understand what it does. If you can’t explain it, don’t use it. This goes for configs, policies, and runbooks just as much as code.
Practice the hard parts. AI handles routine work fine. Spend your time on what it struggles with: system design, debugging complex interactions, incident response under pressure, and threat modeling. Build labs. Break things on purpose. Keep your skills sharp.
Build complete systems from scratch. Not tutorials. Not cloning AI output. Actual projects where you make every decision and hit every wall. Can be small. Ownership has to be total.
The homelab matters more than ever
This is why homelabs matter more now than they ever did. Not because running Proxmox or TrueNAS at home makes you employable. Not for the resume line. Because it’s one of the last places you can break things, fix them, and own the entire process end to end.
Set up a firewall. Configure VLANs. Run your own DNS. Host something publicly and watch the bots find it within hours. Set up monitoring and see what normal traffic actually looks like. Then try to secure it.
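If you want a low-stakes way to start on that last part, here's roughly what I mean - a minimal docker-compose sketch, assuming you have Docker and a box you can port-forward to. It's a starting point for watching real traffic, not a hardened setup:

```yaml
# Minimal public web host for watching real traffic - a sketch, not production.
services:
  web:
    image: nginx:alpine       # official image writes every request to stdout
    ports:
      - "80:80"               # exposed to the internet once you forward the port
    volumes:
      - ./site:/usr/share/nginx/html:ro   # your static files
```

Run it, tail the logs with `docker compose logs -f web`, and wait. Within hours you'll see probes for /wp-login.php, /.env, and admin panels you never installed. That's the baseline of "normal" internet noise no course teaches you.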
Nobody’s going to hand you this experience at work anymore. AI will write your configs. Senior engineers are too busy to walk you through every decision. The homelab is where you build the intuition that no AI can give you - because you have to live with the consequences of your choices.
This isn’t just about individuals
Companies hiring juniors need to think about this, too. Code review matters more now, not less. Mentorship matters more. Shadowing on incidents matters more. “They’ll figure it out with AI” is setting people up to fail in ways that won’t show for years.
We’ve seen what happens when companies eliminate junior roles and only hire seniors. Pipeline dries up. Seniors get expensive. Eventually, no one understands the old systems because no one was trained on them. No one knows why the firewall rules are the way they are. No one remembers what broke last time.
AI could accelerate that problem fast.
The uncomfortable part
Here’s what I keep coming back to: the things that made me competent were almost entirely unpleasant at the time. The frustration of debugging. The tedium of reading logs. The embarrassment of code review. The 3am texts. The slow accumulation of knowledge through repeated failure.
AI promises to eliminate the unpleasant parts. But what if the unpleasant parts are load-bearing?
I don’t have a clean answer. The tools are here, they’re helpful, they’re getting better. The old path - struggle, learn, improve - is getting disrupted, and we don’t know what replaces it yet.
What I do know: junior tech workers today are senior tech workers tomorrow. Break the path between them, and we’ll be dealing with it for decades.
So what do you actually do?
If you’re starting out right now, here’s my one piece of advice: build something real, without AI, and keep it running.
I don’t care what it is. A web app. A home server. A monitoring stack. Something that can break, that you have to fix, that teaches you what happens when things go wrong. Use AI to learn, not to do. Ask it to explain things. Ask it why. But write the configs yourself. Deploy it yourself. Debug it yourself.
The skills that will matter in five years are the ones AI can’t replace: understanding why systems fail, recognizing when something’s wrong before the alerts fire, making judgment calls under pressure. You don’t get those from prompts. You get them from reps.
The struggle is still the point. It just looks a little different now.