You’re reviewing code from a junior developer who completed a complex async task in record time. The implementation is clean, the tests pass, and they shipped it two days ahead of schedule. You’re impressed—until you ask them to walk you through how the error handling works. They pause. They look at the screen. They say something vague about “the AI suggested this pattern” and then… nothing. They can’t actually explain the code they wrote.
I’ve had this conversation many times recently. Each time, I felt that same knot in my stomach—the sense that something is fundamentally off, even though the work product looks fine. The developer seems productive. The feature works. But there’s this growing gap between what they can produce with AI assistance and what they actually understand.
Turns out that uncomfortable feeling has a name, and there’s research that quantifies exactly what’s happening.
When Faster Isn’t Better
A recent study by Judy Hanwen Shen and Alex Tamkin at Anthropic took 52 professional programmers and asked them to learn a new Python library—Trio, which handles asynchronous programming through structured concurrency. Half the group had access to a GPT-4o-powered AI assistant. The other half learned the traditional way: documentation, trial and error, debugging their mistakes.
The findings should make us pause.
The AI-assisted group scored 17% lower on skill assessments—a full two grade points on a standard scale. They showed weaker conceptual understanding of how the library worked, reduced ability to read and comprehend code, and significantly worse debugging skills. Debugging showed the biggest gap, which is striking because encountering and fixing errors is supposed to be where real learning happens.
But here’s the part that sticks out like a sore thumb: there was no productivity gain to offset this learning loss. Zero. The AI group didn’t complete their tasks significantly faster despite having access to an assistant that could generate working code on demand.
This isn’t a trade-off between speed and learning. It’s just loss—skill erosion with nothing to show for it.
The irony lands hard. We’re using AI to help people learn the skills they’ll need to supervise AI. Junior developers who can’t deeply understand or debug code are somehow going to be responsible for reviewing AI-generated code in production systems. Medical residents who train with AI diagnostic tools are supposed to develop the judgment to know when the AI gets it wrong. The very assistance that’s meant to accelerate learning is undermining the foundation it’s building on.
Three Skills, Three Gaps
The research broke down the learning deficit into three areas, and understanding what each one means in practice helps clarify what we’re actually losing.
Conceptual understanding is the difference between following a recipe and knowing why you sauté the onions before adding the garlic—one burns faster, and you need to understand the underlying principle to adapt when the recipe doesn’t match your kitchen. In code, it’s the difference between knowing that Trio uses “nurseries” to manage concurrent tasks and understanding why structured concurrency prevents the resource leaks and race conditions that plague thread-based approaches. When production breaks at 2am and Stack Overflow doesn’t have your exact error message, conceptual understanding is what lets you reason your way to a fix instead of just trying random things.
Code reading is the ability to look at unfamiliar code—code you didn’t write, in a style you wouldn’t have chosen—and figure out what it’s doing. This isn’t an academic skill. It’s the daily work of reviewing pull requests, debugging legacy systems, and collaborating with other teams. If you can only understand code that looks like what an AI would generate, you can’t really function on a team. The code reading gap in the study suggests that developers who lean heavily on AI might be building a dependence on AI-generated patterns while losing the flexibility to work with code that comes from anywhere else.
Debugging showed the starkest difference, and this is where the research gets particularly interesting. The control group—the people learning without AI—encountered three times as many errors: a median of three errors per person versus one for the AI group. You’d think fewer errors would be better, right? They weren’t. Those errors were where the learning happened.
When you hit “RuntimeWarning: coroutine was never awaited,” you have to figure out what a coroutine is, why it needs to be awaited, and what you got wrong in your mental model of how async functions work. That struggle builds understanding that sticks. The AI group bypassed most of those errors—either because the AI generated correct code from the start, or because they pasted error messages to the AI and got fixes without understanding why those fixes worked. They saved time in the moment. They lost the learning that comes from productive struggle.
What bothers me most isn’t the numbers themselves. It’s thinking about the developer who’s going to spend their career feeling like they’re always one step behind, never quite able to work independently, perpetually anxious about what happens when the AI isn’t available or doesn’t have the right context. That’s not the relationship with technology any of us should want.
Six Ways People Use AI—Three That Work, Three That Don’t
The researchers didn’t just measure final outcomes. They watched screen recordings of every participant as they completed the learning tasks. What emerged were six distinct patterns of AI interaction—three that preserved learning despite using AI assistance, and three that destroyed it.
The low-learning patterns averaged 24-39% on the knowledge quiz. The high-learning patterns averaged 65-86%. Same AI tool, same tasks, drastically different outcomes based entirely on how people chose to engage with the technology.
The Patterns That Fail
AI Delegation is the fastest path to completion and the worst path to learning. These developers asked the AI to write entire functions, pasted the results directly, ran the code, and moved on when it worked. They averaged 39% on the quiz—the highest of the low-learning group, but still catastrophically low. They finished fastest because they spent almost no time thinking about the problem, wrestling with the concepts, or debugging errors. The whole learning process got outsourced.
I see this pattern in code reviews. The pull request arrives quickly, the code looks clean, and when you ask questions about design choices you get answers that sound like they came from documentation. Because they did—the developer prompted an AI, got code, and never really engaged with the problem themselves. They’re productive in the narrow sense of producing code, but they’re not learning anything durable.
Progressive AI Reliance is what happens under deadline pressure. These developers started out trying to learn independently, hit frustration or time constraints, and increasingly delegated to AI as the task went on. They averaged 35% on the quiz and learned the early concepts but not the later ones. This pattern is particularly insidious because it feels justified—“I understand the basics, I just need help with this tricky part.” But skill formation doesn’t work that way. The tricky parts are often where the most important learning happens, and outsourcing difficulty means outsourcing growth.
Iterative AI Debugging created the worst outcomes of all—24% average quiz scores and the longest completion times. These developers generated code, then got stuck in loops asking the AI “Does this look right?” or pasting errors repeatedly without trying to understand them first. They interacted heavily with the AI, asked lots of questions, spent lots of time… and learned almost nothing. This pattern is frustrating because it feels like engagement. All that interaction, all those questions, all that apparent effort. But it’s cognitive offloading dressed up as learning. The AI is doing the thinking while the human just executes.
The Patterns That Preserve Learning
Conceptual Inquiry produced the second-highest learning outcomes (68% average quiz score) and the second-fastest completion times. These developers asked only conceptual questions: “What does a nursery do in Trio?” “Why use structured concurrency instead of threads?” “What’s the difference between async def and a regular function?” Then they wrote all their code themselves, encountered errors, and fixed those errors independently. They used AI as a resource for understanding principles, but they maintained full ownership of the problem-solving process.
This is what learning actually looks like—harder in the moment, uncomfortable when you’re stuck, slower than just asking for code. But the understanding that builds is solid. These developers could explain what they’d done and why, handle variations on the task, and debug their own mistakes. The time they “lost” by struggling was time spent developing capability that will compound over their careers.
Generation-Then-Comprehension produced the highest learning scores—86% on average. These developers let AI generate code, but immediately followed up with questions forcing themselves to understand it: “Explain line-by-line what this code does.” “Why did you use a memory channel here instead of returning directly?” “What would happen if I removed the try-except block?” The key distinction is that they didn’t just paste and move on. They treated AI-generated code as something to learn from, not something to blindly accept.
This approach takes almost as long as writing the code yourself, which is exactly the point. The time “saved” by generation gets reinvested in comprehension. You get the efficiency of not having to figure out syntax details while maintaining the cognitive engagement that drives learning. One participant in this group spent 6 minutes composing a single query—rereading instructions, thinking through what they needed to understand, crafting a question that would force explanation. That deliberation is what preserved their learning.
Hybrid Code-Explanation bundled requests for code with requests for explanation in a single query: “Write a timer function and explain why you’re using await in each place.” “Implement error handling and walk me through the error propagation logic.” These developers averaged 65% on the quiz—solid learning outcomes, though slower than pure conceptual inquiry. The act of asking for explanation alongside code created a forcing function for reading and processing the reasoning rather than just copying results.
The pattern beneath all three high-learning approaches is the same: cognitive effort. The learner stays in the driver’s seat, making decisions, wrestling with concepts, forcing themselves to understand. AI provides support—answers conceptual questions, generates code to learn from, explains its reasoning—but the human never fully delegates their thinking to the machine.
The Paradox of Productivity
Here’s where it gets interesting from an organizational perspective. Why didn’t the AI group complete tasks faster despite having access to an assistant that could generate complete, correct solutions on demand?
Some participants spent 11 minutes composing a single query during a 35-minute task. That’s nearly a third of their total time just interacting with the AI—thinking about what to ask, typing the question, reading the response, processing the generated code, verifying it matched their intent. The supposed efficiency gains evaporated in the overhead of interaction.
This matters for how we think about productivity metrics. That junior developer who’s “so productive with Copilot”—are they actually shipping more working code per unit time, or are they just creating the appearance of productivity through faster initial completion that requires more debugging and revision later? Are they building features that other developers can understand and maintain, or are they creating AI-dependent code that only makes sense in the context of the prompts that generated it?
We need to start asking teams to track not just velocity or story points, but also: How much code in your last sprint was generated by AI versus written by hand? Can the person who shipped the feature explain the implementation details in a code review? How many bugs are showing up in AI-heavy code versus human-written code? The answers can be clarifying.
One team had a junior developer shipping features at remarkable speed for about three months. Leadership was thrilled. Then they put them on a different project without AI tooling due to security requirements, and productivity dropped by 60%. The capability everyone thought they’d developed turned out to be largely illusory—the AI had been doing more of the cognitive work than anyone realized.
The error-learning connection is part of this same dynamic. The control group encountered more errors (median 3 versus 1 for the AI group), but those errors were features of the learning process, not bugs. When you write async def delayed_hello(): and then call it as delayed_hello() instead of await delayed_hello(), you get a RuntimeWarning that the coroutine was never awaited. You have to figure out what that means. You have to understand why async functions return coroutines, why those need to be awaited, what happens if you don’t await them. That struggle builds mental models.
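That failure mode is easy to reproduce with nothing but the standard library; the body of delayed_hello below is an illustrative guess, not the study’s actual task:

```python
import asyncio
import gc
import warnings

async def delayed_hello() -> str:
    await asyncio.sleep(0.01)
    return "hello"

# Calling an async function does not run its body; it only builds a
# coroutine object. Dropping that object un-awaited triggers the
# "coroutine ... was never awaited" RuntimeWarning when it is collected.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    delayed_hello()  # coroutine created, then immediately discarded
    gc.collect()     # force collection so the warning fires now

assert any("never awaited" in str(w.message) for w in caught)

# The fix: actually drive the coroutine on an event loop.
result = asyncio.run(delayed_hello())
print(result)  # hello
```

Working through why the broken call produces no greeting, no exception, and only a warning at garbage-collection time is exactly the kind of struggle the control group got and the AI group skipped.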
The AI group avoided most of those errors because the AI generated correct code from the start, or because they pasted error messages to the AI and got corrections without understanding the underlying issue. They saved time. They lost the learning. And that trade-off is backwards for anyone who’s going to be writing or supervising code for years to come.
The Illusion That We’re Learning
The most unsettling part of the research might be what participants said afterward. These are direct quotes from professional developers in the AI-assisted group:
“I got lazy. I didn’t read the Trio library intro as closely as I would have otherwise.”
“I wish I’d paid more attention to the details… there are still a lot of gaps in my understanding.”
“I feel like I got a pretty good overview but there are still a lot of gaps.”
These aren’t people who struggled with the task. They completed it. The code worked. From a pure productivity standpoint, they succeeded. But they knew—once they took the knowledge assessment, once they tried to explain what they’d done—that the ease of completion had masked a lack of real understanding.
This is what cognitive scientists call the “illusion of understanding.” When tasks feel effortless, we assume we’ve mastered them. Our brains conflate ease of execution with depth of knowledge. With AI assistance, the task feels easy because the AI is handling the cognitively demanding parts—the problem decomposition, the pattern recognition, the debugging. We experience smooth completion and interpret that as learning. Then later we discover we can’t perform independently, can’t transfer the knowledge to new contexts, can’t explain our own work.
Research on AI-induced skill decay in other domains shows the same pattern. Medical residents who train with AI diagnostic tools report high confidence in their abilities, but when you remove the AI and test their visual pattern recognition, the gaps appear. They’ve learned to use the tool without developing the underlying clinical judgment the tool is supposed to augment. Radiologists who rely on AI to flag potential issues show reduced ability to catch problems the AI misses—their visual scanning patterns change, they stop looking as carefully at areas the AI doesn’t highlight, and their independent diagnostic skills atrophy.
The organizational implication is that your team might feel more capable than they actually are. That confidence gap doesn’t show up in sprint velocity or feature delivery. It shows up when something breaks in production and the person on-call can’t debug it because they never learned to debug without AI assistance. It shows up when you try to promote someone to senior engineer and realize they can’t mentor others because they don’t really understand the systems they’ve been “building.” It shows up in code reviews where nobody can explain why the code works, just that it does.
I keep thinking about one participant’s feedback: “I wish I’d taken the time to understand the explanations from the AI a bit more!” That wish—expressed after the fact, once the cost of not understanding became clear—is haunting. They had access to all the information they needed. The AI would have explained anything they asked about. But in the moment, with time pressure and the ease of just pasting code, the choice to skip understanding felt reasonable. Later it didn’t.
The Supervision Paradox
We’re building toward a world where AI writes most of the code and humans are responsible for reviewing, validating, and fixing it when it goes wrong. That’s the stated goal of most organizations adopting AI coding tools—let the AI handle routine implementation while humans focus on architecture, design, and quality oversight.
The problem is that effective supervision requires exactly the skills that learning with AI assistance undermines.
To review AI-generated code, you need to be able to read unfamiliar code and spot problems—the code reading skill that showed significant degradation in the study. To debug when AI code fails in production, you need deep understanding of the underlying systems and patterns—the conceptual understanding that suffered. To know when AI is generating subtle bugs or security vulnerabilities, you need the pattern recognition that comes from encountering and fixing those problems yourself—the debugging experience that the AI group missed out on.
The research from Shen and Tamkin focused on software development, but this supervision paradox appears across domains. A 2024 paper by Macnamara and colleagues on AI-induced skill decay puts it directly: the very skills needed to effectively supervise AI are the ones most likely to degrade when people train with AI assistance. In medicine, the visual diagnostic skills required to catch AI false negatives don’t develop if residents always have AI highlighting abnormalities for them. In autonomous vehicles, the systems understanding needed to debug controller failures doesn’t develop if engineers learn control theory through AI-generated examples without wrestling with the underlying mathematics.
The timeline concern is particularly sharp in software. A junior developer starting today might learn primarily through AI assistance, building applications successfully but without developing deep debugging skills or broad pattern recognition. Five years from now, they’re supposed to be the senior engineer reviewing AI-generated code, making architectural decisions, mentoring others. But the foundation they built is shaky—lots of exposure to code, limited understanding of why things work or how to fix them when they break.
I keep coming back to conversations with developers who describe feeling perpetually one step behind, never quite able to keep up without their AI tools. That’s not a productivity tool anymore—it’s a dependency that’s creating fragility rather than capability. And that fragility scales badly when we’re talking about systems that matter: medical devices, financial infrastructure, autonomous vehicles, power grids, anything where the stakes of failure extend beyond inconvenience.
The ethical question gets uncomfortable quickly: Are we creating a generation of developers who can demonstrate productivity in the metrics we track, but who lack the depth to be responsible for the systems we’re asking them to build? And if so, what’s our obligation to change how we’re approaching AI integration in learning contexts?
What We Actually Do About This
I don’t have perfect answers, but I have conviction that doing nothing is the wrong choice. What follows are practical approaches that preserve learning while still getting value from AI tools—patterns that emerged from the research and from watching teams navigate this transition.
For Individual Developers: A Stage-Based Approach
When you’re learning something new—truly new, not just a variation on something familiar—treat the first several encounters as AI-restricted. Ask conceptual questions only. “What is structured concurrency and why does it matter?” “When should I use pattern X versus pattern Y?” Use AI as you’d use documentation or a patient mentor who explains concepts, but write all your code yourself. Encounter errors. Spend time being stuck. Try solutions that don’t work. This feels inefficient. It is, in the short term. It’s also where learning happens.
Yes, you’ll be slower than the person next to you who’s asking ChatGPT to generate entire implementations. That’s fine. Speed at this stage is the wrong optimization. You’re building foundation that will compound over your career—the mental models that let you solve novel problems, the debugging instincts that let you move quickly on familiar ground later, the broad pattern recognition that lets you mentor others. The person who optimized for speed is going to plateau earlier because they skipped building that foundation.
After you’ve got basic competence—you can complete simple tasks independently, you understand the core concepts, you’re not encountering the same errors repeatedly—shift to what the research calls “Generation-Then-Comprehension” mode. Let AI generate code for unfamiliar subtasks, but immediately force yourself to understand it. “Explain line by line what this code does.” “Why did you choose this approach instead of alternatives?” “What would break if I changed X?” If you can’t explain every line, delete it and either ask better questions or write it yourself. The time you “save” through generation gets reinvested in understanding.
This approach takes almost as long as writing code yourself, which is exactly the point. You’re not trying to go faster—you’re trying to learn while getting help with syntax details and boilerplate you haven’t memorized yet. The cognitive work stays with you.
Create regular checkpoints where you complete familiar tasks without AI. Monthly “AI-free days” where you build something you could normally do with AI assistance, just to verify your skills haven’t atrophied. If you feel anxious about working without AI available, treat that as useful information about dependency, not as proof you need the tool. The goal is AI augmentation that makes you more capable independently, not AI dependence that makes you less capable when it’s unavailable.
For Engineering Managers: Rethinking Metrics and Onboarding
Stop optimizing purely for velocity. That junior developer who’s shipping features at remarkable speed—can they explain their code in reviews? Can they debug it when it breaks? Can they complete similar tasks without AI assistance? Track those capabilities explicitly, not just story points and sprint completion.
Start asking for “explanation PRs” where developers have to write documentation explaining not just what their code does, but why they made specific design choices, what alternatives they considered, and what might break under edge cases. The quality of those explanations reveals a lot about whether someone actually understands their work or just successfully prompted an AI. When the explanations are vague or wrong, that’s the signal to dig deeper.
For onboarding, create explicit phases with different AI usage policies. First 3-6 months on a new tech stack: restricted AI code generation, encouraged conceptual questions. The goal is building foundation. After basic competence: graduated access to code generation with requirements to explain anything AI-generated in code reviews. For senior developers: full AI access with the understanding that they’re responsible for catching issues in AI-generated code across the team—their supervision capability needs to stay sharp.
Create space for productive struggle in how you estimate and plan work. If a task would take a senior developer 4 hours with AI, budget 6-7 hours for a junior developer doing it mostly independently. That extra time isn’t waste—it’s investment in skill formation. The organization that optimizes for immediate productivity at the cost of long-term capability is making a bad trade.
Pair programming culture becomes more important, not less, in an AI-heavy environment. Junior developers working with AI assistance should be doing it alongside someone who can observe their interaction patterns and intervene when they’re over-relying. “I see you just pasted that error to ChatGPT—walk me through what you think is causing it first.” That forcing function for articulation and reasoning prevents pure cognitive offloading.
For Organizations: Policy and Culture
Distinguish between productivity contexts and learning contexts in your AI usage policies. Experienced developers working on familiar tech stacks with tight deadlines: full AI assistance makes sense. Junior developers, anyone learning new systems, anyone in safety-critical domains: restricted AI assistance with strong emphasis on understanding over speed.
This isn’t about being paternalistic or limiting access to tools. It’s about being honest that the same tool serves different purposes in different contexts, and the context of skill formation has different requirements than the context of experienced productivity.
Create cultural norms that celebrate debugging stories and learning from struggle. When someone shares in standup that they spent two hours tracking down a subtle race condition, the response shouldn’t be “why didn’t you just ask AI?” The response should be interest in what they learned, because that kind of deep debugging is where expertise develops. When someone ships a feature quickly but can’t explain how it works, that should be a yellow flag, not a success story.
Track knowledge transfer as a first-class metric alongside delivery velocity. How many junior developers became mid-level contributors this quarter? How many people can work independently in critical systems? How often do we see bugs that suggest fundamental misunderstanding of our architecture? These are indicators of whether your AI usage patterns are building capability or eroding it.
The long-term question for organizations is about sustainable technical capability. You’re not just shipping features quarter to quarter—you’re building the team that will maintain, evolve, and debug those systems for years. Short-term productivity gains from AI assistance might be undermining the long-term capability you need. That’s a trade most organizations are making unconsciously because they’re not measuring the right things.
This Isn’t Just About Coding
The pattern repeats across domains, and that’s worth sitting with.
Students using ChatGPT to write essays show better grades on immediate assignments but worse retention on delayed tests, particularly for higher-order thinking tasks. They can produce acceptable work in the moment without building the analytical and synthesis skills that essay writing is supposed to develop. A Stanford study found that students who used ChatGPT or Google to research topics performed better on immediate assessments but showed significantly lower retention after a delay—the short-term performance boost disappeared, leaving them worse off than peers who’d learned through textbooks or traditional research.
Data science consultants using AI for technical analysis produce impressive outputs while using the tool but show weak independent capability when AI is removed. Researchers described this as AI functioning as an “exoskeleton”—you’re stronger while wearing it, but your muscles don’t develop underneath. Remove the exoskeleton and you’re weaker than before you started.
Medical professionals training with AI diagnostic assistance improve their diagnostic accuracy in the moment but may never develop the visual pattern recognition that expert clinicians build through years of practice. When the AI misses something, they’re less likely to catch it because they never developed the independent skill. The assistance that was supposed to accelerate their learning to expert levels might actually prevent them from reaching true expertise.
The common thread across all these domains: AI assistance boosts immediate performance while often undermining the very skills needed to use AI effectively in the long term. It’s not that AI tools are bad—it’s that the way most people use them optimizes for the wrong thing.
We’re at an inflection point with how we integrate these tools into learning and work. The decision we’re making—often unconsciously, often without examining second-order effects—is whether AI augments human capability or replaces it. Right now, for too many people in too many contexts, it’s replacement. And replacement creates dependency, fragility, and a growing gap between perceived capability and actual skill.
The Uncomfortable Truth
There’s a tension here that I don’t think we’ve resolved, and I’m not sure neat resolution is possible.
AI can make people more productive right now. For experienced practitioners working in familiar domains, that productivity is often real and valuable. But the learning that builds capability for the future requires struggle, error, productive discomfort—exactly the things that AI assistance can eliminate. The shortcut that boosts today’s performance might undermine tomorrow’s expertise.
For people early in their careers, or learning new domains, or developing skills they’ll need for decades, that trade-off tilts heavily toward preserving the struggle. Speed now at the cost of foundation is a bad bargain. But that’s not how it feels in the moment. In the moment, it feels like you’re falling behind, like you’re being inefficient, like you should just use the tool that makes the work easier.
I keep thinking about the junior developers I’ve worked with over the years. The ones who struggled through their first big codebase, spending hours debugging pointer errors or wrestling with async race conditions or trying to understand why their database queries were so slow. That struggle was genuinely hard. I remember wanting to spare them from it, to give them better tools and smoother paths.
But that struggle is also what made them capable. The hours spent debugging taught pattern recognition. The failed approaches taught judgment. The time spent stuck taught how to get unstuck—how to decompose problems, how to read documentation, how to ask good questions, how to verify assumptions. If we shortcut that developmental process, are we creating developers who can drive with AI’s hands on the wheel but couldn’t drive the car themselves if they had to?
The research from Shen and Tamkin, and the broader literature on AI-induced skill decay, suggests the answer is yes. We’re creating patterns of work where people can demonstrate impressive productivity with AI assistance while building weaker independent capability than previous generations developed. That shows up clearly in controlled studies. It’s starting to show up in workplace outcomes—developers who plateau earlier, teams that struggle when AI tools aren’t available, increasing difficulty finding people who can deeply debug complex systems.
The stakes matter differently across domains. If you’re using AI to help write marketing copy and your understanding stays shallow, the consequences are limited. If you’re using AI to learn how to develop medical devices or autonomous vehicle software or financial trading systems, and your understanding stays shallow, people can get hurt. We need different approaches to AI integration based on stakes, and right now I don’t see that differentiation happening consistently.
What I’m wrestling with: I want people to have access to powerful tools. I want junior developers to feel capable and productive. I want to use AI to eliminate drudgery and accelerate work that should be faster. But I also want people to develop the deep understanding that makes them truly expert over time, that lets them handle novel situations and supervise AI effectively and mentor the next generation. Right now those goals are in tension.
The research tells us clearly what we’re risking. The path forward—how to get the benefits of AI assistance without the skill erosion—that’s still being figured out. The high-learning interaction patterns from the study (conceptual inquiry, generation-then-comprehension, hybrid code-explanation) point toward possibilities: ways of using AI that preserve cognitive engagement and force understanding rather than enabling pure delegation. But translating those patterns into widespread practice, into organizational norms, into how we actually work day to day… that’s hard.
It requires being intentional when the default is to optimize for the path of least resistance. It requires measuring different things when the current metrics all point toward speed. It requires accepting that learning should sometimes feel hard when we have tools that can make it feel easy. It requires changing incentives at individual, team, and organizational levels.
I don’t have all the answers. What I have is a growing conviction that we can’t let this play out unconsciously. The way we integrate AI into how people learn and work will shape individual careers and collective technical capability for decades. Getting it wrong creates dependency, fragility, and a generation of practitioners who look productive in the metrics we track but can’t handle what happens when the metrics don’t capture what matters.
Getting it right requires thinking carefully about when struggle is productive, when assistance is truly helpful versus just convenient, and what we’re actually optimizing for. The research gives us clear data on what doesn’t work. Building what does work—that’s the hard part, and it needs input from everyone figuring this out in real contexts.
The goal isn’t to avoid AI or return to some imagined past where learning was always pure struggle. The goal is to use these tools in ways that make us more capable, not just more productive in the short term. Those might look the same from the outside—code shipped, features delivered, tasks completed. But in the long run, the difference between building capability and building dependency is everything. And right now, that difference is worth sitting with, getting uncomfortable about, and choosing consciously rather than drifting into by default.
