A note before you begin. This essay is written for students. But if you're an educator or a parent who picked it up first, that's not an accident. The argument here is one you already sense: that something important is at stake in how young people use AI, that the stakes go deeper than cheating policies and plagiarism detectors, and that the students who figure this out early will be in a fundamentally different position from those who don't.

Hand it to someone who's ready to hear it. Thank you.

The Game You're Playing

Here's something almost nobody will say to you directly: school is a game.

Not in the dismissive sense, not “it doesn't matter” or “just survive it.” In the literal sense. It has rules. It has scoring. It has winners and losers. It has strategies that work reliably and strategies that don't. And like most games worth understanding, the people who win it are almost always the ones who know they're playing it, while the people who lose often don't know a game is in progress at all. They think it's life. They think the scores reflect them.

I've spent decades in and around education. I've interviewed hundreds of teachers, researchers, and reformers. I've talked with thousands of students and watched the institution from more angles than I can easily count. Certain patterns become impossible to miss after a while. One of the clearest is this: the students who win academically, the ones accumulating the grades, navigating the system, landing in the next tier and the tier after that, understand at some level that they're playing a game. They may not be able to say so in those terms. But they've internalized the rules: what teachers want to see, how to structure the essay that satisfies the rubric, which assignments carry weight and which can be minimized, how to appear engaged without necessarily being engaged, and how to signal what the institution is looking for. They've learned the game, and they play it well.

The students who are not winning? They often believe the scores are a direct measurement of who they are. That the grades reflect their intelligence, their potential, their value as people. When they fail the game, they don't think: I've failed the game. They think: something must be wrong with me. I must be defective. I have been weighed and measured and found wanting.

That's not what's happening. What's happening is that they don't know there's a game.

This isn't a personal failing. The game is designed, not by conspiracy but by the accumulated logic of institutions, to look like something else entirely. It presents itself as education: the development of your mind, the honest measurement of your capability, the fair rewarding of your effort and intelligence. And there are genuine elements of truth in that presentation. Some things that happen in school matter. Some teachers are extraordinary. Some classes, some books, some conversations reach students in ways that change them permanently. I don't want to throw any of that away, and I'm not going to pretend the institution is simply a lie. But there's a difference between acknowledging what's real in the system and pretending the system is what it says it is. That pretending is expensive, for you personally, and now more than ever, in the specific context of AI.

* * *

The institution is designed, at its structural core, to sort and credential you.

To be precise: to assign you a position in a hierarchy and provide documentation for it. The grades, the GPA, the diploma: these are signals sent forward to future gatekeepers, telling them where you ranked. The actual learning you do along the way is, from the institution's perspective, secondary. What the system measures is compliance with its own rules. What it produces is a credential. What it's optimized for, at the level of its design, is sorting.

This doesn't mean schooling is worthless. Credentials open doors, and in a society where gatekeepers use them to make real decisions about your life, understanding their value and pursuing them strategically is entirely rational. But it does mean that schooling and learning are not the same thing. And when we treat them as if they are, when we assume that doing well in school means becoming genuinely capable, and that doing poorly means the reverse, we've made a mistake that the institution is entirely happy for us to make.

The adults around you are mostly not lying to you when they say that school matters and that your performance has consequences. They're telling you what they believe, and in many practical respects, they're right. What they may not be telling you, what they may not be able to see clearly from inside the system, is the full picture of what the system is doing and what it can't do for you.

Watch how schools respond to AI over the next few years, and you'll see the institutional logic play out in real time. The policies will multiply. The checklists will appear. There will be approved uses and prohibited uses, disclosure requirements and academic integrity addenda, rubric adjustments, and AI-detection protocols. Some of this is understandable: institutions need rules in order to function, and a technology that can produce a passable essay in thirty seconds is a genuine disruption to the credentialing system. But notice what the response will not include: any serious reckoning with whether students are becoming more capable or less, any framework for helping students develop their own judgment about how to use AI wisely, any honest examination of whether the assignments being protected from AI were producing genuine learning in the first place. The rules will be about protecting the game, not about developing the player. That's not a failure of individual administrators or teachers; it's the predictable output of an institution whose dominant logic is compliance and credentialing. The Game of School will absorb AI the same way it has absorbed every previous technology: by building a fence around it and calling the fence a policy.

* * *

To see that clearly, you need a framework. I've used this one for years because it does more real work than anything else I've found.

There are four different things we routinely call “learning,” and they are genuinely different from each other. Collapsing them causes enormous confusion: about AI, about education, about your own relationship to school. Separating them produces immediate clarity.

The Four Levels of Learning

Schooling is the lowest level, the institutional layer. Its primary output is a credential, a signal that you've passed this level and are eligible for the next. Schooling rewards conformity over curiosity. It measures compliance with institutional requirements. It can be navigated strategically or poorly, but it can't be cheated in the deepest sense: you either understand its rules and play by them, or you don't. Schooling is not worthless. But schooling and learning are not the same thing, and knowing which one you're doing at any given moment matters more than almost anything I can tell you.

Training is the purposeful acquisition of specific skills for specific ends. You learn to write code that actually runs. You learn to perform a medical procedure. You learn to read a financial statement. Training is practical, relatively unambiguous; you either acquire the capability, or you don't, and the test is whether you can apply it in the real world. Training is largely uncontroversial, and AI has made it faster and more accessible than it has ever been. That part of the AI story is mostly good news.

Education, in the classical sense, comes from the Latin educere, to lead out, to draw forth from within. Education describes what happens when a mentor, a challenging idea, an extraordinary teacher, a book you weren't ready for, or a conversation that unsettled something, helps you think at a level you couldn't reach alone. Not just knowing more things, but developing judgment. Not just accumulating facts, but learning to interrogate them, connect them, question them, and live with uncertainty about them. Education in this sense is relatively rare in formal schooling, though it's not absent. When it happens, it tends to happen in the margins, in one remarkable class, in a relationship with one particular teacher, in a project that somehow captured your genuine interest.

Self-directed learning is where this is all headed. It's the destination that genuine education is trying to build toward: a person who has learned how to learn. Someone with actual curiosity, not performed curiosity, not the interest you fake to satisfy the requirement, but the kind that wakes you up at two in the morning because a question got under your skin. Someone who sets their own problems, pursues their own answers, evaluates their own progress, and doesn't need external scoring to know whether they're growing. Self-directed learning is what makes you capable across a lifetime of changing circumstances, not just in the specific context where you were trained or credentialed.

These four levels exist in a hierarchy. School operates primarily at the first. Its institutional structure, its incentives, its measurement systems, and its daily rhythms are all organized around schooling: sorting, compliance, and credentialing. The system uses the language of the upper levels constantly. Teachers say they're developing lifelong learners, fostering critical thinking, and building independent minds. Many of them genuinely mean it. But the structural logic of the institution, what it actually rewards, measures, and reinforces day to day, operates at the bottom of the hierarchy.

This is not a reason to check out. It's the thing you need to see before you can make a real decision about your education.

* * *

The reason it's so hard to see is something called the Noble Lie. 

Plato introduced the concept: a functional fiction told to the citizens of a society, a shared story to make life smoother. He has Socrates imagine a story that will persuade citizens that they are all born from the earth but of different metals (gold, silver, or iron) and, because of that, are only suited to certain roles in the social order. The Noble Lie of modern schooling is not complicated: academic achievement is a fair and honest measure of your intelligence, your capability, and your future potential. Work hard, perform well, and the rewards follow. The scores reflect you.

Some version of this is almost certainly what you've been told your entire life. And here's what makes it so durable: the people who told you believed it. Your teachers, your parents, most of the people who designed and sustain this system, they are not lying to you maliciously. They are passing on a story they've absorbed, a story that sometimes really is true, and a story that the institution depends on to maintain its legitimacy. The most powerful fictions are the ones told by people who believe them. They're much harder to see through because the teller's sincerity is real, even when the story is partial.

The Noble Lie obscures something important: the system doesn't only sort by intelligence or effort. It sorts by prior access. Students whose families have books in the house, a quiet space to study, and parents who went through the system themselves and can explain how it works, students who arrive at school already knowing something of its implicit culture, have a structural advantage that has nothing to do with their native capability. The system doesn't adjust for that. It scores the output and calls the score fair. Then, when a student doesn't produce the expected output, the story tells them to look inward.

I'm not asking you to be bitter about this. Bitterness is a response to being wronged, and the system didn't set out to wrong you. I'm asking you to see it. Seeing it is the beginning of having a real relationship with your own education, one where you decide what matters and why, rather than outsourcing that to an institution that has its own reasons for its scoring system, reasons that may have very little to do with your actual development as a person.

* * *

Now we're at the place I want to pause.

Knowing the game is a game doesn't mean opting out of it. That's a romantically tempting conclusion and maybe a bad one for most people. The credentials are real. The doors they open are real. The cost of ignoring the game entirely is often paid in lost options, and lost options have a way of narrowing your future choices in ways you can't fully see in advance.

What it means is that you now have a choice you didn't have before.

You can play the game strategically, learn its rules, meet its requirements, collect the credentials that open the doors you want, and simultaneously do something the game can neither give you nor take away. You can be a student who satisfies the institution's requirements while also becoming genuinely educated in the full sense of that word: someone developing real judgment, real curiosity, real capabilities that go far deeper than any credential and will outlast any institutional context.

These two things are not opposites. The students who thrive in the long run, not just during school, not just in the early years of work when the game's rules are still familiar, but across a lifetime of changing circumstances and unexpected challenges, are almost always the ones who understood, consciously or intuitively, that the game was a game. They played it well enough to keep their options open. And they didn't stop there.

What the deeper game requires is something the institution cannot supply. It requires an internal compass, a sense of direction that doesn't depend on external scoring to tell you whether you're genuinely growing. Not grades, not approval, not the satisfaction of hitting a rubric. Something more durable, more personal, and entirely yours.

That compass is what this essay is about. But before we get to it, we need to understand one more piece of the picture: why so many capable people stay trapped in the game's logic far longer than they should. Why do good students keep playing by rules that don't serve them, even when they could see the game for what it is if they looked?

The answer has to do with how institutions teach obedience, not by commanding it but by rewarding it in ways that are very hard to notice until you've stepped back far enough to see the pattern.

That's where we go next.

Why You Obey

There's a course no school puts in its catalog. It has no syllabus, no official learning objectives, and no unit tests. But it runs continuously alongside every other subject from the first day of kindergarten to the last day of senior year, and most students complete it with far higher marks than anything on their transcript. The course is: how to function inside an institution that requires your compliance.

The lessons are practical, and they work. Sit when sitting is expected. Speak when called on, not before. Produce what the assignment asks for, in the format the assignment specifies, by the deadline the assignment sets. Signal engagement, whether or not you feel it. Don't ask questions that slow the class down. Don't finish so fast that others feel inadequate. Don't fall so far behind that you become a problem. Locate the center and stay near it. The center is safe.

No one teaches these lessons explicitly. They don't have to. They're embedded in the reward structure. What gets praised, what gets ignored, what gets punished: these signals are constant, cumulative, and exquisitely clear to anyone paying attention. Students pay attention. They're very good at it. Long before they can articulate what they've learned, they've already absorbed it: the institution has preferences, and your life inside it is easier when you match them.

This is what theorists call the hidden curriculum. Not the official curriculum, not algebra or history or the water cycle, but the implicit curriculum running underneath it, teaching students something the institution needs them to know but would never say out loud: how to be compliant. How to be manageable. How to subordinate your own timing, your own questions, your own judgment, your own pace, to the requirements of a system that cannot accommodate the full range of who you actually are.

I want to be careful here, because this is the point where it's easy to veer into simple resentment toward teachers, schools, and the adults in your life. That's not what I'm after. Most of the people who run this system, who work inside it day after day, are not trying to produce compliant people. They genuinely want to help students grow. The hidden curriculum isn't a conspiracy. It's an emergent property, something no one designed but that inevitably arises when you put enough people, requirements, and schedules into the same building. Any institution large enough to require coordination produces pressure toward conformity. It's not malicious. It's structural. The institution needs you to be predictable to function, so, without anyone deciding to do so, it quietly trains you to be predictable.

The problem is not that the institution is evil. The problem is what the training does to you.

* * *

Think about what you've learned to optimize for.

Not what you've been told to care about, but what the actual reward structure, day in and day out, has shaped you to want. Grades. Approval. The absence of criticism. The relief of meeting a deadline. The small satisfaction of being called on and getting it right. The anxiety that comes from not knowing whether your answer is going to land.

That anxiety is worth sitting with for a moment. Where does it come from?

It comes from a system that has, for most of your life, attached your sense of adequacy to external evaluation. You produced something, an essay, a test answer, a presentation, and then you waited for someone else to tell you what it was worth. The score arrived, and you absorbed it. High scores felt like confirmation of your value. Low scores felt like evidence of your inadequacy. After thousands of repetitions of this cycle, the pattern runs deep. Your self-esteem has become conditional, provisional on continued external approval, in ways that most students don't fully notice because it happened so gradually, from such an early age, that it feels like just how things are.

It's not how things are. It's how things were arranged.

What you were born with, what every young child has in abundance before the institution gets to work, is intrinsic motivation. Curiosity that doesn't need a grade to justify it. Effort that doesn't require a reward to sustain it. A drive to understand things, to master things, to figure out how the world works, that is entirely self-generated. Watch a three-year-old encounter something unfamiliar. The investigation is relentless and entirely unprompted. Nobody is giving them a score. Nobody has assigned them the task. They are learning because learning, in the natural human state, feels good. It is, in the deepest sense, what minds are for.

The institution didn't set out to extinguish this. But extinguishing it is a predictable side effect of replacing intrinsic motivation with external evaluation over a period of years. When the score is always waiting, the question shifts from “what do I actually want to understand?” to “what do I need to produce to get the score?” These are different questions. They produce different orientations. The first produces genuine learning. The second produces strategic performance. Both can coexist, but in a system that rewards performance and has no reliable way to measure genuine understanding, performance tends to crowd learning out.

* * *

Here is where it gets specific to you, in this moment.

The habits of mind the system has trained, wait for the instructions, produce what's asked for, check whether it's right with someone who knows, are exactly the habits that make AI the most convenient thing that has ever happened to students who are playing the game of school.

Think about what AI offers if you're optimizing for output rather than capability: unlimited patience with your questions, no judgment, instant responses, and an extraordinary ability to produce the kind of work that satisfies institutional requirements. Essays that meet rubrics. Summaries that hit the key points. Explanations that cover the material. It can do these things faster than you can, at a quality level that's often good enough to clear the bar the institution has set, without any of the friction, difficulty, confusion, or productive struggle that learning actually requires.

If you've been trained to optimize for the output, AI is an almost irresistible acceleration. Why wouldn't you use it? The game rewards the essay, not the thinking that produced the essay. The system can't see the difference. Use the tool, get the output, pass the level.

The institution, for its part, largely cannot detect this. It can detect cheating, the wholesale copying of someone else's prior work, because it can run a comparison. What it cannot detect is whether the work you submitted reflects your genuine thinking or whether it substitutes for it. A well-prompted AI can produce a competent essay on almost any topic that assigned essays touch on. The rubric measures the essay. Nobody is measuring what happened in your mind while the essay was being produced, or whether anything happened at all. The system was designed around a world where the output and the learning were hard to separate. They're no longer hard to separate. And the institution has not caught up with that.

I'm not telling you this to argue that using AI on assignments is fine. I'm telling you because the logic that makes it feel fine is the logic the institution trained into you, and you need to see that logic before you can evaluate it clearly. The hidden curriculum taught you to optimize for outputs. AI is an output machine. Of course they fit together. The question is whether fitting together serves you.

* * *

The honest answer is: it depends entirely on what you're actually trying to accomplish.

If what you're trying to accomplish is to collect credentials while doing as little genuine cognitive work as possible, if the game is all you're playing, then AI will serve that goal extraordinarily well in the short term. I'm not going to pretend otherwise. It will also be quietly, progressively catastrophic for the thing the game is supposed to be preparing you for: a life in which the credentials eventually stop mattering, and all that's left is what you're actually capable of.

The compliance training provided by the institution has a lifespan. It serves you while you're inside the institution. It is well-designed for exactly that context: a world where external authority is constant, where someone always tells you what to do and evaluates whether you did it, where the right answer is findable if you just work the system correctly.

That world ends. Maybe not as soon as you'd like; institutions extend their logic into the workplace and keep you in familiar patterns for a while. But eventually, the scaffolding comes down. Eventually, the question becomes not “did you satisfy the requirement?” but “can you actually do this?” And in that moment, the gap between what the credential said and what you actually developed has consequences.

I've watched this unfold in too many conversations with too many people to think it's rare. Smart people who performed excellently in school, who collected all the right credentials, who optimized the game with genuine skill, and who then found themselves, somewhere in their late twenties or thirties, uncertain of their own judgment, dependent on external direction, vaguely aware that they'd spent a lot of years learning how to satisfy other people's requirements and not very much time learning to trust their own minds. The compliance worked. That's exactly the problem.

* * *

The compliance was trained. That means it can be noticed, examined, and, if you choose, set aside.

Not recklessly. Not by abandoning the institution entirely in a romantic gesture that costs you options you'll want later. But consciously. With clear eyes about what the game rewards and what it misses. With a real question underneath the institutional requirements: not just “what do I need to produce?” but “what am I actually becoming?”

That second question is the one the institution has no mechanism for. It can't score it, can't enforce it, can't design a rubric for it. It's yours entirely, which is exactly why it matters more than anything the institution can measure.

The next question is: what do you actually want to become? Not what the system wants to produce, not what the credential requires, not what will look good in whatever comes next. What you, specifically, at this specific point in your life, are trying to develop in yourself. That question requires a framework for thinking about learning that goes a lot deeper than grades. It requires knowing what conditions make real growth possible, and how to create them, including in your relationship with AI.

That's what comes next.

What Actually Matters

Let me ask you something nobody in school has probably asked you directly.

Think of a time when you actually learned something. Not performed something, not memorized something long enough to pass a test and then let it go, but genuinely learned it. Something that stuck, something that changed how you saw or understood or could do something in the world. It doesn't have to be academic. It could be a skill, an insight, a piece of understanding you arrived at through experience or obsession, or something someone took the time to help you see that you couldn't see alone.

Got one? Now ask yourself: what made that possible?

I've put this question to educators for years in workshops and webinars. Different audiences, different backgrounds, different countries. The list that comes back is remarkably consistent. Someone believed in me. Someone challenged me to do something I didn't think I could do. I was genuinely curious about it; I wanted to understand it for my own reasons. I had room to fail, to try again, to figure it out at my own pace. Someone pushed back on what I thought I knew. The conditions that produced real learning, recalled honestly from personal experience, almost never include a rubric, a grade, a standardized test, or a fixed deadline. They almost always include relationship, challenge, genuine interest, and enough safety to actually try something difficult.

This is not a coincidence. These conditions, the things that reliably produce genuine learning when they're present and reliably prevent it when they're absent, are as close as we get to laws in education. They're not mysterious. They're not unique to gifted students or exceptional teachers. They're reproducible. And they have almost nothing to do with the institutional machinery that surrounds them.

* * *

Call them the Conditions of Learning. The list isn't complicated, but each item on it is doing real work.

Curiosity. Not performed interest, not strategic engagement with material because it will be on the test, but a genuine wanting to know. Curiosity is what drives learning after the class ends, after the grade is posted, after the requirement disappears. It's also what makes the difficult parts of learning bearable; when you actually want to understand something, the friction of figuring it out feels like progress rather than punishment.

Productive struggle. This one is counterintuitive, because school has mostly trained you to experience struggle as a sign that something is wrong. But struggle, the right kind, at the right level, on something that actually matters to you, is not a sign that you're failing. It's the mechanism by which capability is built. Your brain does not develop through ease. It develops through encountering problems it cannot immediately solve and working through them anyway. Remove the struggle, and you don't make learning more efficient. You make it impossible.

Reflection. The experience of doing something is not the same as learning from it. Reflection is the process that converts experience into understanding, the step where you ask what actually happened, what you now see that you didn't see before, and what you'd do differently. Without it, even rich and challenging experiences leave surprisingly little trace.

Autonomy. The sense that you are directing your own learning, making genuine choices, pursuing something because you chose to pursue it. This is one of the most powerful predictors of whether learning will stick and go deep. A student who is learning something because they want to is in a fundamentally different position than one who is learning it because they have to. The material might be identical. The outcomes rarely are.

Safety to fail. Real learning requires attempts that don't succeed. It requires guesses that turn out to be wrong, approaches that don't work, drafts that need to be discarded. A context where failure is genuinely costly, where a wrong answer has immediate social or institutional consequences, produces risk aversion, and risk aversion produces the minimum viable attempt rather than the genuine one. You don't take real intellectual risks when the cost of being wrong is too high.

Genuine feedback. Not a grade; a grade tells you how you ranked. Feedback tells you something specific about your thinking, your work, your understanding, in a way you can actually use to improve. It requires another mind engaged with yours. It is, when it happens, one of the most powerful accelerants of learning.

These conditions are the soil. Learning is the harvest. You can try to grow without the soil, and sometimes something will take root through sheer persistence, but not reliably, not deeply, not in ways that last. When these conditions are present together, deep learning becomes nearly inevitable. When they're absent, the most sophisticated instruction in the world produces very little.

* * *

Here is what the institution does with this.

Schooling, at its structural level, is largely indifferent to the Conditions of Learning. Not hostile, indifferent. The system isn't organized around curiosity, or productive struggle, or autonomy. It's organized around coverage, compliance, and assessment. It has to be: there are twenty-five students in the room, a curriculum to get through, a standardized test in spring, and an institution that needs to document outcomes. In that context, the conditions that produce genuine learning are often inconvenient. Curiosity takes you off the lesson plan. Productive struggle is slow. Autonomy is hard to assess. Maintaining safety to fail is difficult when grades are the primary feedback mechanism.

Not everything worthwhile can be measured, and not everything that can be measured is worthwhile. But when we can't measure what is most valuable, it is human nature to give the most value to whatever is measurable.

So the system substitutes. It substitutes coverage for curiosity. It substitutes completion for struggle. It substitutes grades for genuine feedback. And it moves everyone through at the same pace regardless of where any individual student actually is in their understanding, because the institution's logic requires it.

What this means for you, practically, is that the Conditions of Learning are mostly something you have to create for yourself. Some teachers will create them for you; I've met extraordinary ones who do it almost instinctively, who seem uniquely able to generate genuine curiosity in their students. But you cannot count on them. You cannot wait for the institution to hand you the conditions it is structurally unable to reliably provide. If you want to actually learn, not perform learning, not credential learning, but genuinely develop yourself, you need to understand what those conditions are and start taking some responsibility for creating them in your own life.

This is a bigger shift than it sounds. The institution has trained you to be a consumer of learning: show up, receive the material, produce the required output, and collect the score. What I'm describing is becoming a producer of your own learning: understanding what you need to grow, seeking it out, and creating it where it doesn't exist. That's a different relationship with education entirely. It's also, as it turns out, the one that actually works over a lifetime.

* * *

Now bring AI into this, and the stakes of everything I've just said get very high very fast.

AI is the most responsive, patient, and knowledgeable tool that has ever been available to a curious person. If you have a genuine question, not an assignment to complete but something you actually want to understand, and you bring it to a good AI interaction, you can go as deep into that question as your curiosity will carry you. You can ask follow-up questions. You can push back on answers that don't satisfy you. You can ask for a different explanation, a simpler one, a more technical one, one that approaches the question from a completely different angle. The barriers that used to limit self-directed learning, geography, cost, access to experts, and library hours, have largely collapsed. For a person who understands the Conditions of Learning and is actively trying to create them, AI is a historic breakthrough. I mean that without exaggeration.

But AI is also the most frictionless shortcut to bypassing those same conditions that has ever existed.

Ask it to write the essay, and you've eliminated productive struggle. Ask it to summarize the chapter, and you've eliminated the slow reading that builds genuine understanding. Ask it to generate the argument, and you've eliminated the reflection required to develop your own. Ask it to answer the question before you've had a chance to sit with the question, and you've eliminated the curiosity, the wondering, that drives real inquiry. The machine will do all of this happily, immediately, without any indication that something has gone wrong. It has no stake in your development. It has no way of knowing whether its output is serving your growth or substituting for it. It will give you exactly what you ask for, which is precisely the problem when what you're asking for is an escape from the conditions that actually make you smarter.

There's a term for what happens at the far end of this pattern: cognitive surrender. Not just the atrophy of a skill, the gradual weakening of something you stop using, but something deeper and harder to recover from. Cognitive surrender is what happens when you stop wanting to think for yourself. When the question “why struggle with this when the machine can do it?” stops feeling like a temptation and starts feeling like common sense. When the delegation of your thinking becomes so complete and so habitual that the desire to engage your own mind, the curiosity, the productive struggle, the willingness to sit with a hard question, has quietly left the building.

It presents itself as efficiency. It is, in practice, the slow erosion of the very thing your education is supposed to be building.

* * *

The Conditions of Learning give you a way to evaluate any AI interaction in real time, without needing a policy, a rule, or someone looking over your shoulder.

The question is simple: Does this use of AI create or undermine the conditions that produce genuine learning in me?

Is it amplifying my curiosity or replacing it? Is it helping me work through the difficulty, or eliminating it entirely? Is it giving me something to push back against, to test my thinking against, to refine my understanding against, or is it just handing me an answer I'll accept and move on from? Is it helping me develop a capability I'll actually have afterward, or is it producing an output I'll submit and forget?

These questions don't have the same answer every time. AI used as a thinking partner, something to interrogate, argue with, explore with, and use as a first draft of your own thinking rather than a replacement for it, can genuinely enhance the conditions for your learning. AI used as an answer machine, a shortcut past the friction, a way to satisfy the requirement with the minimum expenditure of your own mind, systematically destroys them.

The same tool. Completely different outcomes. The difference is not the technology. It's what you're trying to accomplish when you reach for it.

That question, what am I actually trying to accomplish, is the one we need to get serious about now. Because answering it honestly requires knowing something about yourself that school has largely not helped you develop: a genuine sense of direction. A real understanding of what you're trying to become, not just what you're trying to get.

That's the compass. And it's what the next section is about.

The AI Choice

Every powerful tool in human history has carried the same double nature. It extends what you can do, and it atrophies what it does for you, if you let it.

Socrates worried about writing. This is not a joke or a piece of historical trivia; he argued in earnest, in Plato's Phaedrus, that the written word would weaken human memory. That people would store knowledge outside themselves and lose the internal capacity to hold and reason with it. He was not entirely wrong. Writing did change how humans store and retrieve knowledge. But the net effect was not diminishment; it was an explosion of human capability, because people learned to use writing as a tool that extended their thinking rather than replaced it.

The calculator produced the same anxiety in a later generation. If students can just punch numbers into a machine, will they ever learn to reason mathematically? Some didn't. The students who used calculators as a substitute for understanding arithmetic, rather than as a tool in the hands of someone who already understood it, ended up with neither the skill nor the understanding. But the students who learned the mathematics and then used calculators to free themselves from tedious arithmetic so they could do more mathematics, they came out ahead. The tool was the same. The outcomes diverged entirely based on what the person brought to it.

This pattern is old enough to be something like a law. Every cognitive tool creates leverage and atrophy risk simultaneously. The leverage is real. The atrophy risk is real. And the outcome is not determined by the tool; it's determined by the person using it, specifically whether that person is using it to extend their capability or replace it.

AI is the most powerful instantiation of this pattern in human history. The leverage it offers is extraordinary, genuinely, historically unprecedented. A curious person with access to a good AI interaction can now go deeper into almost any subject than most people could have managed a decade ago, without a university library, without expensive tutors, without institutional gatekeeping of any kind. That part of the story is real. I don't want to bury it under warnings.

But the risk of atrophy is equally extraordinary. And what makes this particular moment different from the calculator or the search engine is that AI doesn't just perform a narrow task, arithmetic or retrieval; it performs the thinking itself. It generates arguments, makes judgments, synthesizes information, and produces the kind of output that used to require a mind actively engaged with a problem. Which means the atrophy risk isn't limited to a specific skill. It extends to the whole enterprise of thinking.

* * *

Let me give you two concepts that are worth keeping for the rest of your life, because the difference between them is the difference between AI making you more capable and AI making you less.

The first is cognitive offloading. This is what a mathematician does when she uses a calculator for routine arithmetic. She understands the mathematics. She could do the calculation by hand if she had to. She's made a conscious decision to delegate a specific, mechanical task to a tool so she can spend her mental energy on the parts of the problem the calculator can't touch. The capability is intact. The judgment about what to delegate is intact. The tool is serving a capable person who chose to use it.

The second is cognitive surrender. This is what happens when a student never develops the underlying capability because the tool has always been there. Not a delegation, but an abdication. Not a choice made by a capable person, but the permanent absence of a capability that was never built in the first place, or was built and then so consistently bypassed that it quietly stopped working. The student can't do the mathematics. They couldn't do it before the calculator, and they can't do it now. The tool didn't extend their capability. It substituted for it.

The distinction sounds clean when you lay it out this way. In practice, it's harder to see, because cognitive surrender doesn't arrive all at once, and it doesn't announce itself. It comes gradually, interaction by interaction, each one feeling like a perfectly reasonable decision. Why formulate this argument myself when the AI can produce a better-organized one in ten seconds? Why sit with this confusion when I can just ask and get clarity immediately? Why develop my own interpretation when I can read the AI's and decide whether I agree? Each of these feels, in the moment, like efficiency. Sensible. Modern. Like using the tools available to you rather than performing unnecessary difficulty.

What actually happens, over time, is that the expectation of effort shifts. The experience of productive struggle, which used to feel normal, even satisfying when you broke through, starts to feel unnecessary. Then it starts to feel annoying. Then it stops occurring to you that it was ever available. You are not, at that point, a person who has delegated a task to a tool. You are a person who has stopped wanting to think for yourself. That is a different condition, and it is much harder to recover from.

* * *

Three things in the current moment make cognitive surrender especially easy to slide into, and you should know what they are because none of them are going to warn you.

The first is that the companies building these tools have no incentive to prevent it. The business model of every major AI platform runs on engagement and dependency. A user who delegates more to the tool is a more engaged user. A user who becomes dependent on the tool is a retained user. There is no commercial pressure, none whatsoever, for an AI company to help you become less reliant on its product. That's not malice. It's the ordinary operation of incentive structures. The tool is designed to be used more, not less. It is designed to feel indispensable. It will succeed at this unless you are deliberately working against it.

The second is that the system around you cannot detect surrender; it can only detect cheating. A school can run your essay through a detection tool and find evidence that text was copied. What it cannot find, what it has no mechanism for finding, is whether the work you submitted reflects genuine engagement of your own mind or a sophisticated bypass of it. A well-prompted AI can produce an essay that satisfies most rubrics on most assigned topics. The grade goes into the system. No flag is raised. You've beaten the detection. You've also quietly given away something the system was supposed to be building in you, and the system can't see it because it never had a good way to measure what was most important in the first place. Recall what I said in the last section: not everything worthwhile can be measured, and the system has optimized for what it can measure. Your genuine cognitive development is not in that category.

The third is that surrender is self-reinforcing in a genuinely insidious way. Each act of delegation makes the next one easier. Not because the skill atrophies overnight; it doesn't. It's because the expectation shifts. The student who asks AI to write their first essay finds the second one harder to write themselves, not because they've lost the technical ability, but because the experience of sitting with a blank page and generating something from their own mind now feels like unnecessary friction. The third essay is harder still. By the tenth, the question “why would I do this myself?” feels like common sense rather than a warning sign. The trajectory of cognitive surrender is not from competence to incompetence. It is from agency to passivity. From someone who thinks to someone who receives. And it happens quietly enough that many people don't notice until the conditions of the game have changed, until the scaffolding comes down and no AI can substitute for the judgment they didn't develop.

* * *

None of this means don't use AI. I want to be as clear about that as I can, because this kind of argument is often read as technophobia, and it isn't. It's the opposite. It's an argument for using AI with enough understanding of what's at stake that you can actually capture the leverage rather than suffer atrophy.

The question that cuts through all the noise, for any specific AI interaction at any moment, is this: Does this use of AI serve the capable, self-directed adult I am becoming?

Not: Is this allowed? Not: Will I get caught? Not: Is this technically cheating? Those are the wrong questions, and they're the questions the institution trained you to ask because the institution's logic is about rules and compliance. The right question is forward-looking and personal. It requires you to have some sense of who you're trying to become, and to evaluate this specific interaction against that standard.

I've called this the Amish Test, after something the writer Kevin Kelly documented about Amish communities. The Amish are not categorically anti-technology; that's a common misunderstanding. What they do is evaluate technology deliberately, asking whether a given tool serves their values and their long-term vision of how they want to live. They adopt what serves those goals. They decline what doesn't. They are, in this sense, more intentional about technology than almost anyone in the modern world, not because they're afraid of it, but because they've decided that the adoption of any tool is a choice that should be made consciously rather than by default.

The question they ask, applied to your situation: Does this use of AI, right now, serve the person I am trying to become? Not AI in the abstract; this specific use, in this specific moment. Using AI to explore a question you're genuinely curious about, to push your thinking further than you could push it alone, to get a different angle on a problem you've already engaged with; that use serves the capable, self-directed adult you're becoming. It's offloading, not surrender. Using AI to generate the essay you don't want to write on the topic you don't care about so you can move on to something else; that also serves a goal, but it's not the goal of your development. Know the difference. Make the choice explicitly, with your eyes open, rather than letting default decide.

* * *

Here's what that looks like in practice, across the spectrum of how AI actually gets used.

At one end, AI as a thinking partner. You've read something, struggled with it, formed a preliminary view. You bring it to an AI interaction not to be told what to think but to stress-test what you've already thought. You push back. You ask for the counterargument. You ask why the position you've formed might be wrong. You use the exchange to sharpen your own thinking, and what you walk away with is yours, a more developed version of your own reasoning, not a replacement for it. This is offloading at its most productive. The underlying capability is not just intact, it's stronger.

Further along the spectrum, AI as explainer. You're confused about something, genuinely stuck, and you ask for clarification. This is legitimate and often valuable; it's what a good teacher does, and access to a patient, knowledgeable explainer at any hour is one of the real gifts of this moment. The risk here is subtle but real: if you're always resolving the confusion before you've sat with it long enough to develop your own relationship to the question, you're short-circuiting something the confusion was producing. Confusion is not just an obstacle. It's often the signal that your brain is working on something. Eliminating it too quickly can leave the work undone.

Further still, AI as first draft. You use it to generate a starting point, then engage genuinely with what it produced, rewriting, pushing back, improving it against your own judgment of what should be there. This is a zone of genuine risk. If the engagement is real, if you're actually thinking harder because of what the AI produced, this can work. If the engagement is cursory, if the draft goes out largely as it came in, then the output was the AI's and the learning was close to zero.

At the far end, AI as surrogate. You hand it the task entirely, accept what comes back, and move on. The output satisfies the institutional requirement. Nothing that happened in this interaction made you more capable. This is what junk food is to nutrition: it satisfies the immediate hunger while providing none of what your mind actually needed from the experience. The assignment is done. The learning didn't happen. And unlike junk food, where the empty calories are at least visible in your waistline, this damage is entirely invisible: to the institution, to the people around you, and quite possibly to yourself.

Consider what you're actually spending here. If you're in college or university, you or your family is paying an enormous amount of money, tuition, room, board, and years of income deferred, for the stated purpose of developing your mind and your capabilities. If you're in high school, you're spending something equally irreplaceable: years of your life, hours every day, in an environment that is asking for your full attention and presence. Either way, the investment is real, and it is massive. Which makes it worth asking, with genuine seriousness: if you're using AI to bypass the actual development the investment was supposed to purchase, what exactly are you getting for it? A credential, maybe. A grade, certainly. But the thing the money and the time were nominally for, the growth, the capability, the developed mind, that you gave away for free. That's not efficiency. That's a colossal waste dressed up as a shortcut.

The spectrum matters because almost nobody operates at one pure end. Most real AI use is somewhere in the middle, which is exactly why the question “does this serve the person I'm becoming?” needs to be a living one, asked regularly, and answered honestly.

* * *

You are living at a moment when this question matters more than it ever has before, and when the forces pushing you toward the wrong answer are more powerful than they've ever been. The tool is extraordinary. The incentives around it are misaligned with your development. The institution around you can't detect the problem. And the pattern of compliance the system trained into you makes the shortcut feel natural.

None of those forces is going away. The only thing that changes the outcome is a person who understands what's at stake and has decided, consciously, explicitly, for their own reasons, that their cognitive agency is worth protecting.

That decision requires knowing what you're protecting it for. It requires having something you actually care about becoming, a direction that belongs to you rather than to the institution, a compass that works even when no one is grading you.

Building that compass is what we do next.

Your Compass

Everything I've described so far is a diagnosis. The game, the hidden curriculum, the trained compliance, the conditions that actually produce learning, the choice AI is forcing you to make, all of it is an attempt to help you see clearly what's actually happening in and around your education. Diagnosis matters. You can't navigate well from a map you don't trust.

But diagnosis is not a destination. And at some point, ideally now and not in ten years when the costs have compounded, the question shifts from “what is this system doing?” to “what am I going to do?”

That question requires something the institution cannot give you, and AI cannot generate for you. It requires a compass. Not a set of rules handed down from outside, not a policy about appropriate AI use, not someone else's definition of what success looks like. A compass that is genuinely yours, grounded in your own sense of what you're trying to become, calibrated to your own values and curiosity and vision of your life. Something that works even when no one is grading you, even when the scaffolding of requirements and deadlines has fallen away, even when the choice in front of you is invisible to everyone but you.

This is harder to develop than it sounds, because the institution has spent years training you to navigate by external signals. Grades told you where you stood. Assignments told you what to do. Deadlines told you when. Approval told you whether you'd done it right. Remove those signals, and many students, including very successful ones, find themselves genuinely uncertain about what direction is. Not because they lack intelligence or ambition, but because they've never been asked to generate direction from the inside.

That's what this section is about.

* * *

Start with a question that sounds simple and isn't.

Who do you want to be at thirty?

Not what job you want to have. Not what credential you want to hold or what income you want to earn; those are fine things to think about, but they're not the question. The question is about the person. What kind of thinker do you want to be? What qualities of mind do you want to have developed? What will you be able to do, understand, create, and navigate? What kind of judgment will you bring to hard situations? What will you know about yourself, about how you work, about what you value, and why?

Most young people have not been asked this question in any serious way. School asks what you want to do, not who you want to become. The difference matters enormously, because doing follows from being in ways that credential accumulation doesn't capture. The thirty-year-old you will face situations no institutional requirement prepared you for specifically. What will carry you through those situations is not the particular content of any course you took. It's the quality of your thinking, the depth of your judgment, the strength of your curiosity, the solidity of your sense of self. Those are developed, not issued. And how you develop them depends on the choices you make now, including and especially your choices about AI.

The thirty-year-old question is not a fantasy exercise. It's a practical tool. It cuts through the noise of immediate pressures, this assignment, this grade, this deadline, this convenient shortcut, and forces attention onto the actual long-term goal. When you ask “does this use of AI serve the person I'm becoming?” you need to know something about who that person is. The thirty-year-old question is where that knowledge starts.

* * *

From that question, you can begin building what I'd call a Personal Education Plan, not the institutional kind, not the remediation document that schools create for struggling students without their meaningful input, but something genuinely yours. An internal map of your own education that exists independently of any external requirement.

It doesn't have to be elaborate. It doesn't require a formal document or a structured template. But it does require you to have honest answers to a handful of questions that the institution has never formally asked you.

What am I actually curious about? Not what I'm supposed to be interested in, not what looks good, not what my parents want or what the college application requires, but what genuinely captures my attention when I'm free to go in any direction? Curiosity is the most reliable engine of real learning. Following it is not self-indulgence. It is the most direct route to the kind of deep capability that schooling cannot produce, and AI cannot substitute for.

What kind of person am I trying to become? This is the thirty-year-old question applied directly. The qualities, the capabilities, the dispositions. The answer doesn't have to be fully formed; you're not supposed to have your whole life figured out at sixteen or nineteen or twenty-two. But having some genuine direction, even a provisional one, gives you a standard against which to evaluate your choices. Without it, you're navigating entirely by external signals, which is exactly the condition the institution trained you into.

What capabilities do I actually need? Not what the curriculum requires; what do I actually need, given who I'm trying to become and what I'm curious about? This question often reveals gaps the institution isn't covering and redundancies it's belaboring. It also gives you a basis for taking some courses seriously for your own reasons, even when the institutional framing doesn't do them justice.

How will I know I'm growing? This is perhaps the hardest question, because the institution has conditioned you to answer it with grades. But grades measure your performance in the game, not your genuine development. Real growth often doesn't show up in grades at all; it shows up in the quality of your thinking, in your ability to engage with complexity you couldn't handle before, in the solidity of your judgment, in the increasing sense that you can trust your own mind. Finding non-institutional signals of your own growth is one of the most important things you can do, because those signals are the ones that will continue to be available after the institution's signals go away.

How does AI serve this plan? Given everything you've built, your curiosity, your sense of direction, your understanding of the conditions that actually make you grow, how do you use AI in ways that accelerate rather than undermine it? This question doesn't have a permanent answer. It gets asked fresh at each decision point, each interaction, each moment when the shortcut is available, and you're choosing whether to take it.

* * *

These questions together constitute something more important than a plan. They constitute an identity as a learner, a genuine sense of yourself as someone who is actively directing your own education, rather than someone to whom education is being done. That shift, from passive recipient to active agent, is the most significant move available to any student at any level, and it's a move the institution will not make for you. It requires you to explicitly decide that your development belongs to you.

I've used the phrase agentic learning for this, partly because it's precise and partly because the word agentic is everywhere right now in discussions about AI; agentic AI systems are those that don't just respond to prompts but pursue goals, make plans, and take sequential actions toward objectives. The parallel is deliberate. An agentic learner is not someone who waits for the assignment and completes it. They're someone with genuine goals, genuine plans, and genuine ownership of the direction of their own education. The contrast with the passivity the institution trains is as sharp as the contrast between AI that executes instructions and AI that pursues goals. You want to be the second kind of learner. Passive execution of institutional requirements will not develop you the way active pursuit of genuine goals will.

* * *

Now let me tell you what this looks like in relationship with AI specifically, because the compass doesn't exist in the abstract; it gets tested in real decisions, and most of those decisions happen quickly and invisibly.

A student with a genuine internal compass brings a different orientation to every AI interaction. They're not asking, “How do I use this to satisfy the requirement?” They're asking: “How do I use this in a way that serves where I'm actually trying to go?” Those questions lead to very different behavior with the same tool.

A student with a compass uses AI to go deeper into things they're already curious about, not to bypass things they're not. They use it to generate a counterargument to the position they've already formed, not to generate the position itself. They use it to clarify confusion after they've sat with the confusion long enough to understand what they're actually confused about. They use it to explore a question further, not to close the question before they've really opened it. They treat it as a thinking partner with real limitations, a limited sense of what's actually true, no understanding of what they specifically need to develop, and no stake in their growth, rather than as an authority whose outputs can be trusted and submitted.

A student with a compass also knows when AI isn't what they need at all. When the assignment is hard in a way that's productive, when the struggle is the point, they recognize that reaching for AI to relieve the difficulty is exactly analogous to asking someone else to do your push-ups. The resistance is the mechanism. Remove it, and you've removed the thing that was supposed to build something.

None of this requires heroic self-denial. It doesn't mean refusing AI or performing difficulty to prove something. It means understanding the difference between what makes you look productive and what actually makes you capable, and caring enough about the second thing to make your choices accordingly.

* * *

I want to say something directly to the part of you that might be reading this and thinking: this sounds like a lot of work for outcomes I can't see yet, when the shortcut is right there and available, and most people around me are taking it.

That's a fair thought. And I'm not going to pretend the immediate calculus looks favorable for the approach I'm describing. The shortcut is faster. The game rewards the output. Most people around you probably are taking it. The institution can't tell the difference most of the time.

What I can tell you, from years of watching this play out, is that the gap between the two paths is not visible at the beginning and becomes very visible at the end. The students who treated their education as a game to be optimized and their development as secondary tend to arrive in their mid-twenties and beyond with credentials but without the capabilities those credentials imply. They've won the game. They're genuinely uncertain what to do now that the game is over. The students who took the longer view, who understood the game but refused to let it be their only game, who kept some part of their education genuinely theirs, those students arrive in the same place with something the credential can't capture and can't be taken away: the developed capacity to think for themselves.

That capacity is the compass. Not a fixed set of answers; a durable ability to generate direction from the inside. And it is built, or not built, in the hours and choices that feel invisible at the time.

The last thing I want to do is leave you with a framework and no sense of what it's actually preparing you for. So let's end there, with what comes after the game, and why the choices you make now matter more than the institution's scoring system will ever be able to show you.

What You're Really Preparing For

Here is something worth knowing before you leave school: the game doesn't end when school does.

The institution changes its name and its setting. The grades become performance reviews. The GPA becomes the job title you've advanced to. The teacher's approval becomes the manager's approval. The assignments become deliverables. But the underlying logic, produce what the system requires, signal what the evaluators want to see, stay near the center, don't ask questions that make things complicated, that logic follows you. The Game of School becomes the Game of Work, and most people step into it without noticing the transition because the rules feel so familiar. They've been practicing for this their whole lives without knowing that's what they were doing.

I'm not telling you this to be bleak about what's ahead. I'm telling you because the compliance trained into you by school doesn't stop being trained into you just because you walk across a stage and collect a piece of paper. It continues operating in the background, shaping your responses, your expectations, your sense of what's normal, until something interrupts it. Sometimes the interruption is a crisis. Sometimes it's a mentor who tells you the truth about what you're capable of. Sometimes it's a book that lands at exactly the right moment. Sometimes it's the slow accumulation of your own experience, the gradual recognition that you've been playing by rules that don't serve you.

The students who arrive at that recognition early, who develop a genuine internal compass before the Game of Work has fully absorbed them, are in a categorically different position from the ones who don't. Not because life is easier for them, or because they've escaped the necessity of working within institutions. They haven't. But they bring a different quality of self to every institutional context they enter. They know the game is a game. They can play it strategically, without being consumed by it. And underneath the game, they have something developing that the game can never fully reach: their own capacity to think, judge, decide, and direct.

* * *

Now add AI to this picture, and the stakes multiply in ways that I don't think most people have fully absorbed yet.

The working world you are entering is one in which AI can perform an increasing share of the tasks that jobs have historically required. Not all of them: not the judgment calls, the relationship navigation, the creative leaps, or the ability to understand what a situation actually requires rather than what it appears to require. But a growing portion of the routine cognitive work that institutions pay people to do. The people most vulnerable to this shift are, almost exactly, the people most thoroughly trained by the Game of School: those who learned to execute instructions reliably, produce required outputs efficiently, and stay within defined parameters. Those are the capabilities AI replicates most readily. The compliance the institution rewarded is precisely what becomes most substitutable.

What AI cannot replicate, what remains stubbornly, essentially human, is genuine judgment. The ability to look at a situation that doesn't fit the template and understand what it actually requires. The ability to ask the right question when the question hasn't been given to you. The ability to navigate ambiguity, sit with uncertainty, and make a decision you can stand behind when the outcome is genuinely unclear. The ability to care about something for your own reasons, to pursue it with your own motivation, to see it through when external pressure isn't driving you. These capabilities are not produced by credential accumulation. They are not produced by AI interaction. They are produced, slowly, unevenly, through effort and reflection and genuine engagement with difficulty, by exactly the process this essay has been describing.

The students who develop genuine cognitive agency now, who take the compass seriously, who use AI to become more capable rather than less, who protect their ability to think for themselves even when the shortcut is available and the institution can't tell the difference, those students are preparing for something the credential cannot capture and the Game of School cannot produce. They are preparing to be the kind of person who remains valuable and capable in a world that is getting very good at replacing people who aren't.

* * *

I've spent decades in education. I've watched enormous numbers of students move through this system and into whatever came after it. I've interviewed teachers, reformers, researchers, and thinkers who have devoted their professional lives to understanding what education is actually for and why we so often fail to deliver it. And after all of that watching and listening and thinking, what I keep coming back to is something surprisingly simple.

The students who thrive, not just in school, not just in the early years of work when the game's rules are still familiar, but across a lifetime of changing circumstances and unexpected challenges, are the ones who learned to trust their own minds. Not blindly. Not arrogantly. But genuinely: with the earned confidence of someone who has done the work of developing their own thinking, tested it against real difficulty, refined it through genuine feedback, and arrived at something that belongs to them. They have a compass. They built it themselves. And it works in conditions for which no institutional credential was designed.

That's what I want for you. Not as an abstraction; as something you can actually start building now, in the middle of whatever institutional context you're currently in, with whatever relationship to AI you currently have.

You don't have to wait until you're free of the game to start playing a deeper one. You don't have to opt out of credentials to start caring about genuine capability. You don't have to refuse AI to avoid cognitive surrender. You just have to see clearly what the choices in front of you actually are, which is what this essay has been trying to help you do, and then make them explicitly, with your own development as the standard rather than the institution's scoring system.

* * *

The person you are at thirty will be built, in large part, from the choices you make in the hours that feel invisible right now. The assignments you actually think through versus the ones you hand off. The confusions you sit with long enough to understand versus the ones you resolve before they can teach you anything. The questions you follow because they genuinely interest you versus the ones you fake interest in because they're required. The capabilities you build because you decided they mattered versus the credentials you collected because the game required them.

None of this will show up in your GPA. Most of it won't show up in any external measure at all. It will show up in you, in the quality of your thinking, the solidity of your judgment, the depth of your curiosity, the durability of your sense of direction when the scaffolding eventually falls away. Those are the things that carry you. They are also, as it happens, exactly what this particular moment in history most needs from the people moving through it.

AI is not going to save education. It is not going to destroy it either. What it's going to do, what it is already doing, is make the distinction between genuine learning and its performance more consequential than it has ever been. The gap between a person who has developed real cognitive agency and a person who has learned to produce the appearance of it is about to become very visible, in very practical ways, in very real circumstances. The institution cannot show you that gap. The credential cannot measure it. Only you can know which side of it you're on.

I'm writing this because I believe you're capable of being on the right side of it. Not because you're exceptional, though you may be, but because the capacity for genuine self-direction is not a rare gift distributed to a lucky few. It's a human capacity, available to anyone who chooses to develop it, that the institution has largely failed to cultivate, and that AI, misused, will further suppress. You don't have to let either of those things determine your outcome. You have more agency in this than the system has ever told you.

Use it.