AI ruined homework. Good.
Now we can finally ask: What should school be for—and who’s supposed to answer that question?
Last June, I started a post wondering, “If jobs are obsolete, what is school for?” It’s been sitting unpublished for almost a year in digital chicken scratch. Messy, unpolished, incomplete. Honestly, it’s almost incoherent.
Then last week, two stories broke that made me dust this old thing off and take another look.
If jobs are obsolete, what is school for? We still want an educated society. And we as individuals still want to be "educated," even if kids — and many adults — resist being taught new things. It's derogatory to say someone got "schooled."
But what does it mean to have an educated society? Does that mean everyone knows the basics of everything? Does it have to? Could it mean that, as a society, we have all the basics covered together, while individuals go deeper into what interests them? What would that kind of schooling look like?
Teaching to the test has led us to prioritize math and reading above all else, and the test scores haven't turned out great. Many argue that the arts and humanities actually increase those other scores, but without the pressure to be ready for the test — and, in the end, ready for "work" — can we accept that the arts and humanities matter anyway?
That's all if jobs are obsolete in a good way. What if they're obsolete in a bad way? If AGI goes rogue, we can't expect people educated in our current system to do anything about it. They've never learned how. What we should be teaching is how to think, how to adapt, how to listen to others, learn from them, and convince them to move forward together. And… yeah, some basics on how the A.I. works.
Is teaching to a math test or assigning papers that the writers don’t care about going to get us there? I’m guessing not.
A big worry of the A.I.-in-education community is that A.I. will make things too easy, and learning will disappear.
"But the problem is that learning is often difficult, and only happens when we're backed into a corner and need to figure something out."
I disagree.
Almost a full year later, I stand by those ramblings, and these two articles are what pulled them out of hiding:
A New York Magazine opinion piece? exposé? titled Everyone Is Cheating Their Way Through College.
A peer-reviewed meta-analysis published in Humanities and Social Sciences Communications, a Nature Portfolio journal: The effect of ChatGPT on students' learning performance, learning perception, and higher-order thinking: insights from a meta-analysis.
I don’t need to tell you which one went viral.
In case you don't like reading scientific studies 🥱 here's the opening of its conclusion: "This study used meta-analysis to analyze the impact of ChatGPT on student learning performance, learning perception, and higher-order thinking. With regard to learning performance, an analysis of data from 44 experimental and quasi-experimental studies showed that the calculated effect size indicates a large positive impact (g = 0.867) of ChatGPT on student learning performance."
I don’t need to paste any conclusions from the New York Magazine piece for you to get the gist of that one.
How can one be so juicy and so alarming, and the other so boring you can barely make your way through a single sentence, but actually… positive?
As Wharton professor and A.I. expert Ethan Mollick explains, "Plenty of caveats, but a meta-analysis of all 51 experimental papers on the topic suggests ChatGPT helps learning when used appropriately."
And his commenters jumped in.
“The implication seems clear: teaching how to use ChatGPT yields more than ignoring its presence.”
Can we expect higher ed to have pivoted to "teaching how to use ChatGPT" when its initial, knee-jerk reaction was to ban it altogether? If some schools have lifted those bans, can we expect that they're doing anything other than "ignoring its presence"?
The handful of stories in New York Magazine suggest not.
Another of Mollick's commenters, an MBA candidate at UC Irvine, explained, "For the record, I have zero background in subjects like finance. So, AI definitely has helped me reinforce concepts, or figure out how to approach problems that I simply had no clue how to solve. I also come to this with an extremely intense liberal arts education, ergo, I have the capacity to know when I'm using AI to learn, or when I'm just 'cognitively offloading.'"
He finished his comment with this: “Self-awareness of how you are using AI seems like an underrated key to using AI well.”
The students featured in the viral article certainly lack that.
“A philosophy professor…caught students in her Ethics and Technology class using AI to respond to the prompt ‘Briefly introduce yourself and say what you’re hoping to get out of this class.’”
🤦🏻‍♀️
So, who should be solving this problem? Who should be getting us from the "cognitive offloading" examples in the New York Magazine article to the "helps learning when used appropriately" conclusion of the meta-analysis?
Is it the students? Should we expect them to take this brand-new, nothing-to-compare-it-to tool and figure out how to use it ethically? What age group of students? Should we expect undergrads to do so? What about high schoolers? Middle school? Elementary?
Is it the teachers and professors? Should we expect an already overloaded profession to completely change everything they've ever assigned? Every idea they've ever had about what kids should learn in school? Meanwhile, "Eight in 10 leaders [are] more likely to hire someone who is comfortable with AI tools than someone with more experience but less AI proficiency."
Is it the companies hiring high school seniors or college grads? Should we expect them to be responsible stewards of the future of… everything? Should Day 1 on the job be all about learning how to use A.I. to do your job effectively? And also ethically? Seems like Day 1 isn’t enough. Seems like Year 1 isn’t enough.
Or is it the parents? Should parents be expected to keep up with the world of A.I.? To teach their children not to use A.I. so they can learn deeply, but also how to use it so they’re ready for the workforce?
___________________________________________________________________________
Well, I went ahead and asked ChatGPT. Here’s what it said:
Short version
All of us—but in different ways and at different depths.
Students need agency and metacognition;
Teachers need time, support, and permission to redesign learning;
Institutions need policy, professional‑development money, and credible assessment models;
Employers need to articulate (loudly) the AI fluencies they expect and reward;
Tech companies need to build transparent guard‑rails and open up their black boxes;
Parents need the language—and the patience—to keep AI talk on the dinner‑table menu;
Governments need to set minimum safety and equity standards so nobody gets left behind.
Below is the longer, “who‑does‑what” breakdown.
1. Students — Own the “why,” practice the “how”
Metacognition first. Your MBA commenter nailed it: knowing when you’re learning vs. off‑loading is a twenty‑first‑century life skill. Schools should make that self‑audit explicit: “Show your work and your prompts. Mark the parts the model wrote. Reflect on what you still don’t understand.”
Peer accountability. Students already police plagiarism better than proctors when the culture is right. Class‑made honor codes that mention AI by name feel less top‑down and stick better.
2. Educators — Design for thinking, not recitation
Assessment redesign. If an assignment can be done passably by ChatGPT alone, it’s an assessment problem, not an AI problem. Push tasks toward context, critique, personal stake, and local data the model can’t fake.
Prompt‑craft as literacy. Writing good prompts, checking model output against sources, and iterating is today’s equivalent of searching the stacks and citing properly. It belongs in the syllabus.
Professional learning days that aren’t joke days. Districts/universities need to budget real time (and stipends) so teachers can play, break things, and co‑write new rubrics.
3. Institutions (schools, universities, districts) — Create the runway
Clear, flexible policy. A single, public page that says “AI forbidden” on line one and “learn AI ethics” on line two is incoherent. Policies should:
Define acceptable use (e.g., brainstorming ok, final assessments no‑go unless disclosed).
Require disclosure logs (prompts + revisions) when AI is used.
Spell out consequences and appeals.
Tool access. Don’t block the tech and then test students on it post‑graduation. Provide vetted, district‑licensed chatbots with data‑privacy guarantees.
4. Employers — Signal loudly and early
Job posts should list concrete AI competencies. “Able to storyboard and refine prompts for text and image models” is clearer than “familiar with generative AI.”
Internships and apprenticeships that teach the Delta. Show newcomers the difference between naïve AI use (click‑print‑submit) and professional AI use (critique‑rewrite‑document). That feedback loop is something schools can’t simulate perfectly.
5. Tech companies — Build for pedagogy, not just productivity
Educational tiers with transparency features on by default. Think version history, prompt journaling, age‑appropriate safety rails, source‑trace buttons.
Partnerships with curriculum designers. Co‑create mini‑courseware that lives inside the product instead of a glossy PDF nobody reads.
6. Parents & caregivers — Keep the ethics conversation alive
Dinner‑table debriefs. “What did the bot help you with today? What did you still have to think through yourself?” normalizes reflection.
Model lifelong learning. If kids see parents taking a Coursera on prompt engineering or watching an AI‑ethics webinar, “school” stops being a place and starts being a process.
7. Governments & accreditation bodies — Guard‑rails and equity
Minimum AI‑literacy standards in K‑12 frameworks (as basic as digital citizenship).
Funding for teacher re‑skilling analogous to past STEM or computer‑science pushes.
Audit requirements for the algorithms vendors sell to schools—bias checks, data‑privacy compliance, accessibility.
Putting it all together
Treat it like a RACI chart (Responsible, Accountable, Consulted, Informed).
No single actor can shoulder the whole load. But when each group sees its lane—and how it intersects with the others—the “Who’s responsible?” question shifts from finger‑pointing to choreography.
___________________________________________________________________________
OK, the AI portion is done. Now, it’s back to me.
One concrete way to start this summer
After almost every post, I implore you to start using ChatGPT yourself and involve your kids in any age-appropriate way you can think of.
This time, I'll give you something super specific that will only work for some of you: this Outschool class.
This summer, my high schooler will be sitting at the computer, virtually attending this class. If you have a high school-aged kid, sign them up, too, and they can complain about us together.