AI Isn’t the Problem. It’s How We Use It, Especially in Schools
In “Straight Talk with Rick and Jal,” Harvard University’s Jal Mehta and I examine some of the reforms and enthusiasms that permeate education. In a field full of buzzwords and jargon, our goal is simple: Tell the truth, in plain English, about what’s being proposed and what it might mean for students, teachers, and parents. We may be wrong and will frequently disagree, but we’ll try to be candid and ensure that you don’t need a Ph.D. in eduspeak to understand us.
Today’s topic is artificial intelligence.
— Rick
Jal: Is artificial intelligence going to revolutionize education? Or will it go the way of educational radio, the CD-ROM, and the many other technologies that were introduced with great fanfare but ultimately did little to change the fundamental enterprise of schooling?
Since I’m a Luddite by nature, I’m going to challenge myself and start by arguing the positive case. What we’ve seen thus far, despite all the sky-is-falling jeremiads by newspaper columnists everywhere, is that artificial intelligence can complement but not replace quality human intelligence. ChatGPT synthesizes material from many existing online sources and integrates it into one coherent product. But if you are the one creating original thought — the writer, scholar, or student developing a piece of original work — then you have little to fear from being replaced by ChatGPT. And the more original your thinking, the less replaceable you are.
The problem, then, is not the technology, but the kinds of tasks we ask students to do in school. As generations of research have shown, much of what we give to students asks only for fairly low-level comprehension and rote application. And that’s exactly what artificial intelligence can do well. ChatGPT is really good at the five-paragraph essay. And what that tells us is not that we should ban the technology but that we need to change the task.
The world we want to get to, then, is one in which we are equipping students to use ChatGPT and other AI tools in the way that professionals would use them. They may help you with the background research or generate the initial template for a presentation, but, at the end of the day, you are the one who is responsible for producing quality, original work. Just as calculators are efficient tools for those who work with numbers, AI could become a tool for those of us who work with words.
But, and here is the rub, this depends on the nature of the education you are purveying and the kind of environment you’ve created. If you are teaching a 1,000-person college class, the relationship between professor and students is antagonistic, and the professor assigns the same shopworn essay topics year after year, then students will use ChatGPT to cheat, much as the technophobes fear. But if students are producing original work, in smaller classes with stronger relationships, the kinds of places where teachers and professors function more as coaches than as opponents, then ChatGPT will be used as intended: as a tool to facilitate the creation of a quality product. The freakout over the technology reveals more about the nature of our educational institutions than about the technology itself.
What do you think, Rick — optimist or pessimist about AI?
Rick: A little of each, I guess. For starters, I should say that I’m less troubled than you are, at least in principle, if some instruction entails low-level comprehension and rote application. We need to get students grounded and situated. It’s like learning to play ball or the guitar, where early learning is low level and pretty rote, and that’s fine — so long as it’s engaging, vigorous, and intentional. But there has to be a purposeful progression in which learners are consistently challenged and taught to elevate their game.
Too often, instruction doesn’t do that. And that’s where your point about the five-paragraph essay hits home. Students get a template, mimic the formula, toss off a halfhearted thesis, insert some cut-and-pasted content, and call it a day. This is patently pointless. Add AI to that equation, and the only real change is that the student delegates the cutting and pasting to AI. They learn less — but not a lot less. And that’s because they weren’t learning much in the first place.
In this sense, AI does less to solve the problem than to illuminate it. How should students learn to write a five-paragraph essay? Not by filling out a page but by engaging mindfully with each part of the writing process. Identifying and sharpening a thesis should be a conversation with peers or a teacher. Supporting claims should involve offering ideas, getting feedback, revising, and then getting more feedback. A conclusion should be presented orally, with attention to coherence and polish. Editing should be done with pen in hand, over a printout, while students discuss sentence flow and word choice. Pursued this way, the five-paragraph essay is a workshop in good writing.
In this kind of writing process, AI can be a useful tool but not a substitute for wrestling with essential skills and knowledge. It’s a lot like the calculator. The calculator is a terrific device for allowing high schoolers in trigonometry or calculus to spend less time on familiar calculations and more time mastering new knowledge and concepts. But that presumes that students have already mastered computation. If K-5 teachers just allow students to solve problems by punching the numbers into a calculator, then high schoolers wind up using their devices as a crutch — not a tool. And that means they never really understand what they’re doing.
That is decidedly not where we want to be with AI. You started by channeling your techno-optimist self, pal. How else might you think about all this?
Jal: You argue, rightly in my view, that when you are learning a skill, you don’t initially want technological help. So I agree: no calculators for beginning math students and no ChatGPT as you are first learning to write. But what if the problem is deeper than that?
Consider GPS. This is a technology that was introduced when I was in my early 20s and became ubiquitous by my early 30s. (I’m 45.) Before GPS, it took longer to figure out how to get places the first time. And it was sometimes a little harrowing: You’d get lost; you’d be trying to read the map and drive at the same time. It wasn’t so easy, but once you’d been to a place a few times, you knew how to get there.
With GPS, the first ride is smoother: easier to get there, much less likely to get lost. But, at least if you are like me, you can go places many times and still not really know how to get there. When your brain doesn’t have to do the work itself, it doesn’t make the needed connections, and thus it doesn’t develop; it may even atrophy.
As AI engines improve, a GPS-style future seems possible. Writing is hard. It has a lot in common with trying to drive someplace new: lots of wrong turns, backtracking, and getting lost on your first few tries. Harrowing. But, like driving, with practice you get better at it — more able to navigate, less freaked out when you get lost, and maybe (with time) more able to get where you are going on the first try. But if you aren’t the one piloting the vehicle, the AI is the one learning, and you are just sitting in the passenger seat.
We haven’t even talked about all the other uses of AI — to write songs, to serve as a medical diagnostician, to screen job applicants, and many more. There is much more to unpack here than we can do in one column. But I think we can see one broad lesson from past technologies that is likely to apply: You want to be the one using the technology rather than having the technology use you. In other words, we can imagine a world where most kids are listening to AI-developed songs, while some (likely advantaged) kids are learning how to use AI software to write songs, learning editing software to mix together different beats, and so forth. As AI becomes more ubiquitous, education should prepare students a) to use these tools effectively and b) to exercise judgment about why, when, where, and under what circumstances the tools should be deployed.
Rick: You’ve flagged two big points here that don’t get enough attention. One is how technology insulates us from the larger world. The GPS illustration is a great one. Technology makes things easier, quicker, and more accessible. I remember when getting money out of the bank was a chore. ATMs, debit cards, and e-cash have changed that. That’s great. But those benefits do come at a cost. Going to the bank, finding a certain book, or tracking down an album used to require going out, visiting libraries or record stores, and talking to people. Now, we manage money, get books, or find music in a series of transactional, impersonal swipes during which we go nowhere and meet no one.
And then there’s the difference between using AI as a tool of learning and the ways in which students will use AI after graduation. When I argue that AI mustn’t be a shortcut around essential skills and knowledge, some techno-optimists tell me that I’m worrying unduly and that “if a student writes a paper collaboratively with AI, that’s fine — that’s what they’ll be doing soon, anyway.” Well, for one thing, we have a lousy track record of predicting how technologies will play out or even which skills we’ll need to use them effectively. Keep in mind that, less than a decade ago, it was widely predicted that driverless cars would eradicate three million truck-driving jobs by 2025 or 2030. Now? It seems those lucrative jobs may be with us for a while yet.
This post originally appeared on Rick Hess Straight Up.