What I should probably tell students about generative AI

Author: Klaus Brunner
Published: 2025-05-22

Everything has been said about GenAI already, but not yet by everyone. Here’s what I think students should hear.

This has happened before

Whenever a large-scale technology innovation happens, it’s always claimed to be unprecedented. Nothing like this has happened before! And, in a way, that’s true: there’s always some variation. For instance, one of the amazing things about generative AI is that it seems to threaten human creative work – this hasn’t happened before in the history of automation. But other than that, I’m quite convinced that history’s habit of rhyming remains intact.

Nobody knows anything about the future

In times of great technological change that seem disruptive or even scary, people look for answers. Fortunately, some people are in the “answers” business – quite enthusiastically so. They are very convincing, they have excellent arguments, they cut a bella figura on the TEDx stage and in panel discussions.

The problem is that they’re usually wrong. Some of them may turn out to be right, in much the same way that two out of ten fund managers will beat the market next year – but you don’t know in advance which ones. I’m not saying you should ignore them. I’m just saying you shouldn’t bet your (future) life savings on them.

Don’t cheat yourself

There’s a famous video of Berkeley professor Brian Harvey on why not to cheat in university.

One recent comment under that video dryly reads: “In a time before ChatGPT existed”. That’s kind of funny, or scary, or annoying – because the point Prof Harvey makes is just as relevant today, probably even more so. If you let ChatGPT do your homework or even your exams, you’re cheating nobody but yourself. You’re cutting corners so aggressively that you’re learning nothing. You become a thin I/O wrapper around an LLM – which is exactly the kind of thing IT has been automating successfully for decades, long before language models. Who wants to pay a salary for that?
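To make that concrete: here is, more or less in its entirety, the “job” of a thin I/O wrapper around an LLM. A minimal sketch, assuming the OpenAI Python client with an API key in the environment; the model name and helper are purely illustrative.

```python
# The entire "skill set" of a thin I/O wrapper around an LLM.
# Sketch only: assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def do_my_homework(assignment: str) -> str:
    # Forward the input, return the output. That's all there is.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": assignment}],
    )
    return response.choices[0].message.content

print(do_my_homework("Write 500 words on the causes of World War I."))
```

A dozen lines, trivially scripted away. If that’s all you bring to the table, there’s nothing left to pay you for.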

(Again, we’ve been through a variant of this before. “Googling” used to raise the same question: why am I supposed to learn anything if I can just google it?)

LLMs aren’t an abstraction layer

This point is specific to software development, I think. Or maybe it isn’t, but I’m not clever enough to generalise my thoughts.

The way we currently use LLMs in software development is to generate code – the very same kind of code we’ve been writing by hand for years. We’re not replacing that code; we’re replacing a large part of the human work of writing it.

Importantly, we’re not introducing an abstraction layer like we did when high-level languages freed most of us from writing assembly code on a daily basis. GenAI, so far, is not the long-awaited breakthrough of n-th generation languages. It’s a robot that writes plain old code. It’s your enthusiastic, sometimes smart, sometimes stupid junior assistant. Congratulations, you’ve just been promoted to supervisor: instead of doing all the work yourself, you now have to tell someone what to do and how to do it and check if they’ve really done it. (This is called management, which is much less fun than you’ve been told.)

This is not what an abstraction layer does. An abstraction layer provides a mechanical, tight, sometimes even provably correct mapping of higher-level to lower-level constructs. It’s not clever or creative, it’s perfectly predictable. In other words, the opposite of an LLM. What kind of an abstraction layer has a “temperature” setting to regulate its degree of randomness?
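If you’re wondering what that knob actually does: here’s a minimal sketch of temperature-scaled sampling, with made-up numbers standing in for a model’s next-token scores. Lower temperature makes the most likely token dominate; higher temperature flattens the distribution and invites surprises.

```python
# What a "temperature" setting does: scale next-token scores before
# sampling. Illustrative only - the scores below are made up.
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    # Divide scores by the temperature, then softmax into probabilities.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    # Draw one token at random according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()))[0]

scores = {"foo": 2.0, "bar": 0.0, "baz": -1.0}  # fake model output
print(sample(scores, temperature=0.2))  # low T: almost always "foo"
print(sample(scores, temperature=2.0))  # high T: any of the three
```

A compiler that behaved like this would be returned for a refund. For an LLM, it’s a feature.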

Why does this matter? Because you still have to understand the output of the Large Language Model: the language – which, in our case, means the programming language and the code written in it. Maybe not as often as before. Maybe not in every kind of role that works with code. But I’m sure it’s going to remain necessary in any kind of software development role filled by well-paid humans.

Foundational skills are more valuable than ever

This may be the same point as the previous one, just in different packaging.

In software development, a widespread assumption is that jobs will be cut because LLMs can do all the drudgery of routine coding. And there are some indications that this is already happening. For instance, LLMs are great at rapidly producing fairly decent UIs, especially for prototypes. As someone who has passionately hated writing GUI code ever since having to deal with the awful Microsoft Foundation Classes in C++ back in the 1990s, I’m all for it. But if you want maintainable, efficient, correct, production-quality code in complex real-world systems, you must be ready to step down into the engine room every once in a while. Which means you still have to be able to engineer something other than prompts: use the right tools, understand the details, know what works and what doesn’t.

It’s quite possible that creating software will become more like conducting an orchestra and less like playing the violin or flugelhorn ourselves. But don’t forget that orchestra conductors are highly trained musicians who know how to play at least one instrument and understand the capabilities of all. They couldn’t do their jobs without these hard-earned skills.

My not-so-bold prediction: fewer people are going to develop these skills, because doing so is time-consuming, difficult, and not needed for the majority of tasks. Which means that the few who do will become more valuable. Remember that good judgement comes from experience, and experience comes from bad judgement (Mark Twain, supposedly). You probably want to be the person known for good judgement.

Exploit AI to your advantage. It won’t complain.

Today’s LLMs, with all their current shortcomings, are great tools for learning. They can help improve your productivity in many ways, including the productivity of acquiring knowledge and skills. Unlike human tutors, they won’t give up or call you an idiot if you seem slow on the uptake. This is probably how you should use them: to improve yourself.

(I find today’s models quite limited in their ability to condense text without losing important information. Many students happily upload lecture notes and slide decks to let ChatGPT give them summaries. While those summaries sound great, they seem to randomly skip important points. For good grades, I’m afraid you still have to go ad fontes and read those materials your lecturer prepared…)