6 Comments
Harjas Sandhu

I think AI has exposed the fundamental difficulty all education has faced forever: how do you get people to learn things they don't want to learn? Obviously making them want to learn would be superior, but if that fails, what can you do?

Now, with AI, it really seems like the answer is "nothing". I vaguely support capping the highest possible scores on all standardized tests, because as it is, people put ridiculous amounts of effort into their grades or SAT scores, effort that teaches them that school is all about getting the grade, that learning about things that are interesting, useful, and practical is a waste of time, and that rote memorization and brute-force pattern recognition are all that matter. It's Goodhart's Law taken to the extreme: how is it even possible that people can build functional businesses on SAT prep courses and classes?

Yes, you have to force children to get a baseline of education: basic math, basic literacy, basic civics and history. You should also give them ample opportunity to develop their interests, maybe by requiring one class in each of the arts, and connect them with things that actually have an impact in the real world. But beyond that, more effort should be made to ensure that kids actually want to do things; otherwise you just produce more adults who are high-achieving robots.

Claus Wilke

The problem is that there is learning and there is assessment, and we're muddling them together. In my opinion, they should be much more separated. We're using the threat of doing poorly on an assessment as the mechanism to get students to learn, but if students are being assessed all the time, then actual learning (where sometimes you fail) becomes dangerous. The charitable view of grade inflation is that it gives students more space to explore, since they're not fighting for their lives every step of the way.

Harjas Sandhu

Yes, and then students begin to think that the purpose of learning is to do better on assessments.

Sebastian Raschka, PhD

Hm, yeah, this seems like a real challenge...

In addition, maybe it would be interesting to turn this into an AI study, where students compare different AI models and the assignment itself becomes a benchmark. I.e., which model was the easiest to work with to complete the assignment satisfactorily? How many and what types of interventions were needed? Etc.

Claus Wilke

The good thing is I'm using R and most language models are not very good at it. ;-)

I'm aware of the "study the AI" type of approaches. To do this right, I'd probably need at least two weeks of instruction time, and it'd substantially shift the emphasis of the class. So far, I've been reluctant to go that route. But I continue to keep an open mind and see what other people are doing and how it's working.

Sebastian Raschka, PhD

Hm, yeah, it would probably totally change the class.

I am in that sense "lucky" that I quit academia a few months before the ChatGPT release 😅, so I never had to experience this issue. I do miss teaching sometimes, though (I just wish it didn't involve assignments and grading, haha).

Anyways, best of luck with the class!
