Teaching data visualization in the time of generative AI
If students want to cheat with AI, the least I can do is make them own up to it.
There are dark clouds on the horizon for my class on data visualization. I enjoy teaching this topic, but the entire setup for my class is being upended by generative AI. I no longer know how to create meaningful assignments. How can I assess whether students have learned anything when they can complete entire data analysis projects with a quick request to ChatGPT?

My class is about hands-on work with data and its visualization. It inherently requires that students work on assignments at the scale of multi-day projects. Students need to familiarize themselves with a dataset, perform some exploratory investigations, create visualizations, and then document what they have done and what they can conclude from their analysis. This type of work can only be carried out in the format of a take-home assignment. And it is also something that ChatGPT or Claude Sonnet could crank out in minutes in response to a simple prompt.1
In reaction to the rise of generative AI, some teachers have transitioned their assessments to either in-class exams written by hand or oral assessments conducted in person. Neither approach works for me. The main learning outcome for my class is the ability to produce a compelling report that combines written text, computer code, and visualizations of the dataset analyzed. This skill cannot be tested with a hand-written, hour-long exam. An oral exam could make sense—I could ask students to walk me through their analysis and explain step by step what they did and why—but oral exams don’t scale. I have a hundred students in my class. I cannot possibly examine every single one of them in person.
Until recently, I felt like giving up. The only way forward I saw was to ignore the issue, teach the class as I always have, and let the chips fall where they may. Ideally, students would work on their assignments with minimal reliance on generative AI, because I would ask them to do just that. But of course I would not have any means of enforcing this behavior. And I don’t believe in AI detectors. The last thing I want to do is argue with students over whether or not they have employed AI to complete a given assignment.
However, last week I had an idea that hopefully will make the AI situation a bit more bearable. At a minimum, it will require the students to reflect on their AI use. Starting next spring,2 I will ask students to declare, at the end of each assignment, how they used AI models in the preparation of their work. This idea is inspired by similar policies now put in place by machine learning conferences; see, for example, here for NeurIPS. I can’t prevent students from turning off their brains and letting the AI do all the “thinking,” but I can make them document what they did. This should prompt them to reflect on how they’re engaging with AI. And because in my class students peer-grade each other’s assignments, I hope it will make them uncomfortable to have to admit to their peers that their entire assignment was written by AI. Maybe this will convince them to use AI a little less.
Of course, all of this is based on the honor system. A student who intends to lie and deliberately obscure that their assignment was AI-generated has that option. There’s not much I can do about that. But then, even without AI, students can ask their best friend, a sibling, or hired help to write their essays and pass the work off as their own. Students who want to cheat will always find a way. There’s little I can do to prevent every such possibility. I see my job as teaching the students who actually want to learn. My primary goal is to create a learning environment where students are motivated and inspired to make an honest effort. I hope this new AI policy will help me achieve this goal.
1. I don’t actually think these AI-generated analyses and visualizations are very good, but they’re usually good enough to get a decent grade in my class. At a minimum, they tend to be better than what the weakest students in the class produce.
2. That’s when I’ll teach the class again.

I read a study recently that measured cognitive engagement while using AI. One group started with AI right away, and another group had to first think about how they wanted to use AI. The AI-right-away group never recovered in their levels of cognitive engagement. So I'd be careful to avoid encouraging that behavior when you ask them "how they used AI models in the preparation of their work."
Also, perhaps problem finding might be an option? Idk what that would be in a statistical context, but in, say, a geometry context you might ask students to find five examples of where the Pythagorean theorem could be useful. What problems might you have in applying it? And so on. It gets them to think in a way similar to how they would if they were teaching someone else. It also gets them used to using that lens in the real world, and it is (probably) harder to fake with AI.
My own days of giving and grading exams are long past. But this present discussion over whether students should use AI to complete class assignments reminds me of the early 1970s, when handheld calculators (the HP-35) first appeared. Students then started claiming that learning to multiply or divide by hand had become a waste of time; calculators were faster and more accurate. Here we are decades later, and contemporary students are no doubt claiming that using AI is faster and better than trying to learn to craft their own scripts. [Alas, that may always be true for them.] Fortunately, real learning can still take place outside the classroom. Because your lectures and assignments are freely available online, many self-motivated 'students' now have a chance to learn how to do data visualization by hand.