Last week I was invited to teach a group of 9th graders about AI. There is a general concern in schools that kids avoid learning by using a GPT [OpenAI, Gemini, Copilot, etc.] to do their homework. But in my opinion, now that AI is more readily available than ever, we should adjust how we engage the students and what assignments we give them. AI is a tool they should not shy away from, or they will fall behind. But they should also have the wisdom to know how to use it, or they will not grow. So, instead of telling them what to do, I showed them the real deal so they could draw their own conclusions. We built an AI together as I told them a story. I hope my account of this lesson helps school teachers come up with ideas for their own classrooms.
To engage the students, I picked "movies and TV shows" as my story theme. I considered several other candidates, from soccer predictions to personalized shopping, but I felt that movies are a great way to engage both boys and girls in middle school and early high school. Besides, I used to work at Netflix 🙂
The fictional story was simple: "I have a bunch of movies that I watched, some of them I liked and some I didn't. I want to build an AI to tell me which movies I will like next. And because I'm lazy, I will ask an AI to help me build my AI! And then I will tell you how that AI got me in trouble."
It's all about data
First, I introduced myself to the kids to give them some clues: where I grew up, where I lived, what I did for work, etc.
Then I asked them to guess my preferences.
They guessed all the movies correctly except "Mamma Mia!". They thought that since I grew up in Greece I would have liked it. (Sorry, kids. I like my beach and tzatziki without the love drama.)
Then I showed them how I organized all of this in a spreadsheet, as data I could feed into the Movie AI.
I explained to the students that, in order for the AI to decide what types of movies I like, we need to give the AI more information about each movie. We create what we call "features" to organize that information. For example: "Is the movie animated?" or "Is it high budget?" One of the students asked me, "How do you tell if it's a high-budget movie?" Good question. It gave me the chance to explain that, as the designer of the AI, I got to choose how to define that. In my case, I decided to call a movie "high budget" if it cost more than $100M to make. I also needed to "label" each movie as "like" (check the box) or "dislike" (uncheck the box). For the computer this means like = 1 and dislike = 0.
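For teachers who want to go one step beyond the spreadsheet, here is a minimal sketch of the same idea expressed as Python data. The titles, features, and labels below are made up for illustration; the real list came from my own spreadsheet.

```python
# A tiny, made-up version of the movie spreadsheet.
# Each row is one movie, described by yes/no features, plus a like/dislike label.
# For the computer, every checked box becomes a 1 and every unchecked box a 0.
movies = [
    # features = [is_animated, is_high_budget (> $100M), is_comedy]
    {"title": "Movie A", "features": [1, 1, 0], "like": 1},
    {"title": "Movie B", "features": [0, 1, 0], "like": 0},
    {"title": "Movie C", "features": [0, 0, 1], "like": 1},
]

for movie in movies:
    label = "like" if movie["like"] == 1 else "dislike"
    print(movie["title"], movie["features"], "->", label)
```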
Making the AI
Then I exposed them to the real code I wrote together with the GPT AI!
Why did I decide to torture the poor students with complex Python code? I just wanted them to see how real engineers work. I explained that they didn't need to understand the code; I would just explain the high-level idea and then we would run it together.
But by showing them the code, I also had a hidden agenda...
The GPT AI that I used to generate the initial code for the project made a critical error. It literally generated code that reversed my labels: 1s for the movies I didn't like and 0s for the ones I did. So I removed that "dangerous" piece of code.
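I no longer have the exact generated code in front of me, but the "dangerous" piece looked roughly like this hypothetical snippet: a single line that silently flips every label before training.

```python
# Hypothetical reconstruction of the kind of bug the GPT introduced.
# labels: 1 = movies I liked, 0 = movies I didn't.
labels = [1, 0, 1, 1, 0]

# The "dangerous" line: it silently swaps likes and dislikes,
# so the model would learn to recommend movies I hate.
labels = [1 - y for y in labels]   # <-- the bug: this line should not exist

print(labels)   # prints [0, 1, 0, 0, 1] instead of my real preferences
```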
It also generated code that trained the model only once, without performing proper cross validation, which is not best practice in general. I let that slide, however; it wasn't a big deal for the demonstration.
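For older students (or curious teachers), here is a short sketch of what cross validation can look like with scikit-learn, using invented data; the real project's code may be organized differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Made-up movie features and like/dislike labels.
X = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1],
              [1, 0, 1], [0, 1, 1], [1, 1, 1],
              [0, 0, 0], [1, 0, 0], [1, 1, 0]])
y = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1])

# Instead of training once on all the data, train 3 times,
# each time holding out a different slice of movies for testing.
scores = cross_val_score(LogisticRegression(), X, y, cv=3)
print("accuracy per fold:", scores)
```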
But it gave me the opportunity to explain to the students that if I didn't know how the code worked and trusted the AI 100%, I would be in a lot of trouble. Maybe not for movies, but what if I trained an AI used in an autonomous car and then the car crashed??
After this, we trained the model and I showed them how training an AI basically means positioning a line so that it separates my "likes" from my "dislikes" as well as possible, by gradually reducing the error of its guesses. That's why I had to train it for 1000 epochs to get there.
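For teachers who want to peek under the hood, here is a minimal sketch of that idea: a tiny logistic-regression-style model trained with plain gradient descent for 1000 epochs on made-up data. It is not the exact code from the class, just the same principle.

```python
import numpy as np

# Made-up movie features (one row per movie) and like/dislike labels.
X = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1],
              [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
y = np.array([1, 0, 1, 1, 0, 1], dtype=float)

w = np.zeros(X.shape[1])   # one weight per feature
b = 0.0                    # the bias
lr = 0.1                   # learning rate: how big each nudge is

for epoch in range(1000):                  # 1000 small nudges of the line
    p = 1 / (1 + np.exp(-(X @ w + b)))     # current guesses, between 0 and 1
    error = p - y                          # how wrong each guess is
    w -= lr * (X.T @ error) / len(y)       # nudge the weights...
    b -= lr * error.mean()                 # ...and the bias to shrink the error

print("weights:", w, "bias:", b)
```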
I showed them this graph:
At the end, I asked them to guess what my AI actually was. Was it a smart robot? A sentient being? A digital brain? The Matrix? The students had no clue. So I told them it was nothing that fancy. It was just a bunch of numbers that help me draw a line dividing my data into two groups. We call them weights and biases.
Something like this:
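In code, the whole trained "AI" boils down to something like the hypothetical snippet below; the numbers are invented for illustration, not the actual trained values.

```python
# The whole trained "AI": a few weights, one bias, one formula.
weights = [2.1, -0.7, 1.3]   # made-up numbers, one per feature
bias = -0.5

def predict(features):
    # Score the movie: a weighted sum of its features plus the bias.
    # The line separating "likes" from "dislikes" is where the score equals 0.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "like" if score > 0 else "dislike"

print(predict([1, 0, 1]))   # a made-up animated, low-budget comedy
```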
Having fun with the AI
After we trained the AI together we saw how well it predicted these new movies:
Pretty good!
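If you want to reproduce the prediction step yourself, a small sketch with scikit-learn and invented data could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training movies (features + like/dislike labels).
X_train = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1],
                    [1, 0, 1], [0, 1, 1], [1, 1, 1]])
y_train = np.array([1, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# "New" movies the model has never seen, described with the same features.
new_movies = {"New Movie A": [1, 1, 1], "New Movie B": [0, 1, 0]}
for title, features in new_movies.items():
    guess = model.predict([features])[0]
    print(title, "->", "like" if guess == 1 else "dislike")
```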
So then I asked the students to hack my AI. How would we do that? By hacking my data of course!
So here is what they did:
After "uncheck Ready Player One" and "check Barbie", etc. they figured to just simply reverse my labels (as shown at the bottom of my list).
Smart hackers!
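In code, their hack is a one-line change followed by retraining. Here is a minimal sketch with invented data (not the class code itself):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up movie features and my original like/dislike labels.
X = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1],
              [1, 0, 1], [0, 1, 1], [1, 1, 1]])
y = np.array([1, 0, 1, 1, 0, 1])

honest_model = LogisticRegression().fit(X, y)

# The students' hack: uncheck every like, check every dislike.
y_hacked = 1 - y
hacked_model = LogisticRegression().fit(X, y_hacked)

# The two models now disagree on most movies.
test_movie = [[1, 1, 0]]
print("honest AI says:", honest_model.predict(test_movie)[0])
print("hacked AI says:", hacked_model.predict(test_movie)[0])
```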
And after we retrained the AI we got this:
Naruto was the only one who (barely) survived. He is a persistent little ninja after all!
Next Steps
After the class, I worked with the school teacher on follow-up assignment ideas. Here are some:
- Create a new data set in a format similar to the Movies one, where the AI has to pick one of two choices. For example, sports: for soccer you could train an AI goalie that decides "Should I dive LEFT or RIGHT?" based on data from opponent shots, with features such as "isRightFooted", "runsFast", "wideAngleLeft", etc. (see the sketch after this list).
- The kids can then discuss their data sets with the teacher: how to reduce bias in the data, and how they could collect and label it.
- Play with the code: relabel and retrain the model to see where the AI gets confused, and see if you can add extra movie features to the feature list (or more movies).
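As an example for the first assignment, here is a hypothetical sketch of a goalie data set in the same format as the movie one (all feature names and values invented):

```python
# Hypothetical goalie data set, same format as the movie one.
# Each row is one opponent shot; the label is which way the goalie should dive.
shots = [
    # features = [isRightFooted, runsFast, wideAngleLeft]
    {"features": [1, 0, 1], "dive_left": 1},
    {"features": [1, 1, 0], "dive_left": 0},
    {"features": [0, 0, 1], "dive_left": 1},
    {"features": [0, 1, 0], "dive_left": 0},
]

for shot in shots:
    direction = "LEFT" if shot["dive_left"] == 1 else "RIGHT"
    print(shot["features"], "->", direction)
```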
Reference Materials
If you want to recreate this experience in your class, or have a similar conversation with your child, here are the materials I used:
I hope we learned something useful today,