K-12 Curricular Modules

These five modules provide K-12 educators with a comprehensive, project-based curriculum on AI and Machine Learning, specifically designed from a high school perspective. The content progresses from foundational AI concepts like predictive modeling (Regression vs. Classification) and practical tool usage to a deeper exploration of Neural Networks and specialized architectures like Convolutional Neural Networks. The curriculum then delves into the workings and societal implications of Natural Language Processing, Large Language Models, and generative AI art, including the technical mechanics of Diffusion Models and Generative Adversarial Networks. Throughout all modules, there is a central focus on the ethical, rhetorical, and sociological consequences of AI across real-world applications such as public policy, autonomous vehicles, and disaster response.

These modules are presented in more depth during our annual high school educator workshop, and build on a high school class on AI & Society taught by the lead instructors.

Foundations of AI

This module provides educators with an accessible entry point into the world of Artificial Intelligence, establishing the basic framework for understanding how machines "think." Participants explore the core terminology of AI through relatable metaphors to define inputs (features), outputs, and the "black box" of algorithmic processing. The session covers the history of AI and challenges educators to define the shifting boundary between human intuition and machine capability. By examining the economics of efficiency, participants begin to see AI not just as a tool, but as a system of choices with inherent social consequences.

 

Module Content
Driving Questions:
  • What is AI?
  • What are the concepts needed to understand how AI works?
  • How does AI shape societal decision making?
  • What are constraints to its use?
Goals:
  • Provide teachers with a foundational understanding of how artificial intelligence models work and how they shape human relationships.
  • Show teachers how to communicate this information to students in effective, hands-on ways.
Learning Outcomes:
  • Understanding the core concepts and terms associated with machine learning and artificial intelligence.
  • Understanding how AI models work using specific case studies to help teachers create informed digital citizens.
What is AI?

AI stands for artificial intelligence. The term “artificial intelligence” is used so often that it can be difficult to determine what AI is (and what it isn’t). One definition that works well is this:

"AI is a computer doing something that we believe humans can do better."

What people deem AI today may not be what we describe as AI tomorrow; AI is a moving goalpost. In other words, some people may describe a certain technology as AI while others may not, and what people deemed AI twenty years ago may no longer be considered AI at all. Still, there are some things that most people currently consider to be AI: “computers drawing images from a description,” “computers writing music given a prompt,” “computers writing an essay,” and even “computers driving a car.”

What is Machine Learning?

Note: This explanation was inspired by . For a deeper exploration of this topic, please see .

So far we know that AI is a moving goalpost, and AI encapsulates all those things that a computer does that we believe a human can do better.

Machine Learning (ML) describes one category (and a large one!) of artificial intelligence. In order to define ML, it is helpful to consider two possible approaches to getting computers to do human-like tasks. We will call these approaches (1) knowledge-based AI, and (2) example-based AI. For the sake of an example, let’s consider one task that humans complete fairly well: looking at X-ray images of bones and determining whether there is a fracture present (whether the bone is broken).

Consider Approach 1: Knowledge-Based AI

With this approach, computer scientists provide all of the rules associated with what a fracture might look like. After speaking with many radiologists, orthopedic surgeons, and X-ray technicians, a computer scientist determines exactly what constitutes a fracture on an X-ray image. They then painstakingly program all the details of what to look for in a given image and hope their program produces accurate results.

This approach sounds like it should work. The thought goes: in order to learn how to do something well, talk to the experts, the people with the knowledge, and learn from them. This is the approach early computer scientists took in the 1960s, 1970s, and 1980s.

Unfortunately, this approach did not work very well. Every time computer scientists thought they had a system that could accomplish a task such as the bone-fracture classification problem, their system would make a mistake. When they spoke to the experts about the mistake, they would discover an exception to the rule, something the experts had forgotten to mention during interviews. Because the computer had never been explicitly told to look for it, the computer would classify an item incorrectly. Artificial intelligence seemed to be impossible. Knowledge-based AI, the leading approach to making intelligent machines, had failed.

Then, in the late 1980s and early 1990s, computer scientists tried a different approach. They tried approach 2. They tried example-based AI.

Consider Approach 2: Example-Based AI

With this approach, computer scientists provided machines with lots and lots of data and hoped the machine could use mathematical feedback systems, similar to how neuroscientists believed our brains learned, to find a pattern.

Let us return to our X-ray example. Imagine thousands of X-ray images of bones were given to the computer along with the diagnosis that had been provided by physicians. It would look like this: “Here is picture 1. It is a fracture.” “Here is picture 2. It is a fracture.” “Here is picture 3. It is not a fracture.” And so on. No rules were given to the computers on how to accomplish a task. Instead many, many examples of the task being done correctly were given to the computer, and it was asked to determine the pattern itself.

Did the new approach work? Simply put, the results were astounding. The system worked; it worked really well. This new type of AI, example-based AI, seemed to be the ticket to success! In fact, it was so successful it was given a new name: Machine Learning. Machine Learning is example-based AI, the approach to building intelligent computer systems simply by giving the system data and letting it mathematically determine the patterns that exist.
What is a Model?

Computer scientists like to use this word a lot in the world of machine learning. For example, you might hear something about the designers of ChatGPT, Claude, or Gemini having developed a new model that is more powerful than their last model, and wonder, “What even is a machine learning model?” Simply put, a “model” is a system that has been created to take input values and produce an output prediction.

Consider the following prediction-system that may be created to help a realtor determine a house price, given the number of bedrooms that a house has. Here, the input value for the system is “number of bedrooms” and the output prediction is “house value.”

To build such a system, a realtor may collect some data and put it on a simple two-dimensional graph, with the horizontal axis representing the number of bedrooms, and the vertical axis showing the house values for houses that have sold in the past in a given market. The data might look like this:

Prediction model 1

From this data, a realtor could create the prediction system by trying to match the pattern seen in the data. One such system might look like this:

Prediction model 2

Now, this particular prediction-system is one you may be familiar with: it’s a line of best fit. This prediction-system is a model. In fact, it is one of the simplest machine learning models we can make. 

What makes a model like this useful, is that we can make predictions about new data, simply by using our previous data and pattern, and a new input variable. In this case, a realtor may want to sell a house in the market that is represented by this model, and by providing the number of bedrooms, they can predict the selling price of the house. This might look like this:

Prediction model 3

Notice that there was no previous data on the price of a house with this number of bedrooms, but the model was still able to make a prediction. This is a simple example of a machine-learning model making a prediction. A more complex model could use two input variables, say, number of bedrooms and number of bathrooms, to predict house price. An even more complex example could have ten input variables. Very complex models may have thousands or even millions of input variables helping to make a prediction. Even though we can’t visualize this easily on a graph, we can hopefully see how input variables, combined with a model, help us make predictions.
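The line-of-best-fit model described above can be sketched in a few lines of Python (a minimal sketch: numpy's polyfit computes the least-squares line, and the bedroom/price numbers below are invented for illustration):

```python
import numpy as np

# Invented past-sale data: number of bedrooms and sale price (in $1000s).
bedrooms = np.array([1, 2, 2, 3, 3, 4, 5], dtype=float)
prices   = np.array([150, 210, 195, 280, 300, 360, 450], dtype=float)

# Training: find the line of best fit (least squares).
slope, intercept = np.polyfit(bedrooms, prices, deg=1)

def predict_price(n_bedrooms):
    """Inference: predict a sale price for a house the model has not seen."""
    return slope * n_bedrooms + intercept

# Predict the price of a 6-bedroom house, even though no 6-bedroom
# house appears in the training data.
print(f"Predicted price: ${predict_price(6) * 1000:,.0f}")
```

The same two steps, learning a pattern from past data and then applying it to a new input, are what every machine learning model does, however complex.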

How is AI Useful for Societal Decision Making?

AI and machine learning are used everywhere to help humans make decisions. Because computers can learn patterns from large amounts of data, people can supply data on nearly any problem and build models that help make predictions. Below are some of the areas where AI helps inform societal decision making.

Disaster Relief and Resource Allocation

Humans can provide computers with drone footage of flood areas in which pictures of building structures have been labeled as completely destroyed, partially destroyed, or undamaged. After a storm hits, unlabeled drone pictures of structures can be fed into the resulting model, which predicts which pictures represent the most damaged homes so that aid can be prioritized for those regions, even before humans ask for help.

Medical/Public Health

Humans can provide computers with information about patients along with their diagnoses and create models that can predict a correct diagnosis for a new patient given their presenting symptoms. Another model could predict the way diseases spread in a population, given information about past diseases and the patterns that are learned.

Transportation

Humans can provide information from sensor-equipped vehicles that record how cars successfully drive on a road under a given set of conditions and use it to create a driving model. This model can then be used by a car to predict how to drive itself.

Definition of Key Terms

To better understand AI, it is helpful to understand the terminology used. Here are some key terms to get you started:

  • Big Data: Complex datasets that exceed the processing capacity of traditional data management tools. Big data is characterized by volume (a lot of data), velocity (the high speed of data creation and processing), and variety (diverse data types like text, images, and video).
  • Input: What goes into our model to help us predict, often called the features (e.g., the number of bedrooms in a house).
  • Black box: In engineering, the term is used to describe any device whose inner workings are hidden from the user. In ML, this may be a particular algorithm or set of steps, whose inner workings are mysterious to the user. A black box allows us to hide unimportant details and is very important for human problem solving.
  • Output: The prediction from our model, often called the label (e.g., the price of the house).
  • Model: The “machine” associating the input to a predicted output by finding the pattern in the data.
  • Training: When a model is being built, and data is provided to help it learn the pattern, we say the model is being “trained”. 
  • Inference: When a model is being used to make a prediction about data it has not yet seen (or has not been trained on), we say the model is making an “inference”.
For a more comprehensive list of industry terms, please refer to .
Readings

Note: While many of these are available to the general public, some of the links below require subscription accounts.

The Fourth Industrial Revolution and Internet of Things (IoT)
  • The Washington Post
  • WIRED
  • The New York Times
Case Study: AI in the Film Industry
  • Los Angeles Times
  • Los Angeles Times
  • Los Angeles Times
Human Decision Making and Sports
  • FiveThirtyEight
  • Sports Illustrated

Regression/Classification

Building on foundational concepts, this module dives into the mechanics of how AI models make predictions. Educators explore the distinction between Regression (predicting continuous values, such as life expectancy) and Classification (categorizing data into discrete labels). Using hands-on tools like Teachable Machine, participants build their own models to see these concepts in action. The module emphasizes the importance of evaluation, moving beyond simple Accuracy to master the Confusion Matrix. Finally, the module contextualizes these technical foundations by examining the social consequences of AI in real-world applications, including public policy, disaster management, and economic disruption.

 

Module Content
Driving Questions:
  • How do regression and classification models take an input and produce a prediction (output)?
  • What are the social implications of the results (labels) of these types of models?

Goals:
  • Provide teachers with a better understanding of these basic model techniques used in machine learning.
  • Based on this understanding, help teachers consider how AI classification/regression models can enhance human decision making.
Learning Outcomes:
  • Fully understand how predictive models using classification/regression shape political culture, public policy, and the criminal justice system.
Learning in Action
Accuracy Demo (using playing cards)

Below is a simple activity you can do using a standard deck of playing cards (jokers removed) to demonstrate the issues with an often-used metric for machine learning algorithms: accuracy.

Materials Needed: A standard deck of playing cards

Learning Objective: Understand the concept of accuracy as it pertains to model predictions
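While the exact card activity is presented in the workshop, the pitfall it targets is easy to show in code. In the hypothetical sketch below (one possible framing, not necessarily the workshop's version), a lazy "model" that always predicts "not an ace" scores roughly 92% accuracy (48 of 52 cards) without ever finding a single ace:

```python
# A standard 52-card deck: 4 aces, 48 non-aces.
deck = ["ace"] * 4 + ["other"] * 48

# A lazy "model" that always predicts the majority class.
predictions = ["other" for card in deck]

# Accuracy = fraction of predictions that match the true label.
correct = sum(pred == card for pred, card in zip(predictions, deck))
accuracy = correct / len(deck)

print(f"Accuracy: {accuracy:.1%}")  # high accuracy, yet zero aces found
```

This is why accuracy alone can be misleading on imbalanced data, and why the module later moves on to the confusion matrix.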

Google's Teachable Machine

Link
Cost: Free

Learning Objectives:

  • Key Concept: Understand classification and how AI models use data to classify images, sounds, or body poses
  • Real-World Application: Help students visualize the capabilities of AI and practical use-cases
  • Critical Thinking: Identify the inherent limitations and "fail points" of recognition models.
💡 Activity Idea: Getting Started
  • Live Demo: Use webcams to classify classroom objects (e.g., pencils vs. pens, papers vs. folders)
  • Dataset Upload: Use a to show where computer vision systems struggle.
  • Discussion: Ask students why the model might confuse a book cover with a movie poster.

Formula Bot

Link
Cost: Free to start with paid plans for additional capabilities

Learning Objectives:

  • Key Concept: Explore regression by using natural language to generate complex formulas that analyze data and predict outcomes
  • Interact with Large Datasets: Query complex datasets to find specific insights without needing to write manual code.
  • Evaluate Data Predictors: Interpret which variables (like GDP or schooling) most heavily influence a prediction, such as life expectancy.
💡 Activity Idea: Getting Started
  • Dataset: Use the
  • Discussion: Have students ask FormulaBot: “Which features are the best predictors of life expectancy?”
Opening the Black Box

Regression and classification techniques are easy ways to teach students about basic supervised machine learning models. Students will learn that these predictive models have two main phases:

  1. Training Phase: Labeled data is given to the machine so that patterns can be recognized.
    1. Example 1: A dataset with house prices and the number of bedrooms in each house is given to a computer. It learns the pattern between the number of bedrooms and the house price.
    2. Example 2: Emails are labeled by a human over several months (“This email is important” or “This email is spam”). The computer recognizes patterns in the emails.
  2. Inference Phase: Unlabeled data is given to the trained machine, and it will attempt to make predictions about the data it is seeing.
    1. Example 1: A new house with a certain number of bedrooms is going up for sale; the machine automatically estimates the price of the house based on the labeled data it was trained on.
    2. Example 2: Emails that have not been labeled come into the inbox, and the computer automatically sorts them into "spam" or "important".
Note the distinction between regression and classification; it comes from the values of the predictions: regression predicts a continuous value (such as a price), while classification predicts a discrete label (such as "spam" or "important").
[Slide: the differences between regression and classification]
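The two phases can also be sketched in a tiny Python example. This is a toy illustration, not a real spam filter: the flagged words, the example emails, and the simple midpoint-threshold rule are all invented for this sketch.

```python
# The "feature" for each email is just how many flagged words it contains.
FLAG_WORDS = {"winner", "free", "prize", "urgent"}

def count_flags(email):
    return sum(word in FLAG_WORDS for word in email.lower().split())

# --- Training phase: a human provides labeled examples ---
labeled = [
    ("You are a winner claim your free prize now", "spam"),
    ("Urgent free offer winner", "spam"),
    ("Meeting moved to Tuesday", "important"),
    ("Here are the lecture notes", "important"),
]
spam_counts = [count_flags(e) for e, y in labeled if y == "spam"]
ham_counts  = [count_flags(e) for e, y in labeled if y == "important"]

# The learned "pattern": the midpoint between the two class averages.
threshold = (sum(spam_counts) / len(spam_counts) +
             sum(ham_counts) / len(ham_counts)) / 2

# --- Inference phase: an unlabeled email arrives and gets sorted ---
def classify(email):
    return "spam" if count_flags(email) > threshold else "important"

print(classify("Free prize for every winner"))    # spam
print(classify("Agenda for tomorrow's meeting"))  # important
```

Notice that nobody programmed a rule like "three flagged words means spam"; the threshold was derived from the labeled examples, which is the essence of the training phase.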
Risks and Considerations

Bias

A model’s ability to make successful predictions hinges on the data used to train it. If that data is not varied enough, the model can be biased toward a certain result.

Consider a model that is used to predict a person's favorite musical artists. Such a model would require input from locations across the nation. If pollsters asked only those living along the East Coast, the collected data would be skewed or biased; it would amplify the preferences of those living along the eastern seaboard and miss the preferred artists of those living anywhere else. If an AI model makes predictions using this East-Coast-heavy data set, its predictions will reflect the same bias found in the original data.
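A quick simulation makes this concrete. In the sketch below (all counts invented for illustration), Artist B is the national favorite, but the East Coast strongly prefers Artist A; polling only East Coast respondents flips the prediction:

```python
# Invented poll responses: (region, favorite artist).
# Nationally, Artist B wins (110 vs. 90), but the East Coast favors A.
responses = (
    [("east", "A")] * 70 + [("east", "B")] * 30 +   # East Coast respondents
    [("other", "A")] * 20 + [("other", "B")] * 80   # everyone else
)

def top_artist(polled):
    """Predict the favorite artist as the most common answer in the sample."""
    votes = {}
    for _, artist in polled:
        votes[artist] = votes.get(artist, 0) + 1
    return max(votes, key=votes.get)

east_only = [r for r in responses if r[0] == "east"]

print(top_artist(responses))  # "B": the true national favorite
print(top_artist(east_only))  # "A": the East-Coast-only, biased answer
```

The prediction logic is identical in both calls; only the training sample changed. Biased data, not a biased algorithm, produced the wrong answer.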

Overfitting and Underfitting

To understand overfitting and underfitting it is useful to think about an analogy first. Consider the following three scenarios for T-shirts a person might wear:

  1. Generic-Sized T-shirt: A person looking to get a new T-shirt might go to a department store and choose one of three sizes: small, medium, or large. The T-shirt may not be a perfect fit, but a person might choose a given size, say, medium, and be happy with the way it looks. If they no longer want the T-shirt, they could give it to a similar-sized friend who could use it as well. Stores like to carry these generic-sized shirts because they fit a large group of people pretty well.
  2. Custom-Tailored T-shirt: Now consider a custom-tailored T-shirt. This type of T-shirt might be created after a particular person gets their body measured by a tailor, so that the shirt perfectly fits the wearer. The person may be very satisfied with the way it looks and fits. However, there is a drawback: this T-shirt only works for the wearer or someone with the exact same body shape. A store would not keep a custom-tailored T-shirt in its inventory because so few people would fit into it.
  3. XXL Free T-shirt: Lastly, consider a free T-shirt being provided to everyone at a sports stadium for a special fan appreciation event. Since the stadium has no idea what size shirt any given person wears, they place an XXL T-shirt on the back of every seat, knowing that most people can fit into it, even if it is a bit big.

These three T-shirt examples are analogous to issues we see in machine learning models. Consider the analogous machine learning models making predictions on X-ray images.

  1. The model fits most data pretty well: This model has been trained on X-ray images so that it understands the pattern in what is deemed an image of a fractured vs. a non-fractured bone and can generalize well to images it has not yet seen. It may not fit every image perfectly, but it fits most images pretty well.
  2. Model is overfit: This model has been trained on certain X-ray images but does not generalize well to X-ray images it has not seen. Perhaps the model was built from X-ray images taken in a specific hospital, where each image was slightly brighter than those from other hospitals due to the exposure of the film. The model gets perfect results on the X-ray images in its training set, but when images taken at a different hospital are put into it, it is not very successful at determining whether an image represents a bone fracture. The model is overfit.
  3. Model is underfit: This type of model may not have been trained enough. It does not do a good job at fitting the data at all, and will not yield good results with any X-ray images. Perhaps only three images were used to train the model, one of a broken finger, one of a broken rib, and one of a broken wrist. Since the model did not have enough data to learn the pattern of fractured vs non-fractured images, it does not predict well.

In summary:

  • If a model is overfit: it will not be good at making predictions outside of the data it has been trained on.
  • If a model is underfit: it will not be good at making predictions at all.
  • If a model fits the data well: it can make good predictions on the data it was trained on and also generalizes well to new data.

When we consider a model’s performance, we should ask whether it fits the data well or whether it is overfitting or underfitting the data.
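The same ideas can be demonstrated with a short experiment. Since we cannot run the X-ray example here, the sketch below substitutes simple curve fitting with numpy (all data invented): a degree-1 polynomial plays the role of the well-fitting "medium T-shirt" model, while a degree-7 polynomial memorizes the eight training points exactly, like the custom-tailored shirt, and generalizes worse:

```python
import numpy as np

# Invented data following roughly y = 2x, with a little hand-added noise.
x_train = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)
y_train = np.array([0.5, 1.8, 4.6, 5.7, 8.9, 9.6, 12.8, 13.5])
x_test  = np.array([0.5, 2.5, 4.5, 6.5])   # unseen inputs
y_test  = 2 * x_test                        # roughly where they should fall

def fit_and_errors(degree):
    """Fit a polynomial and return (training error, test error) as MSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err  = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

simple_train, simple_test = fit_and_errors(1)  # fits most data pretty well
over_train, over_test     = fit_and_errors(7)  # memorizes the training data

# The degree-7 model achieves near-zero training error by passing through
# every training point, but it wiggles between them on unseen inputs.
print(f"degree 1: train={simple_train:.3f}  test={simple_test:.3f}")
print(f"degree 7: train={over_train:.3f}  test={over_test:.3f}")
```

An underfit model would be the opposite extreme, such as fitting a degree-0 polynomial (a flat line): it would do poorly on the training data and the test data alike.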

Over-reliance on AI

When making decisions with AI, it is important to always remember the role of a human in these systems. In limited cases, AI systems can be used to make decisions independent of humans, once they are sufficiently trained, deemed to be sufficiently unbiased, and deemed to generalize well to new data. However, in most cases, AI systems are best used in conjunction with humans. AI systems can make predictions or suggestions, and humans can use their powers of taste and discernment to determine whether a prediction or suggestion is worth using.

In summary: AI systems are best used alongside humans to aid their decision making, not to replace it.

Readings

Note: While many of these are available to the general public, some of the links below require subscription accounts.

Background and Context
  • The New York Times
Case Study: Flint, MI
  • Los Angeles Times
  • Los Angeles Times
  • Los Angeles Times
  • Los Angeles Times
  • POLITICO
Case Study: Pittsburgh, PA
  • The Guardian
  • PublicSource
  • 90.5 WESA
  • 90.5 WESA
  • NRDC
  • Pittsburgh's Public Source
  • Pittsburgh Post-Gazette
Case Study: Newark, NJ
  • The New York Times
Classification: The Criminal Justice System and Decision Making
  • TEDx (Video)
  • NBC News (Video)
  • Wall Street Journal (Video)
  • WIRED (Video)
  • TEDx (Video)
  • The Atlantic (Video)