Biases Baked into Algorithms
Research has also shown that as these tools grow in scope and in their ability to mimic characteristics of human intelligence, their biases expand as well.
According to IBM, AI bias refers to algorithms that “produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.”
AI bias, for example, has been shown to negatively affect non-native English speakers, whose written work can be falsely flagged as AI-generated, potentially leading to accusations of cheating, according to a Stanford University study.
For young Black girls in particular, “Facial recognition and technology may not resemble them. Or there could be biased language models that can perpetuate harmful stereotypes,” says Freeman.
Scientists from MIT found that a language model thinks that “flight attendant,” “secretary,” and “physician’s assistant” are feminine jobs, while “fisherman,” “lawyer,” and “judge” are masculine.
Meanwhile, researchers at Dartmouth found that language models have biases, such as stereotypes, baked into them. Their findings suggested, for example, that the models assume a particular group of people is either good or bad at certain skills, or that someone holds a certain occupation based on their gender.
Diversifying AI Creators
But bias inside the classroom is not new for Black students and other students of color, making it even more critical for educators and developers to understand the way AI can affect students of marginalized groups.
“Black students face unconscious bias without technology,” Freeman says. “So having developers of AI that do not further perpetuate that bias is important.”
Diversifying the creators of AI has a direct impact on the results that the algorithms produce. Because AI is trained on past data that mirrors stereotypes or biases already present in society, these tools can unknowingly amplify those stereotypes in classrooms.
“AI should only supplement what is being done in the classroom in order to level the playing field for students of color,” Freeman says.
The introduction of AI in classrooms is filled with opportunity for growth. However, its effects on students of color require a strategic and collaborative approach, Freeman says.
“Equity goes beyond our classrooms and our region. So, policymakers need to get information from the majority of educators to figure out what is needed, especially for schools in rural areas,” she says.
AI Guidance for Educators
Currently, there are no federal policies on AI in education, although the U.S. Department of Education (ED) has released guidance on the topic.
Additionally, the Biden administration released an extensive executive order on AI that calls on ED to, among other things, develop an “AI toolkit” for education leaders implementing recommendations from the education department’s AI and the Future of Teaching and Learning report. The recommendations include appropriate human review of AI decisions, designing AI systems to enhance trust and safety, and developing education-specific guardrails.
Some states have provided official guidance for the best ways to integrate AI in the classroom. But mostly, teachers have been left to their own devices to decide whether to utilize it in their instructional activities.
Organizations like the International Society for Technology in Education (ISTE) have partnered with educators and students to give them the tools to be more efficient and confident with AI-driven technologies. They are focused on giving everyone, especially people from marginalized groups, the access needed to become empowered users of this new technology.
Because AI is not going away anytime soon, it becomes increasingly important for teachers and students to feel confident in their use of it. As different AI technologies continue to advance, Michigan’s Melissa Gordon hopes “students will have a safe space to experience AI and figure out the best ways it can be used.”