[Image: a robot, representing artificial intelligence]
Author: Matthew Marino, Co-Principal Investigator of CIDDL;
info@ciddl.org

Artificial intelligence (AI) has changed how we interact with the world. AI lets us speak to our phones to turn on the lights in any room of our house. It can predict what we are looking for during online searches and deliver customized suggestions for items we might be interested in purchasing. Roomba vacuums use AI to process data collected from onboard sensors: they can map a room, identify obstacles, and maximize efficiency during the cleaning process. Clearly, this technology is improving our lives. Can the same principles be applied to teaching and learning? What questions should we be asking? What concerns should we have about the unintended consequences of AI in education? This five-part series will guide you toward a more informed understanding of the potential benefits and challenges associated with AI in education.

A recent blog post in Scientific American by Dr. Chris Piech of Stanford stated:

Many look to AI-powered tools to address the need to scale high-quality education, and with good reason. A surge in educational content from online courses, expanded access to digital devices, and the contemporary renaissance in AI seem to provide the pieces necessary to deliver personalized learning at scale. However, technology has a poor track record for solving social issues without creating unintended harm.

Dr. Piech is a lead researcher at Stanford investigating how autonomous AI agents can act as teaching assistants for students. AI teaching assistants have the potential to reduce educational inequality by providing students with virtual teachers 24/7/365. The challenge for software developers is to ensure the AI provides individualized feedback that motivates and engages students. The agents must understand student progress and then provide the optimal level of support (neither too difficult nor too easy), a target known as the student's zone of proximal development. This is manageable for the AI when students are working on problems that are easy for it to interpret, meaning tasks that are simple and linear. However, researchers at Stanford are working on a process to enable AI teaching assistants to provide meaningful feedback as students complete open-ended problems in STEM disciplines. This is challenging because open-ended problems often have complex, nonlinear solutions. You can learn more about the research in Dr. Piech's blog post.
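To make the idea of targeting a student's zone of proximal development concrete, here is a minimal toy sketch of one way an adaptive tutor could adjust problem difficulty. This is purely illustrative: the function name, thresholds, and update rule are assumptions for this example, not the Stanford researchers' actual method.

```python
# Toy sketch of adaptive difficulty selection: keep the student's recent
# success rate inside a target band (here, 60-80%). Succeeding too often
# suggests the work is too easy; failing too often suggests it is too hard.

def next_difficulty(current_difficulty, recent_results,
                    target_low=0.6, target_high=0.8, step=1):
    """Return the difficulty level for the next problem.

    current_difficulty: int, difficulty of problems served so far (>= 1)
    recent_results: list of bools, True = student solved the problem
    """
    if not recent_results:
        # No data yet: keep the current difficulty.
        return current_difficulty
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > target_high:
        # Too easy: raise the challenge.
        return current_difficulty + step
    if success_rate < target_low:
        # Too hard: ease off, but never go below difficulty 1.
        return max(1, current_difficulty - step)
    # In the target band: the student is appropriately challenged.
    return current_difficulty
```

For example, a student who solved all five recent problems at difficulty 3 would be moved up to difficulty 4, while one who missed them all would be moved down to 2. Real systems face the much harder problem Dr. Piech describes: interpreting open-ended student work well enough to make this judgment at all.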

Take a deep dive. This is an in-depth topic with many potential benefits and challenges. The Public Broadcasting Service (PBS) program FRONTLINE featured a special on AI in 2019. It is worth watching when you have an hour or two to spare.