Case #17: AI To The Rescue - Solving The Academic Performance Puzzle vs. Privacy
The development of artificial intelligence (AI) systems and their increasingly common use in society, both privately and publicly, give rise to ethical dilemmas and difficult questions.
Following the shutdown of schools during the COVID-19 pandemic, an inner-city public high school was failing to meet state standards. The student dropout rate had climbed to 12% – an all-time high. While the school’s focus was on preparation for college, it offered multiple diploma tracks including traditional, traditional with academic endorsement (college-bound), vocational, and alternative. The criteria for college-bound students were completion of 26 credit hours, a minimum GPA of 2.5, and ACT sub-scores of at least 17 in English and 19 in Math. Students at the school were having difficulty meeting these standards, with only about half graduating on time – significantly lower than the state’s overall 75% on-time graduation rate. The school board feared a takeover by the state board of education and therefore embarked on a plan of action to improve educational processes and student performance.
One bright spot for the district was the recent hiring of a relatively young, energetic, and progressive high school principal, Dr. Brightworth, whose previous high school was the highest performing in the state. That school is in a suburban area with access to university labs and proximity to expanding high-tech companies along Route 128 – dubbed “America’s Technology Highway”. Dr. Brightworth was hopeful for the school’s future and diligently searched for a way to turn things around.
She came up with a novel idea and met with the school board to obtain its approval. She suggested that they take advantage of the extensive databases of student behavior, performance, disciplinary actions, attendance records, and other demographics and metrics that the district had been collecting over the years. Prior to Dr. Brightworth’s arrival, the district had implemented security measures to address the risks associated with school shootings. One significant step was the issuance of scannable ID cards, which students used to open various doors, confirm attendance, purchase food, use restrooms, unlock computers, etc. Additionally, the school’s Wi-Fi network tracked students’ internet use on personal devices and school-issued laptops. Students’ movement around campus was also traceable. Large quantities of information were captured and stored in the district’s databases. Dr. Brightworth shared that her previous school used this information frequently for various reasons, most importantly to identify possible factors and causes of poor performance and dropping out, and then to implement plans and processes to reduce these problems.
Dr. Brightworth secured the school board’s approval and funding to hire a technology company to analyze all the historical data. She solicited bids, interviewed many companies, and hired HAL-AnalytIcs, which promised insights into student performance using artificial intelligence. With the help of teachers, administrators, counselors, and school psychologists, specific goals were formulated, which in turn shaped the rules and algorithms used to analyze the data. Finally, the scope of work was developed:
- Using AI tools, analyze and correlate collected historical data, performance data, and dropout information in order to identify “at-risk” students.
- Equip teachers with detailed information enabling them to apply resources and personalized learning plans for at-risk students, including regular follow-ups with students, counselors, social workers, and parents, and adjusting student workloads and schedules.
- Identify new metrics such as employment, extracurricular activities and school transportation (walking vs. bus vs. personal cars) and determine which may be causal factors for poor performance and/or dropping out of school.
Because Dr. Brightworth believed the situation was dire, knew the difficulties of gaining consensus for new programs, and already had school board approval, she did not notify students or parents of the HAL-AnalytIcs contract.
HAL-AnalytIcs began with a historical look-back analysis of risk predictors and demographics, including race, ethnicity, gender, mobility, address, employment, health care, home life, English language proficiency, and more. When available, survey data on sleep habits, sexual activity, and drug and alcohol use were also analyzed. Academic factors such as grades, standardized test scores, history of disciplinary issues, and attendance were, of course, included. Finally, data related specifically to teachers and staff were incorporated, including academic degrees and certifications, continuing education, the percentage of students failing or passing a teacher’s class, years of experience, teacher attendance, pay, benefits, and time off.
HAL-AnalytIcs then collected new data for current students and, using AI software, established correlations between the records of previously poor-performing students and dropouts and those of current students. HAL-AnalytIcs was able to produce its own data metrics and generate inferences that would not have been possible without feeding the school’s original data into its AI algorithms. The company quickly moved to protect the resulting algorithm as proprietary intellectual property, knowing that it could be used in future business opportunities with other school districts across the nation.
HAL-AnalytIcs was very successful at identifying key indicators that, when considered collectively, predicted with significant accuracy whether a student might perform poorly or drop out. The company was also able to identify previously overlooked factors, such as poor nutritional options in school.
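The case does not disclose HAL-AnalytIcs’ actual methods, but a minimal sketch of the kind of risk-scoring pipeline described above – training a classifier on historical outcomes and then scoring current students – might look like the following Python. Every file name, feature column, and threshold here is an illustrative assumption, not a detail from the case.

    # Hypothetical sketch of a dropout risk-scoring pipeline; the column
    # names, CSV files, and 0.5 threshold are assumptions for illustration.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Historical records: one row per former student, with a known outcome.
    historical = pd.read_csv("historical_students.csv")
    features = ["gpa", "attendance_rate", "discipline_incidents",
                "act_english", "act_math", "free_lunch_eligible"]
    X, y = historical[features], historical["dropped_out"]  # 1 = dropped out

    # Hold out a test split to check that learned correlations generalize.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Holdout AUC:",
          roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

    # Score current students and flag anyone above the chosen risk threshold.
    current = pd.read_csv("current_students.csv")
    current["risk_score"] = model.predict_proba(current[features])[:, 1]
    at_risk = current.loc[current["risk_score"] > 0.5,
                          ["student_id", "risk_score"]]
    print(at_risk.sort_values("risk_score", ascending=False))

Even in this simplified form, the design choices carry ethical weight: which features are included, where the risk threshold is set, and who sees the resulting scores are all decisions the case leaves in the vendor’s hands.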
HAL-AnalytIcs provided teachers with generic profiles for at-risk students to help them understand why an individual might be struggling, and it suggested targeted action plans. This was well received by teachers, who were familiar with Individualized Education Programs (IEPs) for special-needs students but now had specific IEPs for regular at-risk students as well. These new IEPs provided measurable benchmarks, checklists, tutoring suggestions, customized assignments, guidance for involving psychologists or social workers, and even templates for providing parent feedback. Teachers were required to follow the recommendations made by HAL-AnalytIcs. By the end of the academic year, Dr. Brightworth’s initiative appeared to have been successful – decreasing the dropout rate to 7% and increasing the on-time graduation rate to almost 70%!
The success of HAL-AnalytIcs’ approach caught the attention of the local media and parents. At school board meetings, during public comment periods, numerous parents raised concerns.
One mother objected, saying, “You work for us! We voted you in to represent us, and now you are going behind our backs. By hiring this so-called AI company, you have pigeon-holed my son! You have lumped him in with all the disruptive, autistic, ADHD, and other mentally challenged kids that require these IEPs. You did not consult with us when you mainstreamed these difficult kids into the classroom with all the normal kids. Those special needs kids take up all the teachers’ time. No wonder the school is failing! Can’t we go back to the good old days and focus on reading, writing, and arithmetic?”
Bewildered, Dr. Brightworth thought that the community would be delighted with the results and could not believe the firestorm this created. Don’t they understand, she thought, that AI was helping to modernize and improve education? Numerous teachers advocated for the program, telling all who would listen that this was nothing new. These action plans, they said, had been used for years, except that now AI had, by analyzing actual data, simplified the analytical piece, helped to identify at-risk students, and provided customized IEPs. This, they argued, freed teachers up to do what they are paid to do – teach!
Discussion Questions:
When does the use of personal information by AI become wrong or unethical?
Does the end (improved education) justify the means (the exploitation and use of private information)?
Who should be responsible and accountable for such decisions?
Are the concerns the same in both private and public school settings?
In a world of educational competition, is it ethical for schools to use AI similarly in the admission process?
Should parents and students have access to their “risk profiles”?
Should students have the right to appeal and/or opt out?
HAL-AnalytIcs stands to generate significant revenue from the exploitation of students’ data, so should the school district or students be compensated in some way? Or should they be satisfied with the improved educational opportunity?
What about the use of AI in other industries?
In medicine, can providers deny patients access to certain treatments based upon an AI analysis of their lifestyle?
Should insurance companies be allowed to deny insurance or charge higher premiums for individuals who are deemed high risk using AI?
Is the fear and concern surrounding AI justified?
In general, what are some ethical considerations related to the inevitable and widespread implementation of AI in our society? Worldwide?