Artificial intelligence (AI) has gradually gained acceptance at colleges and universities as an effective way to automate a range of tasks. Chatbots can answer students’ questions about class schedules or check in on their well-being. AI-generated emails can remind students of important deadlines, prompting them to register for classes, turn in assignments, and pay fees on time. And, in one particularly controversial use, AI-based software is increasingly able to detect plagiarized assignments.
A Georgia Tech professor even used AI to create a virtual teaching assistant, named Jill Watson. It turns out that “Jill” receives very positive evaluations from students.
Higher education has been making progress since its earliest forays into digital transformation, which involved automating daily tasks, digitizing workflows, building richer datasets, and creating dashboards to enhance analytics. Now institutions aren’t just using technology to do the same things better; they are deploying AI to do better things.
University leaders have learned that AI can do more than just produce routine prompts and generate useful tips. They are beginning to use technology to address some of their biggest and most persistent challenges, including core issues like increasing enrollment, improving student retention, and allocating financial aid.
And as AI expands into these core academic practices, new concerns are also being raised about the tool’s threats to privacy and its vulnerability to systematic bias.
According to Arijit Sengupta, founder of Aible, an AI company based in San Francisco, colleges and universities are beginning to catch up with other industries like banking and healthcare in using AI to impact key performance indicators.
Sengupta told me in a recent interview that he now has between five and ten higher education clients using AI to make progress on key outcomes such as increasing applicant yield, preventing attrition between the first and second year, targeting institutional financial aid, and optimizing the solicitation of alumni donors.
Sengupta knows that university leaders have often been disappointed with the results of previous AI projects, and he agrees that in many cases AI is a waste of time and money because it is not designed to achieve the tangible objectives and specific results that matter most to the institution.
With this in mind, Sengupta offers his customers the following guarantee: if Aible’s AI models and prescribed interventions do not produce value within the first 30 days, the customer will not be charged. He told me that many university officials think they need to understand algorithms and AI models before they can apply them, but according to Sengupta, they’ve got it all wrong. “Our approach is to teach the AI to ‘speak human,’ rather than the other way around.”
Once an AI model has sorted through the complexity of a large amount of data and detected previously hidden patterns, the focus should shift to “what do we do about it?” In other words, whom should we target, with what intervention, and when? This is where colleges tend to get bogged down, says Sengupta: “Their IT experts are looking for the perfect algorithm, rather than focusing on how best to modify their practices to take advantage of what machine learning has given them in terms of predictions and recommendations.”
For example, a medium-sized private university wanted to increase the percentage of applicants who would eventually enroll. It spent thousands of dollars buying lists of prospective students and hundreds of hours calling the students on those lists. But the end result was disappointing: less than 10% of applicants officially enrolled.
Instead of carpet-bombing every name on the list, Aible was able to generate a model that guided the university toward much more precise targeting of students. It identified a subset of applicants who — based on their demographic characteristics, income level, and family history of college attendance — were most likely to respond to timely phone calls from faculty. It also identified the amount of financial aid it would take to influence their enrollment decision.
Aible then advised the university to make personal calls to these students with the tailored financial aid offers. The entire intervention, from identifying and collecting the relevant data to developing the algorithm and recommending the intervention strategy, took approximately three weeks. Preliminary results indicate that the university will likely see an increase of about 15% in its enrollment rate.
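The targeting approach described above can be illustrated with a toy sketch. Aible’s actual models are proprietary and not described in detail here, so the features, weights, thresholds, and aid formula below are invented purely for illustration: score each applicant’s likelihood of enrolling after outreach, then reserve personal phone calls and tailored aid offers for those above a cutoff.

```python
# Toy sketch of propensity-based applicant targeting.
# All features, weights, and thresholds are hypothetical; a real
# system would learn them from historical enrollment data.

def enroll_propensity(applicant):
    """Score an applicant's likelihood of enrolling after outreach."""
    score = 0.10  # small base rate for everyone
    if applicant["first_generation"]:
        score += 0.30          # family history of college attendance
    if applicant["income_bracket"] == "middle":
        score += 0.25          # aid offers move the needle most here
    if applicant["in_state"]:
        score += 0.15
    return min(score, 1.0)

def target_applicants(applicants, threshold=0.5):
    """Return only applicants worth a personal phone call, each
    paired with a (hypothetical) tailored aid amount."""
    targeted = []
    for a in applicants:
        p = enroll_propensity(a)
        if p >= threshold:
            aid = 5000 + int(10000 * (1 - p))  # more aid for marginal cases
            targeted.append((a["name"], round(p, 2), aid))
    return targeted

applicants = [
    {"name": "A", "first_generation": True,  "income_bracket": "middle", "in_state": True},
    {"name": "B", "first_generation": False, "income_bracket": "high",   "in_state": False},
]
print(target_applicants(applicants))  # only applicant "A" clears the threshold
```

The point of the sketch is the workflow, not the numbers: the model narrows a long purchased list down to the small subset where a call plus a calibrated aid offer is most likely to change the decision.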
When Nova Southeastern University in Fort Lauderdale, Florida, wanted to use its data to improve undergraduate retention, it used an Aible solution to identify the students most likely to leave. This has helped the university’s Center for Academic and Student Success target and prioritize its retention efforts for the students most at risk.
While most retention efforts are reactionary—activated only after finding a warning sign that a student is in academic jeopardy—an effective AI strategy should help a college target curricular changes, step up guidance, and offer support services much earlier, before a student begins to experience trouble.
One thing I discovered while researching this article is that colleges are often reluctant to acknowledge that they are using AI for purposes like these, insisting on remaining anonymous in press reports. This concern came as no surprise to Sengupta, who attributes it to the belief that using AI increases the risk that someone’s privacy will be breached.
One way to protect individuals’ privacy is to keep all data on university servers rather than on a vendor’s servers. Another is to avoid reporting on groups of fewer than 25 people, so that information about individuals cannot be inferred.
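The second safeguard is a form of small-cell suppression: drop any subgroup below a minimum size before reporting, so that no one can back out details about an individual from a tiny group. A minimal sketch, assuming the threshold of 25 from the article; the records and field names are hypothetical:

```python
# Minimal small-cell suppression sketch. Groups with fewer than
# MIN_GROUP_SIZE members are omitted from any reported counts.
from collections import Counter

MIN_GROUP_SIZE = 25

def suppress_small_groups(records, group_key):
    """Return group counts, omitting groups below the threshold."""
    counts = Counter(r[group_key] for r in records)
    return {g: n for g, n in counts.items() if n >= MIN_GROUP_SIZE}

records = [{"major": "Biology"}] * 30 + [{"major": "Classics"}] * 5
print(suppress_small_groups(records, "major"))  # {'Biology': 30}; Classics is suppressed
```

In practice the threshold and the suppression rules would follow the institution’s data governance policy rather than a hard-coded constant.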
Hernan Londono, senior strategist for higher education at Dell Technologies, believes privacy concerns aren’t the only reason universities might be reluctant to use AI and reluctant to admit it when they do. “AI-based interventions can be biased because various student populations can be differentially excluded from the data,” he told me.
Not only does AI reflect human biases, but it can amplify them when non-representative data is fed to algorithms that are then used to make important decisions. For example, Amazon stopped using a hiring algorithm after it discovered it favored candidates based on words like “executed” or “captured” that were more common on men’s resumes than women’s.
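The mechanism is easy to demonstrate with a toy scorer. The resumes and word list below are invented and mirror the reported Amazon case only in spirit: if the historical “hired” examples came mostly from one group, a naive model learns to reward that group’s vocabulary rather than actual qualifications.

```python
# Toy illustration of how biased training data skews a model.
# All data here is invented; this is not Amazon's actual system.

# Suppose historically "hired" resumes, drawn mostly from men,
# over-represent certain verbs. A naive scorer trained on that
# history ends up rewarding those words.
favored_words = {"executed", "captured"}  # learned from skewed history

def naive_score(resume):
    """Score a resume by counting historically 'favored' words."""
    return sum(word in favored_words for word in resume.split())

# Two resumes describing essentially the same work:
print(naive_score("executed and captured key accounts"))  # 2
print(naive_score("delivered and won key accounts"))      # 0
```

The two candidates are equally qualified, yet the scorer systematically prefers one phrasing, which is exactly how a skewed training set becomes a skewed decision.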
As significant as concerns about privacy and bias may be, colleges will inevitably increase their use of AI. It’s too powerful a tool to sit on the shelf of higher education. Its applications will continue to grow, and with proper controls and precautions, it can be used to improve college performance and promote student success at the same time.