
Can That Assignment Be Completed With GenAI?

Monday, October 7, 2024
By Katharina De Vita, Gary Brown
The AI Risk Measurement Scale helps academics gauge whether students might misuse generative AI to demonstrate learning.
  • The AI Risk Measurement Scale, known as ARMS, helps program leaders determine if an assessment is at high risk of being completed by AI. If it is, the assignment should be redesigned.
  • A year after ARMS was introduced at the University of Greenwich, cases of academic misconduct dropped by nearly 40 percent.
  • ARMS has promoted collaboration and continuous improvement among faculty as they compare approaches, share effective strategies, and adopt best practices.

 
The emergence of generative artificial intelligence (GenAI) has significantly disrupted the higher education sector. Some universities have adopted a defensive stance, tightening preventive measures while policing and prohibiting the use of GenAI tools. Other schools have chosen to embrace the new technology, a choice that requires them to rethink their approach to assessment.

The University of Greenwich in London stands among the latter group. To maintain academic integrity in the face of rapid technological advancements, the school has adopted a tool called the AI Risk Measurement Scale (ARMS), designed to help academic staff evaluate the risk that students will misuse GenAI to complete their assignments.

The university piloted ARMS in three programs offered by the Greenwich Business School, which subsequently adopted the tool across all 45 of its programs. Soon after, ARMS was rolled out across other faculties at the university. The result? In the 2023–24 academic year, the school’s use of the tool reduced assessment misconduct by almost 40 percent over the previous year—and university officials learned valuable lessons.

An Introduction to ARMS

Approximately 21,000 students from nearly 180 countries attend the University of Greenwich, and each student typically undergoes 12 to 20 summative assessments per year. When GenAI tools such as ChatGPT became widely available, the university proactively laid the groundwork to embrace the technology in teaching, learning, and assessment.

At the start of 2023, the school revised relevant policies around assessment misconduct and encouraged faculty to critically review their assessment strategies. At the same time, it developed ARMS to provide academics with a practical tool for gauging the likelihood that students would misuse GenAI to complete assignments. ARMS categorizes assignments on a five-level scale of risk, from very low to very high.

For instance, one of the pilot programs for ARMS was the BA in International Business. At the time, it featured 327 students undergoing 56 different assessments. According to the ARMS analysis, 46 percent of the program’s assignments had a very low risk of being completed by GenAI. Twenty percent were judged to be low risk, 14 percent moderate risk, and 18 percent high risk. Only 2 percent fell into the very high risk category.
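
As a minimal illustration of how such a risk profile is tallied, the sketch below counts hypothetical per-assessment ratings; the pilot figures above came from program leaders' reviews, not from this code, and the numeric encoding of the levels is our assumption:

```python
from collections import Counter

# The five ARMS risk levels, from very low to very high; this numeric
# encoding is our assumption, not part of the official ARMS template.
LEVELS = {1: "very low", 2: "low", 3: "moderate", 4: "high", 5: "very high"}

# Hypothetical ARMS ratings, one per assessment brief in a program.
ratings = [1, 1, 2, 3, 4, 1, 2, 5, 4, 1, 3, 2, 1, 4, 2, 3]

counts = Counter(ratings)
total = len(ratings)
for level, label in sorted(LEVELS.items()):
    share = 100 * counts[level] / total
    print(f"{label:>9}: {counts[level]:2d} assessments ({share:.0f}%)")
```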

ARMS provides academics with a practical tool for gauging the likelihood that students will misuse GenAI to complete assignments.

Program leaders are the ones who determine the categories, working from the assessment briefs submitted by module leaders. Program leaders are responsible for making these risk assessments because they have a holistic view of the overall program and a critical distance from each module, which allows them to be more objective. If there is some uncertainty about the risk level of an assignment, program leaders are encouraged to opt for the higher category.
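
A minimal sketch of that convention, assuming a simple numeric encoding of the five levels (the class and function names here are illustrative, not part of the official ARMS template):

```python
from enum import IntEnum

class ArmsRisk(IntEnum):
    """The five ARMS risk levels; this numeric encoding is an assumption."""
    VERY_LOW = 1
    LOW = 2
    MODERATE = 3
    HIGH = 4
    VERY_HIGH = 5

def resolve_uncertain_rating(candidates: list[ArmsRisk]) -> ArmsRisk:
    """When torn between plausible levels, opt for the higher category."""
    return max(candidates)

# A program leader torn between moderate and high records the higher level.
print(resolve_uncertain_rating([ArmsRisk.MODERATE, ArmsRisk.HIGH]).name)  # HIGH
```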

External examiners provide robust quality assurance, adding a layer of independent evaluation that reinforces the credibility of the assessment designs. These examiners share feedback, offer valuable insights into potential areas of improvement, and comment on ongoing advances in GenAI.

A Closer Look

Any time program leaders identify a high-risk assignment, they work with module leaders to redesign it. Revised assignments can enhance both the quality and integrity of an assessment while still allowing students to use GenAI tools responsibly.

For example, in an innovation management module, students previously were asked to discuss their understanding of Christopher Freeman’s observation that “firms must innovate or die.” The original assignment was rated at 5 (very high risk); a revised version is categorized as a 2 (low risk).

In the new assignment, students must watch a specific episode of “The Apprentice” and analyze a failed attempt to create a new product or service. Students must identify the type of innovation observed, explain its failure, and provide recommendations for better managing the innovation process. This redesign incorporates authentic learning principles, emphasizing higher-order cognitive skills that demand critical and creative thinking.

The original version of an essay assignment could easily have been completed with the help of AI, so it was categorized as very high risk. The revised version is considered low risk because it requires students to draw on lessons they have learned in class.

The new format allows students to use GenAI for writing assistance, aligning with institutional policies about GenAI tools. However, to complete the core analysis, students must rely on their understanding and application of concepts covered in the module. Students apply theoretical concepts to a realistic scenario, enhancing the practical relevance of their learning.

Notably, the redesigned assessment remains an essay, so its fundamental nature has not changed. Such an approach to assessment redesign is particularly valuable given that quality assurance policies often require extended implementation periods.

Support and Guidance

Along with the implementation of the ARMS system, the school began offering workshops designed to enhance staff awareness of the capabilities of GenAI tools, as well as the need for authentic assessments.

In these training sessions, participants discuss how to develop AI-resilient questions, detect AI-generated content, identify vulnerabilities in assessment design, refine assessments through improved rubrics and marking criteria, and maintain academic integrity. They also gain the skills necessary to adapt to recent developments in GenAI.

The school provides a wealth of other support, including curated resources and a repository of authentic assessment examples to showcase good practices and provide inspiration. These resources are available in a single comprehensive document that also includes detailed instructions for utilizing ARMS.

Evidence of Improvements

It has been a year since the University of Greenwich implemented the ARMS process, with largely positive results. Program leaders have said the tool is user-friendly, intuitive, and readily accessible. They also appreciate that it can provide assessment diagnostics, highlight good practices, and identify higher-risk tasks, all while allowing them to guard against the integrity challenges posed by GenAI.

ARMS has brought two other critical benefits:

It has noticeably reduced the number and severity of cases related to academic dishonesty. A year after implementing ARMS, the school analyzed the results. Overall, the number of assessment misconduct cases in the business school decreased from 1,057 in the academic year 2022–23 to 692 in 2023–24, a reduction of 34.53 percent in absolute numbers. Accounting for the change in student population between those years, cases dropped by 39.34 percent.
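
The gap between those two percentages comes from normalizing case counts by student numbers before comparing years. The sketch below shows the arithmetic; because the article does not publish enrollment figures, the values used here are hypothetical, chosen only so the adjusted result matches the reported 39.34 percent:

```python
cases_22_23, cases_23_24 = 1_057, 692

# Reduction in raw case counts.
absolute_cut = 1 - cases_23_24 / cases_22_23
print(f"absolute reduction: {absolute_cut:.2%}")             # 34.53%

# Hypothetical enrollments (not published in the article). Dividing cases
# by students converts counts into per-student rates before comparing years.
students_22_23, students_23_24 = 10_000, 10_792
rate_22_23 = cases_22_23 / students_22_23
rate_23_24 = cases_23_24 / students_23_24
adjusted_cut = 1 - rate_23_24 / rate_22_23
print(f"population-adjusted reduction: {adjusted_cut:.2%}")  # 39.34%
```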

The incidence of student misconduct decreased across all three categories and eight severity levels the university uses to classify offenses. For context, the university considers Category 1 to encompass minor, unintentional infractions. Category 2 covers more serious breaches and includes “work evidencing use of AI to demonstrate learning that has not been achieved.” Category 3 represents the most severe misconduct, including “extensive and/or repeated use of AI to demonstrate learning that has not been achieved.”

Faculty identify and share effective assessment practices, compare approaches, and collectively work toward optimizing their assessment strategies.

It has promoted collaboration and continuous improvement. As faculty began using the tool, they developed a common language around assessment risks, and they began discussing their assignment ratings with each other. Now, faculty identify and share effective assessment practices, compare approaches, and collectively work toward optimizing their assessment strategies.

In the same collaborative spirit, program leaders share examples of assignments that are categorized as low risk. These examples provide inspiration for other faculty, help ensure that effective practices are adopted broadly throughout the curriculum, and spark ongoing conversations about assessment design.

“ARMS has been an opportunity to look in-depth at our modules and discuss state-of-the-art assignments with module leaders,” says Stefano Ghinoi, former program leader of the BA in International Business. “Colleagues have seen it as a means for opening dialogues and critically evaluating and revising their practices. Importantly, ARMS has encouraged conversations about innovative approaches, prompting critical examination of current practices and facilitating the adoption of new assessment methods.”

“When the team first used ARMS, I think it’s fair to say that we were a little defensive, as we didn’t like the idea that the way in which we had designed our assessments might elicit cheating,” observes Samantha Chaperon, program leader of the Master’s in International Events Management. However, once team members had a better understanding of AI capabilities, they were eager to discuss creative ways to revise higher-risk assignments. Since using ARMS, she adds, team members have gained confidence in using GenAI and “developed some excellent authentic assessments, which are much less open to cheating.”

Future-Oriented Education

We put forward these four recommendations for schools implementing ARMS or their own risk assessment tools:

  1. Implement a collaborative assessment design and evaluation process that brings together module and program leaders. Module leaders tend to assign lower risk ratings to their assignments, possibly because they are attached to their own assessment briefs. When program leaders are involved in the categorization process, there is less potential bias and more balanced risk assessment. If it is impractical to engage the entire teaching team, ask program leaders to conduct the ARMS evaluation.
  2. Recognize the challenges of multipart assessments. When a single assessment—such as a portfolio—contains multiple components, each element might present a different risk level. For such assignments, program leaders should evaluate and weight each component, then calculate the true overall risk, as shown in the sketch after this list.
  3. Create a centralized resource hub of guidance and good practice examples to aid educators in designing effective assessments. The hub could feature a curated collection of authentic assessments categorized by discipline.
  4. Offer comprehensive AI training programs. Module leaders and program leaders still have varying levels of understanding about AI capabilities. To ensure all educators have a consistent and up-to-date understanding of AI technologies and their implications for education, the school should supply ongoing support, guidance, and practical examples of AI-resilient assessments.
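
For recommendation 2, one straightforward way to combine component-level ratings is a weighted average, with weights matching each component's share of the overall grade. The sketch below is an illustration under those assumptions, not a prescribed ARMS formula; the portfolio components, weights, and ratings are hypothetical.

```python
# Each tuple: (component, share of overall grade, ARMS risk level 1-5).
# The components, weights, and ratings below are hypothetical.
portfolio = [
    ("reflective journal",    0.30, 2),  # low risk
    ("data analysis task",    0.50, 4),  # high risk
    ("in-class presentation", 0.20, 1),  # very low risk
]

# Weighted average of component risk levels, rounded to the nearest level.
overall = sum(weight * risk for _, weight, risk in portfolio)
print(f"weighted risk score: {overall:.2f} -> overall level {round(overall)}")
# 0.30*2 + 0.50*4 + 0.20*1 = 2.80 -> level 3 (moderate)
```

A more conservative variant would take the maximum component level instead, in keeping with the convention of opting for the higher category when in doubt.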

For the University of Greenwich, the implementation of ARMS has fostered a reflective academic community working together to navigate the challenges and opportunities presented by GenAI. By utilizing a tool that both embraces innovative technologies and protects academic integrity, school officials aim to provide students with a comprehensive and future-oriented education.

To get access to the ARMS template and guidance document, please complete this short form.

Authors
Katharina De Vita
Faculty Head of Student Outcomes, Manchester Metropolitan University
Gary Brown
Associate Dean for Student Success, Greenwich Business School, University of Greenwich
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.