AI and academic misconduct - CS072506


An MBA student complained about the feedback they received on their dissertation, saying it was too generic to be useful. They said that an AI detection tool indicated the feedback had been partly AI generated. They asked that their work be reconsidered and that more detailed feedback be provided.

The provider rejected the complaint. It said that AI had not been used, and explained that feedback was given against a standardised marking scheme, with markers using a set template to ensure a consistent approach. The student asked for the complaint to be reconsidered, arguing that the provider had not properly considered it and that its conclusions were unreasonable. The provider rejected this request. It commented that AI detection tools do not provide conclusive or reliable results, and said there was no noticeable change in tone or style between the sections of feedback the tool flagged as AI generated and the sections it identified as written by a person.

The student complained to us. We did not uphold the student’s complaint (we decided it was Not Justified).

We decided that it was reasonable for the provider to place more weight on the staff member's account of how they had marked the work than on the results of the AI detection tool, because such tools can be unreliable. The student's complaint about the quality of the feedback was, in this instance, a challenge to the marker's academic judgment, which we can't consider under our Rules. We also concluded that the provider had followed its marking and moderation procedures: the work was double-marked, and an external examiner's report confirmed the quality of the assessment.