The traditional anti-cheating playbook is all about surveillance. To identify academic integrity breaches, proctoring platforms typically monitor noise, eye movement, mouse clicks, and more. Even sipping water can raise a flag these days!
Although it’s tempting to lean on surveillance-heavy proctoring metrics to solve today’s cheating crisis, that approach isn’t conducive to fostering integrity in online assessments. In fact, it can have the opposite effect. By adding more hoops to jump through, you may make test takers even more anxious about their performance, and more likely to seek out ways to game the system.

Building an integrity feedback loop that improves both testing security and the testing experience has little to do with increasing surveillance. Instead, it’s based on listening to test takers and frontline staff, reviewing decisions, and fixing root causes — the way one might run a quality management program.
Stop chasing cheaters and red flags. Start nurturing mutual trust by reassessing your exam integrity process.
We get it: creating fair, consistent test-taking environments is hard. Nobody wants to be the naive instructor who leaves the back door open to cheating. But running a Big Brother-style exam can actually compromise your underlying objectives.
Surveillance-first proctoring means tracking more test taker behaviors, which generates even more flags to process. And more flags often lead to:
More false positives, putting an administrative burden on proctors who have to sift through noisy proctoring data (even with automated setups).
Increased number of appeals, as students seek out justification for their flagged or annulled exams.
Student anxiety and distrust, especially if they endure a restrictive testing environment only to have their exam annulled.
Another problem with relying solely on surveillance KPIs is Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. Once you start using a metric to police students, they will try to manipulate the system to hit that specific target — often at the expense of integrity.
“We’re not actually facing a cheating crisis. We’re facing a design crisis.” — Marina Detinko, Board member of OctoProctor
So if adding more surveillance metrics won’t strengthen integrity in online assessment… what will?
Educators often lament the cheating crisis without considering the role of exam and proctoring design. Addressing flaws in your academic integrity framework is harder to do, but more rewarding in the long run.
Getting lightweight, humane feedback from your test takers should be the first step in your feedback cycle. It shows that you care and can help you troubleshoot issues with your configuration. And remember: one of the strongest factors in preventing academic misconduct is having a high-quality relationship with the instructor (as cited by 89% of students in a study by University College Dublin).
Some ways to gauge clarity signals from your students include:

In soccer, the best referees aren’t those who issue the most red and yellow cards — quite the opposite! A high number of infractions during a match is typically a sign that the referee isn’t respected and has completely lost control of the game.
Same goes with your testing environment. More flags during an exam may not mean you’re achieving academic integrity. Instead, you may be facing a badly configured system — one that not only leads to false flags, but also fails to inspire “fair play” from your test takers.
To safeguard your exam integrity, you’ll want to carefully design your review processes and monitor decision quality. False flags arise from many understandable, real-life scenarios: a student may face a technical issue, have an approved accommodation, or experience an unforeseen event.
To get to the bottom of false flags, you’ll want to track these non-surveillance metrics:
The final stage of the integrity feedback loop is optimizing your proctoring and exam configuration. This is where you get into the nitty-gritty of what actually improves integrity outcomes.
Clarity can go a long way toward improving integrity in online assessment, as sometimes students break the rules without intending to. Revisit your exam instructions, as well as your policies (such as allowed materials) and pre-exam checklist.
A smooth online exam experience starts with strong design. The platform UX should account for potential user problems and questions. For example:
Online proctoring systems can unfairly flag neurodivergent and disabled learners. Make sure your configurations have flexible settings to allow diverse accommodation tweaks that aren’t just an afterthought, such as:

Reduce cheating through assessment design. For example, you might allow for an open-book exam or one A4 note per student. Another tactic is to unblock certain websites, or even opt for continuous assessment throughout the course.
Handling incidents in a fair, consistent, and timely manner is essential for creating integrity. Check your flag configuration, escalation rules, admin accessibility, and reviewer training to ensure there’s a well-defined process for incidents and appeals — especially for high-stakes online exams.
(From here, you can restart the feedback loop for continuous improvement in your education assessment!)

Of course, tracking metrics isn’t necessarily an evil to be avoided. It’s useful to measure non-surveillance KPIs that get to the heart of clarity, decision quality, and incident optimization. (See our recommendations above!)
Additionally, it’s worth contextualizing your surveillance-related proctoring metrics to better understand your proctoring configuration. For example:
“If institutions continue to prioritize surveillance alone, they risk undermining the very foundations of openness, flexibility, and autonomy. Conversely, if proctoring is reimagined as part of a broader ecosystem of trust, equity, and ethical assessment design, it can contribute to both academic credibility and meaningful learning.” — Mncedisi Christian Maphalala, Ntombikayise Nkosi
Don’t be the Eye of Sauron that hyper-monitors your students. Instead of pursuing more surveillance, use our three-layer framework as a starting point to design your own integrity system that aligns with your institutional values — and drives more effective, equitable online assessment.
At OctoProctor, we can support you in creating integrity-driven proctoring systems and configurations that go beyond surveillance. Reach out to discuss your online assessment challenges with us. 🐙
With some context, OctoProctor can help you pinpoint which tracking metrics will work for your exam and tailor a demo that actually fits your circumstances.
Talk to us!

Integrity in online assessments means designing exams, policies, review processes, and proctoring setups so that results reflect a student’s actual performance rather than confusion, loopholes, or avoidable misconduct.
Integrity systems are the full set of policies, tools, review practices, and communication methods an institution uses to support fair assessment. In practice, that includes exam design, student guidance and agency, reviewer decision quality, accommodation workflows, and proctoring configuration.
Measuring cheating only through surveillance signals can be misleading. A stronger approach is to combine contextualized proctoring metrics with decision-quality indicators like overturn rate, inter-rater agreement, evidence sufficiency, and appeal outcomes.
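To make these decision-quality indicators concrete, here is a minimal sketch of how two of them could be computed from review records. The record shape (a list of dicts with an "overturned" key) and the function names are hypothetical, for illustration only; inter-rater agreement is shown as Cohen’s kappa, a standard chance-corrected agreement statistic.

```python
from collections import Counter

def overturn_rate(decisions):
    """Share of flag decisions later reversed on review or appeal.

    `decisions` is a list of dicts with a boolean "overturned" key
    (a hypothetical record shape, not a real OctoProctor API).
    """
    if not decisions:
        return 0.0
    return sum(d["overturned"] for d in decisions) / len(decisions)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two reviewers, corrected for
    the agreement expected by chance alone."""
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both reviewers labeled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance overlap given each reviewer's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)
```

A kappa near 1.0 suggests reviewers apply the evidence rules consistently; a low kappa or a rising overturn rate points at a review-process problem rather than a student problem.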
Proctoring metrics are most useful when they help institutions spot design or process problems rather than mechanically count suspicious events. On their own, they do not tell you much about integrity outcomes unless they are interpreted alongside review quality, student clarity, and incident context.
You can reduce cheating through assessment design by writing clearer instructions, using question formats that are harder to outsource, allowing appropriate open-book conditions where relevant, spacing assessment across a course, and aligning rules with what the task is actually meant to measure.
Useful exam design strategies for academic integrity include open-book formats, continuous assessment, adaptive testing, clearer task instructions, question variation, customization and localization for your cohort, transparent and regular communication about integrity, and exam environments that reduce confusion without lowering standards.
Good academic integrity frameworks do not rely solely on surveillance. They balance trust, clarity, review quality, accessibility, and operational consistency to support integrity in online assessments before and after the exam session.
Continuous improvement in education assessment means using student feedback, incident reviews, appeal outcomes, and configuration data to keep refining the exam process instead of treating every flagged event as a student-only problem.