Education Notes bring mathematical and educational ideas forth to the CMS readership in a manner that promotes discussion of relevant topics including research, activities, issues, and noteworthy news items. Comments, suggestions, and submissions are welcome.
John McLoughlin, University of New Brunswick (johngm@unb.ca)
Kseniya Garaschuk, University of the Fraser Valley (kseniya.garaschuk@ufv.ca)
This article is first and foremost a product of the authors’ experiences, but we would like to acknowledge the many colleagues in the mathematics community who have shared with us their stories, their frustrations and successes, and their opinions, support, and feedback.
The dramatic developments during the COVID-19 pandemic have changed many well-established teaching practices, protocols, and principles, literally overnight. In this note, we reflect on two interrelated issues that have been strongly highlighted by those changes: final exam practices and the principles of academic integrity.
The current pandemic interrupted all of our daily routines. In mid-March, most universities informed their faculty that instruction would be remote for the rest of the term. Some institutions gave their instructors a week to move courses online; others gave them a couple of days. Teaching and Learning Centres across the country had never seen such high demand for their support and assistance. Departments scheduled additional meetings, funds were allocated to meet the technical requirements of moving teaching online, and dozens of targeted professional development workshops were put together overnight and attended the next day. We received expedited introductions to Blackboard Collaborate and discussion boards, BlueJeans and Zoom, Piazza, Kaltura, and more. Classes stumbled into online mode, our students met each other’s pets and children, and we asked “Can you hear me now?” too many times in the first week, but we figured it out. The last day of classes came and went. Then we needed to think about the final exams.
Having to administer remote unsupervised exams, mathematics instructors across Canada faced a difficult question: What (if anything) had to change? Most of us had given final exams, but never online ones. Should we adjust the format, the intent, the goals? Should we be concerned with students cheating using the Internet, or class notes, or each other? And if so, what should we do to prevent it?
Our recent conversations with colleagues across the country left us with the impression that, for many members of our community, the Spring 2020 examination period was a very stressful experience. Marking exams and deciding on final grades is the most unpleasant part of an instructor’s job even in the most normal of times. This spring, many of us were hit hard by the realization that some of our students were taking advantage of unsupervised final exams to blatantly cheat. For example, we heard a story about a colleague watching on-screen as their exam questions popped up on the Chegg website shortly after the exam started, and another about an instructor who found a cluster of identical answer sheets for a test consisting of a long list of multiple-choice questions. True, there are some funny elements to those stories: one of the Chegg postings included the student’s name on the uploaded image; in the other incident, the students didn’t realize that the order of the questions was not the same for every student.
Still, it was disappointing and really painful to realize that in our classes we have students who, for whatever reason, didn’t follow their teachers’ and institutions’ advice about academic integrity. We know colleagues for whom this examination period turned into a true nightmare, complete with multiple academic dishonesty reports and long conversations with students who stubbornly denied obvious cases of cheating. Possibly the most disheartening story we heard over the last several weeks was from an instructor who caught cheaters in their “Mathematics for elementary teachers” class. The class is designed for future teachers: the instructor’s task is to facilitate the learning of mathematical concepts and the planning of their instruction, as well as to model some basic pedagogical values. For us, it is totally mind-blowing that a prospective teacher would cheat on an exam and thus completely disrespect the core values of the program they signed up for (or plan to enter). While the lesson learned from this experience will likely stay with the students forever, it will also remain with their instructor.
Overall, our feeling is that the math teaching community’s trust in our students has been part of the collateral damage of COVID-19.
Before we examine how this experience will affect our future teaching, let us consider why we encountered these problems in the first place: have our teaching practices, and particularly our assessment practices, contributed to this mismatch between our expectations and reality?
In our experience, most traditional first-year math final exams have a similar structure: the exam consists of a combination of skill-testing, problem-solving, and extension questions. Skill-testing questions, like finding the derivative of a given function, are one-liners that target a particular skill or concept. Problem-solving questions, like using a linear approximation, tend to be application questions that test solving strategies and standard algorithmic approaches. Extension questions (like using the Intermediate Value Theorem and the Mean Value Theorem to establish the number of solutions of a transcendental equation in a given interval and then using Newton’s method to estimate those solutions) are intended for A students to complete, as they test both the depth and breadth of a student’s knowledge. Unsurprisingly, it is mainly in this last category that we tend to see interesting, inventive, challenging questions (a sketch of one such question follows below). The rest of the test is composed of fairly standard, predictable questions.
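To make the extension category concrete, here is a sketch of one such question together with an outline of its solution. This is our own illustrative example, in the spirit of the exams described above, rather than a question taken from any particular exam.

Question. Show that the equation $e^{-x} = x$ has exactly one solution in the interval $[0,1]$, then use Newton’s method to estimate that solution.

Solution sketch. Let $f(x) = e^{-x} - x$. Since $f(0) = 1 > 0$ and $f(1) = e^{-1} - 1 < 0$, the Intermediate Value Theorem guarantees at least one solution in $(0,1)$. Since $f'(x) = -e^{-x} - 1 < 0$ for all $x$, two distinct solutions would, by the Mean Value Theorem, force $f'(c) = 0$ at some point $c$ between them, a contradiction; hence the solution is unique. Newton’s method,
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{e^{-x_n} - x_n}{-e^{-x_n} - 1},
\]
starting from $x_0 = 0.5$ gives $x_1 \approx 0.5663$ and $x_2 \approx 0.5671$, so the solution is approximately $0.567$.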
Standard, predictable questions were exactly what made our lives difficult during the Spring 2020 examination period. They were easily Google-able; WolframAlpha would provide a step-by-step solution that, if presented on an exam paper, would receive full marks; when asked, an “expert” from one of several popular sites would post solutions to all of those questions within 30 minutes. And voilà: we were scratching our heads wondering how Alice and Bob, students who had struggled throughout the course, were able to correctly use the chain rule multiple times and obtain such a perfectly simplified expression as their identical final answers! The issue, it seems, is twofold: we don’t trust our students not to cheat, and we don’t trust our exams not to be easily cheatable.

The inability to supervise students writing exams in remotely delivered courses has become one of the biggest concerns for faculty and administration.
In response, the dominant sentiment at the moment appears to be, in a somewhat extreme manner, to use technology to police remotely written exams. The authors of this note belong to the group of academics who are still learning about the various artificial-intelligence-driven proctoring platforms, such as ExamSoft and Proctortrack. We listen, with uneasy feelings, to accounts of proctoring platforms that follow students’ eyeball movements as a way of preventing or reporting cheating. With our colleagues, we discuss how we might start using a videoconferencing platform to monitor our students’ actions during examinations. And we have no doubt that our institutions will find funds to license one or more proctoring platforms, to hire technicians who will manage the software from our side of the table, and to reshuffle the current job descriptions of educational developers (maybe there will even be a need for another associate director position) to make sure that we have in-house experts on distance proctoring. And let us not forget the consulting fees of the experts who will help us choose the right platform and the legal fees of the lawyers who will make sure that our policies are aligned with the relevant privacy laws. In the end, a math instructor will be assured that the probability that Alice and Bob did all the work of finding a derivative on their own is reasonably high. But this comes at a high cost, as it shifts the expectation from shared trust to rigid and possibly intrusive preventative measures.
We wonder if our institutions will invest similar amounts of energy and resources in educating our students that cheating on exams is not good for them on both professional and personal levels. Yes, in our course outlines we post a paragraph about academic integrity, but do we really expect students to read it and take it seriously? Our math classes, particularly those at the entry level, are diverse in every way one can think of. Do we have a reason to believe that each of our students has had the opportunity to fully understand that the very essence of academic integrity is to be the glue that keeps us all (students, faculty, and administration) on the same side? What do we really do to ensure that students adhere to the fundamental values of academic integrity: honesty, trust, fairness, respect, and responsibility?
There are many reasons why students may cheat, and there is a large body of research literature examining all types of factors. But what is universally true is that our reaction to academic misconduct is always punitive. We diligently include links to the university’s academic misconduct policy in our syllabi, we warn students of the consequences, and we follow the steps of the procedure for reporting academic misconduct. But we do not seem to educate them on why academic misconduct is not acceptable to us and why it should not be acceptable to them. In the worst possible way to teach, we list the academic integrity rules without explaining them; no wonder, then, that some of our students don’t follow them.
Here is an example from an episode of the Teaching in Higher Ed podcast, inspired, in turn, by Cheating Lessons by James Lang, who asks faculty members a simple question: have you sped on the way to work today? We all know it’s breaking the rules, but the majority of us do it if we know we can get away with it. Moreover, the less we understand or agree with a specific rule, the more incentivised we are to break it. After all, going with the flow of traffic even though it’s 20 km/h above the speed limit is “safer”, and why is it a 60 km/h zone in the middle of farmland anyway? The same goes for academic integrity: not only are the definitions and boundaries not universally self-evident, but rules that come without explanations also tend to acquire alternative interpretations and get broken in support of those self-provided justifications. When it comes to the mathematical content of our courses, we strongly advocate explaining rules and laws and making sense of them; the same should extend to other course components, such as academic integrity.
It appears that when academic integrity becomes part of the culture and a norm, it is less likely to be broken, as can be seen at institutions that rely on honour codes. Several large studies (most notably, Bowers’ “Student Dishonesty and Its Control in College”, published in 1964, and the subsequent work of McCabe and Treviño) find that there is a lot less cheating in honour-system environments, albeit still far from zero. Can we instill similar values and expectations in our publicly funded institutions? Or even in the span of one course?
What worries us is that in our conversations about academic integrity, our colleagues are overwhelmingly concerned about students’ cheating, without considering the possible responsibility of the larger academic community for this situation. Academic integrity is an ethical policy that is not specific to students: it includes students, faculty members, staff, the administration, and the institution as a whole.
For example, do our institutions follow the spirit of their own academic integrity policies when they enroll students who are not ready to take a fast-paced, topic-packed math course and then jam them into a class several hundred students strong? Or when they hire a large number of contract faculty and expect them to teach under the stress of a looming contract expiration date? Or when they put inadequately prepared teaching assistants in the position of advising or teaching students or marking their assignments?
Maybe on a more personal level, have we always followed the fundamental values of academic integrity ourselves? Has it happened that a math instructor was late for their class or came to class unprepared? Broke a promise to students or didn’t follow the institutional guidelines? Showed their own biases, cultural or gender-related, for example, when communicating with students or when evaluating students’ work? Or got into the habit of recycling old assessments year after year? Or taught in the same way for years, putting no effort into learning about new teaching techniques and technologies, and thus never re-evaluated their own teaching practices? Or, towards colleagues, allowed racial or gender biases to be present in hiring committee decisions?
In short, our view is that only by investing in the meaningful education of all involved in the learning and teaching processes may our communities move towards the full implementation of the principles of academic integrity.
Coming back to last term’s final exams, we mainly saw two approaches to minimizing student academic misconduct.
Some instructors focused on preventing students from cheating, with measures ranging from extreme to mild. We have heard of students being asked to install two cameras so that the instructor had a view of them, their desk, and their computer screen. We have heard of instructors imposing rigid time constraints and releasing questions one at a time, each to be submitted within a short timeframe. Some instructors purchased access to Chegg and similar websites to monitor posted questions. Some compared students’ handwriting to previously submitted work. Some held post-exam oral sessions, asking students to explain one question from the test in real time. Some asked students to sign an integrity contract that they promised to uphold.
The authors of this note are among those instructors who instead tried to make their exams less cheatable. Our main idea was to include in our exams a number of conceptual problems of various levels of difficulty. Motivated by the fact that, over the last several weeks, members of the general public have been forced to pay unprecedented attention to various forms of mathematics (think of the endless mentions of “flattening the curve”, for example), we included in our exams problems inspired by current events. For example, a large portion of one final exam was designed around the analysis of the spread of COVID-19 and its representation in the news; a big portion of the other final was built on publicly available data on the 2000 transmission of West Nile virus. This gave us the opportunity to state our calculus questions through only the verbal and graphical representations of the related functions. We want to emphasize that these scenarios are not one long-answer or extension question; rather, they are broken up into parts, each with a series of questions: some technical, some conceptual, some interpretive. So even the questions that aimed to test students’ procedural knowledge were presented as parts of the story. Our message to students was simple: we would much rather have you show us that you are able to analyse (mis)information using mathematical tools than that you can recite the limit definition of a derivative.
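To give the flavour of the format, here is a sketch of what one part of such a scenario might look like. The numbers and wording below are our own illustration, not an excerpt from either of our actual exams.

Scenario. A graph shows $N(t)$, the cumulative number of confirmed cases in a region, $t$ days after the first reported case; no formula for $N$ is given.

(a) (Technical.) Given that $N(30) = 1200$ and $N'(30) = 80$ cases per day, use a linear approximation to estimate the number of cases on day 33.

(b) (Conceptual.) A news report claims that the region has “flattened the curve”. Express this claim in terms of the signs of $N'$ and $N''$.

(c) (Interpretive.) Another outlet plots the same data on a logarithmic scale and claims that the growth is “clearly slowing”. What feature of the log-scale graph would support this claim, and is it consistent with your answer to part (b)?

(For the record: the linear approximation in (a) gives $N(33) \approx 1200 + 3 \cdot 80 = 1440$ cases, and “flattening the curve” in (b) corresponds to $N' > 0$ with $N'' < 0$.)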
As we recently discovered, these types of questions are known in history and philosophy as “stimulus questions”: students are provided with a situation or context and are asked to demonstrate their critical thinking skills by analysing the particular scenario using the course’s core concepts. Not only do these stimulus questions target the learning outcomes we want to assess, they also present a learning opportunity for students to once again hone their skills in applying mathematical thinking in an authentic setting. More importantly for this discussion, these problems are also highly non-cheatable: they will be hard for a non-student or a generic online tutor who is unfamiliar with the exact course content and who tends to be proficient in standard techniques rather than in analysing math in context. One drawback is the creation of these problems: they are time-consuming to design, and they cannot be reused once released. But it is our responsibility as instructors to put time and effort into developing the course, whether it is our first iteration or our twentieth.
We should note that course assessments should be aligned with the spirit in which the material is taught throughout the term, so that students can practice those skills and not perceive the exam as unfair or unreasonable. As we exercised broader mathematical thinking throughout the course, a number of students told us that word problems were better because “you know whether your answer is way off or not”. We trusted our students to buy into a “stimulus exam”, and our students indeed reacted well to the format. Some parts of the exam were best done by students who had generally been weaker on procedural questions during the term. This is not terribly surprising, and it confirmed our belief that while procedural fluency requires a lot of practice, conceptually rich questions are more intuitive for students (and their instructors), particularly when presented in real-life settings.
The pandemic did not create new issues with how we structure our final exams; it simply highlighted what has been present in our practices for a long time. It is also important to understand the situation we find ourselves in and the difference between an emergency switch to online teaching and actual online learning. As a popular, now-folklore tweet circulating over the last couple of weeks points out: we are not working from home, we are staying home during a pandemic trying to work. So our solutions to the emerging issues with giving final exams online in the Spring/Winter 2020 term were, unavoidably, ad hoc. Moving forward, however, we find ourselves in the new reality of planning fully online courses for the summer and possibly fall terms. Instead of doing damage control, we need to take a more holistic approach to our future practices. General recommendations from online educators suggest replacing high-stakes exams with frequent smaller assessments, which not only provide continuous feedback to students, but also reduce the pressure and the perceived necessity for students to cheat. Is a traditional final exam worth 40–60% of the total grade necessary? What are some other effective and efficient options for grade distribution and/or comprehensive course assessment? Let’s not dismiss these questions as quixotic because of large class sizes, lack of TA support, time, and other constraints. Instead, let us entertain some alternative thoughts, engage in the creative thinking we so often ask of our students, and come up with reasonable solutions. We are, after all, professional problem solvers.
In summary, the COVID-19 pandemic has changed our everyday and professional routines. In our view, the pandemic has also underlined the urgent need for our mathematics teaching community to re-think how we assess our students’ academic progress and how we educate them, and ourselves, to fully accept and follow the fundamental values of academic integrity: honesty, trust, fairness, respect, and responsibility.
*Illustration co-creator Bethani L’Heureux is a young Cree artist. She recently graduated from Alpha Secondary School in Burnaby, BC, and eventually plans to pursue a career in voice acting or art.
The editors and authors would love to hear your responses to and comments on this article. Please email us at kseniya.garaschuk@ufv.ca, vjungic@sfu.ca, and johngm@unb.ca.