Pre-pandemic versus post-pandemic: how do the students actually compare?

Education Notes
September 2024 (Vol. 56, No. 4)

Education Notes bring mathematical and educational ideas forth to the CMS readership in a manner that promotes discussion of relevant topics including research, activities, issues, and noteworthy news items. Comments, suggestions, and submissions are welcome.

John Grant McLoughlin, University of New Brunswick (johngm@unb.ca)
Kseniya Garaschuk, University of the Fraser Valley (kseniya.garaschuk@ufv.ca)

Covid-19 happened. Emergency and non-emergency online teaching ensued. Then, in fall 2021, we were allegedly “back to normal.” But is there a “back” in our new “normal”? Our current environment is nothing like the old: as a community, we are more comfortable with online tools such as Desmos, Padlet and virtual whiteboards; we make better use of learning management systems; we regularly hold virtual and in-person meetings and office hours; for better or for worse, our institutions now assume we can easily switch modes of teaching when necessary (e.g., when a snow day brings all in-person classes into a virtual environment). In 2020, we reconsidered our assessment strategies and thought deeply about what we wanted our students to learn once open-book exams and online supports became part of the experience. In fall 2022, though, pandemic restrictions subsided and many instructors returned to fully pre-pandemic methods even though our students did not. I could lament the lost opportunities and the lessons we learned and then forgot, but my intent with this piece is to shed light on another issue I’ve observed.

In 2021 and 2022, students arriving at university were coming out of the pandemic-induced online high-school experience. Their education was disrupted by inconsistent teaching quality, a lack of high-quality resources, reduced peer collaboration, decreased access to various supports, mental health struggles, lower motivation and many other factors. As a result, the argument goes, students transitioning to university in those years had gaps in fundamental precalculus knowledge. We have all heard this argument, haven’t we? While I agree that many new, unexpected and unprecedented factors affected the student experience, I am not willing to accept the conclusion that those students have weaker procedural skills without seeing the data. In all honesty, I am annoyed that so many of my colleagues recite the above argument and its alleged conclusion with no concrete evidentiary support (anecdotal observations do not count). We are mathematicians: when we pose a conjecture, we look for a proof or a counterexample. Let us not settle for confirmation-biased this-new-generation-is-not-the-same-as-the-one-before-them conclusions. I do not personally care which way the conclusion goes, but I need it to come from data, not thin air. And we certainly have the data.

At the University of the Fraser Valley, we run a Calculus Readiness Test (CRT), a voluntary diagnostic test given to students in the first week of classes. The test is designed to reveal students’ weakest areas of precalculus, while the accompanying custom Precalculus Review Package (PReP) is intended to support students in reviewing those areas. I developed the CRT when I first started at UFV in 2016, drawing on my previous experience with this type of assessment at the University of British Columbia. The test went through three iterations of student interviews and adjustments before we settled on the final version. Over the last seven years, PReP has undergone various improvements and is currently supported by comprehensive videos of worked precalculus examples. Instructors can use the resource in their calculus courses to recommend just-in-time review in support of various topics. In the meantime, the CRT has stayed the same; while it is due for a refresh, it has provided me with a very useful time-capsule comparison between pre-pandemic and post-pandemic student cohorts.

Before we discuss the results, let’s talk about the context and format of the diagnostic. At UFV, as at many post-secondary institutions, first-year calculus courses have broad prerequisites, on the order of “B or better in Precalculus 12.” As such, the courses attract diverse audiences, and incoming CRT scores tend to be quite varied. The current version of the CRT consists of 19 multiple-choice questions to be completed in 50 minutes. It is a self-assessment and hence is not invigilated, but it is also normally not worth any grades (I award 1% for completion, as do some other instructors). The test is scored across five sections: Algebra, Equations/Inequalities, Graphs, Functions, Trigonometry. Each question counts toward one or two of the sections. After the test, students receive a score for each section and are then invited to practice their precalculus background skills using PReP, which is organized into the same five sections.
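To make the scoring concrete, here is a minimal Python sketch of how per-section percentages could be computed. The question-to-section mapping below is invented for illustration; it is not the actual test key.

```python
# Sketch of the CRT section scoring described above. The mapping is
# hypothetical: the real key assigns each of the 19 questions to one or two
# of the five sections, for a maximum total score of 25.
SECTIONS = ["Algebra", "Equations/Inequalities", "Graphs", "Functions", "Trigonometry"]

QUESTION_SECTIONS = {
    1: ["Algebra"],
    2: ["Algebra", "Equations/Inequalities"],
    3: ["Graphs", "Functions"],
    # ...entries for the remaining questions would follow
}

def section_scores(correct):
    """Percentage score in each section for one student, given the set of
    question numbers the student answered correctly."""
    earned = {s: 0 for s in SECTIONS}
    possible = {s: 0 for s in SECTIONS}
    for question, sections in QUESTION_SECTIONS.items():
        for s in sections:
            possible[s] += 1
            if question in correct:
                earned[s] += 1
    return {s: 100 * earned[s] / possible[s] for s in SECTIONS if possible[s] > 0}
```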

Here is the first result: below are the average CRT percentage scores per category, in both graphical and numerical format, of students taking the assessment in 2018 and in 2022.

Out of the test’s maximum score of 25, I excluded students who scored 6 or less, as closer inspection reveals that many students with very low scores attempt only the first few questions before closing the test. With this cut-off, my dataset consisted of 153 students out of 302 in 2022 (50.6%) and 168 students out of 286 in 2018 (58.7%).
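For concreteness, here is a rough Python sketch of this cut-off and of the question-level comparison discussed below, assuming the responses sit in a pandas DataFrame with a total-score column `total` and 0/1 correctness columns `q1` through `q19`. The column names are my own, not the actual dataset’s.

```python
import pandas as pd

CUTOFF = 6  # out of a maximum score of 25

def drop_low_attempts(cohort: pd.DataFrame) -> pd.DataFrame:
    """Exclude students at or below the cut-off, who typically attempted
    only the first few questions, and report how many were kept."""
    kept = cohort[cohort["total"] > CUTOFF]
    print(f"kept {len(kept)} of {len(cohort)} students "
          f"({100 * len(kept) / len(cohort):.1f}%)")
    return kept

def per_question_change(cohort_2018: pd.DataFrame, cohort_2022: pd.DataFrame) -> pd.Series:
    """Percentage-point change per question in the share of correct
    answers, 2022 minus 2018, sorted from largest drop to largest gain."""
    qcols = [f"q{i}" for i in range(1, 20)]
    return (100 * (cohort_2022[qcols].mean() - cohort_2018[qcols].mean())).sort_values()
```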

As the data shows, while the students performed essentially the same in three of the categories, the 2022 cohort did significantly better in Algebra and Equations/Inequalities. Going down to the question level, we see that the 2022 cohort did better on ten questions and worse on eight. However, where they did better, they did significantly better. Large differences occurred on four of the questions, with increases of over 12% in the average score, including a 25% increase on the following question:

Given the equation 7 = (2y + x)/(xy), solve for x.
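For reference, here is my own worked solution (it was not shown to students): multiplying through by xy and collecting the x terms gives

```latex
7 = \frac{2y + x}{xy}
\;\Longrightarrow\; 7xy = 2y + x
\;\Longrightarrow\; x(7y - 1) = 2y
\;\Longrightarrow\; x = \frac{2y}{7y - 1}, \qquad y \neq \tfrac{1}{7},\ xy \neq 0.
```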

The only question on which the 2022 cohort performed significantly worse was the following qualitative question from the Graphs category, with a 22% drop (other drops were at most 5%):

Choose the graph of distance versus time that best fits the following situation: when jogging, I start off slowly, build up to a comfortable speed and then gradually slow down as I near the end of my run.

According to this data, our post-pandemic students may not be that bad after all. In fact, when it comes to fundamental precalculus skills, on average they are the same or better than pre-pandemic students. Let’s sit with that for a minute while I write about something else.

Hannah Fry recently talked about data from a study that the OkCupid dating website ran about a decade ago. I recommend the video, but here is a summary of the original findings, which appear in the book “Dataclysm” by OkCupid co-founder Christian Rudder. The website collected data on age attraction: heterosexual users (whose own ages were recorded) were asked the age at which they find the opposite sex most attractive. For female respondents, age attraction scaled with age: women tended to cite ages of attractive men close to their own age. The situation was different for male respondents, whose answers hovered around the early 20s regardless of their own age.

What is your first explanation for these findings?

Don’t believe everything you first think. Any data is complex, especially self-reported human data. There are a number of ways to look at these findings (and I am looking forward to seeing how my students will explain these plots). Here are some possible interpretations. Perhaps men never update their model: what they found attractive in their 20s, they find attractive in their 60s; regardless of their age, appearance and experiences, men’s attraction preferences sit at some fixed point in the past, likely the moment those preferences first formed. Another way to view the data is to recognize that the question could be interpreted differently by different groups, who contextualized the requested age as either the age of a potential partner or the age of an ideal magazine model. Biological underpinnings of reproductive struggles, social representations of attractive humans, biases in data collected from a narrow set of respondents, our own biases from past findings and even disappointments: all of these factors affect how we perceive and then report results. So don’t believe everything you think.

Much like the OkCupid data, the CRT data presented above can be explained and interpreted in different ways. We could say that post-pandemic students are better at multiple-choice questions and online assessments, having had more practice with them. We could say that the answer patterns indicate post-pandemic students are better at procedural questions and worse at qualitative ones, justifying it by the fact that pre-pandemic students had more exposure to collaborative peer work, which develops interpretive and communication skills. We could find ways to make this data look better or worse for either cohort. We could look for confirmation of whatever we want the data to say. But the fact remains that the average performances of the 2018 and 2022 cohorts look very much alike. The post-pandemic students are not fundamentally worse.

We need to allow stories to change. Pre-pandemic education wasn’t better or worse – it was different. Online education isn’t better or worse – it is different. Our students aren’t better or worse prepared – they are prepared differently. Assessment practices, use of AI, accessibility of education – it has all changed and we need to accept the change before we can adapt to it. Let us be open to the diversity of student experiences and strive to build an inclusive educational environment. Let us look at the data and have honest conversations about our teaching practices and perceptions. Let us not be stuck in the glorious past because our students are coming from an unprecedented present.

Email the author: Kseniya.Garaschuk@ufv.ca