Doing The Math
Good old The Free Press (TFP) featured another story this week that made me despair for the future. I know, I know…
There exists something that mere mortals like you and me have no way of understanding: the AMC 12, the first in a series of math contests that lead to the International Mathematical Olympiad. This is not like the math section of the SAT or some other standardized test. It is the start of a grueling series of tests taken only by the most mathematically gifted high school students. The website for this test says the following:
Founded in 1950, the AMC is the preeminent math competition for students K-12. Today, over 300,000 students in 50 states and over 30 countries take the AMC to bolster their confidence and passion for math.
The AMC 12 refers specifically to the version of the test open to students up through grade 12 – that is, high school seniors and younger. It is a 75-minute test with 25 multiple-choice questions. Those questions are arranged in rough order of difficulty; the last few are impossible for all but a handful of top students. For many years, the score needed to make the distinguished honor roll – meaning the top 1% of scorers – hovered mostly in the 120s, with a perfect score being 150. This is one hard test, folks.
[Note – I am good at math. It was this recognition in university that got me into economics, which in its modern form uses a lot of fairly sophisticated math. But ‘good at math’ does not get you a score of 130 on the AMC. That requires being truly gifted at what is an unusual activity – much like being a violin virtuoso, I suspect.]
Then, in 2023, when stolen (or bought) exams started to appear online before the test, the score needed to make the distinguished honor roll went from the 120s to the 130s.
This past year, something even bigger changed – getting into the top 1% required a perfect score of 150, as some 300 perfect scores were recorded.
What happened? Two things, apparently. Once again, some folks got hold of the test questions ahead of time. On top of that, they put the questions into an AI bot to be solved. I here quote from the TFP article:
What is happening in the world of competitive math is a warning for the entire education system. A combination of technologies has turned cheating into an industry. In the past, tests were stolen by unscrupulous proctors and sold online. Now with AI tools, if an exam is stolen, it can be instantly solved—and sold along with the answers. Or the cheating can be done in real time, right from the testing room, with cheaters discreetly photographing questions and sending them right to a chatbot.
Now, it seems to me that last method could easily be stopped by not allowing any electronic devices into the testing rooms, and I have to wonder why that is not already the rule, if indeed it is not.
And, to be sure, the first method of cheating can be stopped if the exams are not allowed to be seen by anyone until the moment the test is to be taken. Rather more difficult to guarantee, I suspect, for a test that is taken in a thousand different places.
I quote again from the article:
The MAA declined to answer questions for this story; a spokesperson noted that the MAA “continuously reviews and strengthens its practices.”
The MAA – the Mathematical Association of America – is the group that formulates the AMC 12. This test is then used – in part – to invite students to participate in the American Invitational Mathematics Examination (AIME), a selective test open to top high school math students. That in turn leads to the International Mathematical Olympiad mentioned above.
I have a hard time deciding which part of this story depresses me more: the fact that proctors of this exam, given the question papers so they can administer the test, apparently sell them online ahead of time to willing test-takers, or the fact that they can now sell them along with the chatbot-generated answers – and no doubt charge much more.
An AI enthusiast – and there are multitudes of them – might say about that second point: who cares about finding top-flight math students if AI can solve all the problems anyway?
Yea….who cares?
After all, according to the TFP article:
Last summer, ChatGPT-maker OpenAI said that its latest tool could solve 94.6 percent of [the higher-level] AIME problems; now the best AI tools can usually solve the entire exam.
One last quote from the TFP article:
An admissions director of a highly selective math program for high school students told me, “Ironically, now when a student reports a high AMC score in their application to us, it hurts them in our admission process. We become more suspicious of them.”
“Sadly, the score distribution suggests that among the top scorers this year, there are more cheaters than honest people,” the admissions director added.
Yea, but c’mon – who needs honest people?
Epilogue:
After writing this post, I went back to the TFP article to read the comments from readers that appeared under it. This one expresses a view of AI that I mentioned above, and ends with a real kicker….
The important question here is, why would students want to devote time and energy to learning to solve these problems when the “AI” (more exactly, massive-scale plagiarism) programs are ubiquitous and effective? I’m a part-time college prof (retirement job) and this is something we’re all dealing with; academics seem more pointless than ever right now.
If by academics he means the people, regular readers will know I already find many, if not most, of them pointless. But if he means the study of the universe by curious people…god help us.