Education and publishing giant Pearson is drawing criticism after using its software to experiment on over 9,000 mathematics and computer science students across America. In a paper presented Wednesday at the American Educational Research Association, Pearson researchers revealed that they tested the effects of encouraging messages on students who used the MyLab Programming educational software during 2017's spring semester.
Titled “Embedding Research-Inspired Innovations in EdTech: An RCT of Social-Psychological Interventions, at Scale,” the study placed 9,000 students using MyLab Programming into three groups, each receiving different messages from the software as they attempted to solve questions. Some students received “growth-mindset messages,” while others received “anchoring of effect” messages. (A third control group received no messaging at all.) The intent was to see whether such messages encouraged students to solve more problems. Neither the students nor their professors were ever informed of the experiment, raising concerns about consent.
The “growth-mindset messages” emphasised that learning a skill is a lengthy process, cautioning students who answered incorrectly not to expect immediate success. One example: “No one is born a great programmer. Success takes hours and hours of practice.” “Anchoring of effect” messages told students how much effort is required to solve problems, such as: “Some students tried this question 26 times! Don’t worry if it takes you a few tries to get it right.”
As Education Week reports, the interventions offered seemingly no benefit to the students. Students who received no special messages attempted to solve more problems (212) than students in either the growth-mindset (174) or anchoring (156) groups. The researchers emphasised that this could have been due to any of a variety of factors, as the software is used differently in different schools. However, educators who spoke to Education Week were understandably more alarmed by Pearson placing thousands of unwitting students in A/B testing for its products.
“It’s concerning that forms of low-level psychological experimentation to trigger certain behaviours appears to be happening in the ed-tech sector, and students might not know those experiments are taking place,” Ben Williamson, a professor at the University of Stirling, told the publication.
In a statement to Gizmodo, Pearson emphasised that the experiment was “an effort to improve student success in higher education courseware”:
The Education Week article, “Pearson Tested ‘Social-Psychological’ Messages in Learning Software, With Mixed Results,” mischaracterises a relatively minor product update by extracting technical language from research that, when taken out of context, and paired with words like “experiment,” conveys a malicious intent.
As with any changes to a product, we evaluate the introduction of changes to determine if they require additional ethical or legal review or consultation. In this case, the introduction of feedback messages about how to improve student success was determined to be a part of normal educational practice.
The experiment has troubling parallels with at least one of Facebook’s many privacy cataclysms. In 2014, Facebook experimented on almost 700,000 users, changing what they saw in their news feeds and recording the impact on their moods. Users were never told their moods were the subject of a psychological experiment, and critics called that experiment unethical.
“Randomised control trials like this, at scale and embedded into widely used commercial products, are a valuable approach for improving learner outcomes in a rigorous and iterative way, while also contributing to the burgeoning literature on social-psychological interventions,” Pearson’s study reads. [Edweek h/t Kate Crawford]