When we talk about how artificial intelligence threatens to impact jobs, we’re usually talking about how machine learning threatens to impact jobs. As the ‘hottest’ subfield of AI going, i.e. the one receiving the lion’s share of research money and commercial investment, it’s pretty crucial to understand how, specifically, it’s going to roll out in offices and workplaces: which jobs, and which tasks within those jobs, it stands to automate. Yet while a number of studies have endeavoured to examine the impact of automation writ large on the employment picture, fewer have homed in on machine learning specifically.
And if anyone’s well-equipped to do so, it’s Dr. Tom Mitchell. As the first Chair of the first-ever Machine Learning Department at a major university, at Carnegie Mellon, and an accomplished researcher in the field, he’s uniquely qualified to judge how machine learning will creep into our work. I might as well mention that he literally wrote the textbook on machine learning, too. It’s called, wait for it, Machine Learning.
In 2017, along with fellow business automation scholar Erik Brynjolfsson, Mitchell published a study in Science that detailed the raft of impacts machine learning was likely to have on different kinds of jobs. The study aimed to evaluate “the potential for applying machine learning” to the 2,069 work activities, 18,156 tasks, and 964 occupations in the O*NET database. (O*NET, if you’re unfamiliar, is a detailed catalogue of occupations maintained by the US Department of Labor.) It did so by applying a rubric of 21 criteria the researchers determined would mark a task as exceptionally machine-learnable.
They argue that machine learning is now unquestionably a “general purpose technology,” and as such the study attempts to break down where it would affect or replace specific kinds of work—where, as they wrote, jobs were suitable for machine learning, or SML. It gets pretty specific: Does a job require mapping well-defined inputs to similarly well-defined outputs? Does it involve, say, captioning images in a textbook, or correctly labelling medical records? If so, machine learning will probably automate that part of the job.
“The first thing we found,” Mitchell tells me in an interview, “is that many, many jobs, the majority of jobs are going to be affected by machine learning.” He pauses, goes on: “The next thing we found was that very few of those jobs will be completely automated. Instead, the predominant thing that you see is that most jobs will be affected because the bundle of tasks that make up that job—some of those tasks that are amenable to machine learning, semi-automation or automation.”
Last year, Mitchell, Brynjolfsson, and Daniel Rock, a researcher at the MIT Initiative on the Digital Economy, published another paper further refining the analysis, adding two more items to the rubric used to evaluate the sorts of tasks that comprise jobs, and then determining how SML each profession is overall, based on the number of its tasks machine learning stands to replace in the coming years. Jobs like massage therapist turned out to have the lowest SML index, while concierge scored the highest—the largest share of tasks within that job’s purview stand to be affected by machine-learning-driven automation.
Both studies conclude that unlike, say, industrial automation, where a robotic factory arm is apt to replace entirely an erstwhile position on the assembly line, machine learning is poised to only eliminate parts of jobs, or some of the tasks typically associated with them.
“What we think is likely to happen,” Mitchell says, “is that we won’t see wholesale elimination of most jobs, but what we will see is the majority of jobs being impacted in a way that results in jobs being recombined in a way that changes the distribution of tasks.”
Mitchell and Brynjolfsson’s work asserts that jobs will need to be “redesigned”—the collection of tasks that makes them up rebundled and reorganised. “Many job descriptions are going to change, in terms of the distribution of tasks associated with those jobs,” he says. “I project that future doctors in coming decades will get more help from computers than they did before in making diagnoses, but not in applying the therapies they use.” The need for secretaries to do certain kinds of clerical work will disappear, but interfacing with clients may become more of a priority.
“Human-to-human communication seems like the kind of tasks that will not be suitable to machine learning,” Mitchell says.
All told, this is interesting and important work, as it catalogues the breadth of machine learning’s impacts on work at a nuanced, task-by-task level. Yet I can’t shake the feeling that it’s overly optimistic in its conclusions and recommendations. Where Mitchell and his coauthors see opportunities for “rebundling,” I see opportunities for job degradation and wage exploitation.
Certainly, doctors—a very well paid, highly skilled profession—will be insulated from machine learning, until, say, robotic surgeons become so advanced they can perform operations. Which is to say, maybe never. But, to use Mitchell’s example, if a secretary or assistant isn’t needed to schedule meetings, keep the books, file expense reports, etc—all things that machine learning is slated to automate—will many organisations see fit to keep them all employed on the grounds of human-to-human communication?
Maybe, maybe not. And I’m not saying the world absolutely needs all its secretaries or tons of clerical workers, just that machine learning-enabled automation may erode those jobs to the point where it is easier to fill the slot with lower-compensated part-time work or do without the worker entirely—which would cause a significant disruption in the current shape of the employment landscape.
Another example we talked about was truck driving: “In truck driving,” Mitchell said, “there’s driving the truck on highways, pulling it off the road, getting the truck loaded and unloaded. And there’s a collection of tasks where you might get to the point where the long-distance driving of the truck is automated, but getting it loaded is much harder to automate.”
It’s another case where, from where I’m sitting, employers may (eventually) simply add the task of unloading the truck to a warehouse worker’s bundle, and eliminate the long haul job. Many lower skilled jobs could similarly be combined or parcelled out into gig work. As a rule, I tend to feel like the “human” component often described as being irreplaceable by automation consultants and economists is overplayed—Amazon saying that cashiers will become greeters, for instance—and will at the very least be ripe for elimination or degradation to part-time status in the event of lean times or falling profits. We’re already seeing that happen—and workers pushing back—in the service sector, where automation is taking root.
When I asked Mitchell about that prospect, he said it was an interesting problem, but he was optimistic that government could help incentivise better rebundling of tasks.
“Once you get into the mode of thinking that jobs are likely to be redefined in terms of task bundle, because that will be the optimal thing to require us to be doing, then you can think about the incentives you want to put in place to encourage certain kinds of training, how to improve existing jobs,” he said. “Rebundling the job can sometimes make it more attractive, too.”
So—how worried should you be that your job is going to be learned by a machine, and bundled up and repackaged?
Mitchell and Brynjolfsson’s paper offers eight guidelines that come in handy. (All 21 criteria can get a bit wonky, so these are the ones they shared, in greater detail, in the Science paper.) If this describes your job, or a task in your job, then an algorithm can probably be taught to do it.
1. Learning a function that maps well-defined inputs to well-defined outputs
Among others, these include classification (e.g., labelling images of dog breeds or labelling medical records according to the likelihood of cancer) and prediction (e.g., analysing a loan application to predict the likelihood of future default).
2. Large (digital) data sets exist or can be created containing input-output pairs
The more training examples are available, the more accurate the learning.
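Criteria 1 and 2 together describe classic supervised learning: given enough labelled input-output pairs, fit a function that reproduces the mapping. Here is a minimal sketch in plain Python; the 1-nearest-neighbour rule and the toy data are my own invention for illustration, not from the paper.

```python
# Supervised learning in miniature: learn a mapping from well-defined
# inputs (2-D feature vectors) to well-defined outputs (labels) using
# the 1-nearest-neighbour rule. All data here is invented toy data.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_pairs, x):
    # The label of the closest training input wins.
    return min(training_pairs, key=lambda pair: distance(pair[0], x))[1]

# Invented input-output pairs: point -> "cat" or "dog"
training_pairs = [
    ((0.0, 0.0), "cat"),
    ((0.1, 0.2), "cat"),
    ((1.0, 1.0), "dog"),
    ((0.9, 1.1), "dog"),
]

print(predict(training_pairs, (0.05, 0.1)))  # → cat
print(predict(training_pairs, (0.95, 1.0)))  # → dog
```

Real systems use far richer models than nearest-neighbour, but the shape of the problem is the same: the algorithm never needs to be told the rule, only shown enough examples of inputs paired with correct outputs.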
3. The task provides clear feedback with clearly definable goals and metrics
ML works well when we can clearly describe the goals, even if we cannot necessarily define the best process for achieving those goals.
4. No long chains of logic or reasoning that depend on diverse background knowledge or common sense
ML systems are very strong at learning empirical associations in data but are less effective when the task requires long chains of reasoning or complex planning that rely on common sense or background knowledge unknown to the computer. Andrew Ng’s “one-second rule” suggests that ML will do well on video games that require quick reaction and provide instantaneous feedback but less well on games where choosing the optimal action depends on remembering previous events distant in time and on unknown background knowledge about the world.
5. No need for detailed explanation of how the decision was made
Large neural nets learn to make decisions by subtly adjusting up to hundreds of millions of numerical weights that interconnect their artificial neurons. Explaining the reasoning for such decisions to humans can be difficult because [deep neural networks, often used in machine learning] often do not make use of the same intermediate abstractions that humans do. While work is under way on explainable AI systems, current systems are relatively weak in this area. For example, whereas computers can diagnose certain types of cancer or pneumonia as well as or better than expert doctors, their ability to explain why or how they came up with the diagnosis is poor when compared with human doctors. For many perceptual tasks, humans are also poor at explaining, for example, how they recognize words from the sounds they hear.
6. A tolerance for error and no need for provably correct or optimal solutions
Nearly all ML algorithms derive their solutions statistically and probabilistically. As a result, it is rarely possible to train them to 100% accuracy. Even the best speech, object recognition, and clinical diagnosis computer systems make errors (as do the best humans). Therefore, tolerance to errors of the learned system is an important criterion constraining adoption.
7. The phenomenon or function being learned should not change rapidly over time
In general, ML algorithms work well only when the distribution of future test examples is similar to the distribution of training examples… (e.g., email spam filters do a good job of keeping up with adversarial spammers, partly because the rate of acquisition of new emails is high compared to the rate at which spam changes).
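A toy illustration of criterion 7 (all numbers synthetic, invented for this sketch): a threshold classifier learned on one distribution holds up on test data drawn the same way, but degrades once the phenomenon it models drifts.

```python
import random

random.seed(0)

# "Learn" a decision threshold from training data drawn from one
# distribution: class A centred at 0.0, class B centred at 2.0.
train_a = [random.gauss(0.0, 0.5) for _ in range(500)]
train_b = [random.gauss(2.0, 0.5) for _ in range(500)]
threshold = (sum(train_a) / len(train_a) + sum(train_b) / len(train_b)) / 2

def accuracy(xs_a, xs_b):
    # Fraction of points the learned threshold classifies correctly.
    correct = sum(x < threshold for x in xs_a) + sum(x >= threshold for x in xs_b)
    return correct / (len(xs_a) + len(xs_b))

# Test data drawn from the same distributions as training:
same_a = [random.gauss(0.0, 0.5) for _ in range(500)]
same_b = [random.gauss(2.0, 0.5) for _ in range(500)]

# Test data after the phenomenon drifts (class B now centred at 0.5):
drift_b = [random.gauss(0.5, 0.5) for _ in range(500)]

print(accuracy(same_a, same_b))   # stays high
print(accuracy(same_a, drift_b))  # drops sharply
```

The spam-filter example in the quote is the happy case: new labelled email arrives fast enough that the model can be retrained as the distribution shifts. When retraining data lags the drift, accuracy quietly erodes.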
8. No specialised dexterity, physical skills, or mobility required
Robots are still quite clumsy compared with humans when dealing with physical manipulation in unstructured environments and tasks. This is not so much a shortcoming of ML but instead a consequence of the state of the art in general physical mechanical manipulators for robots.
With those criteria in mind, it’s worth taking a minute to consider the ‘bundle’ of tasks your job entails, and seeing how much might be automated, how the texture of your workload stands to evolve (or devolve). The actual politics of automation are messy, and vary wildly from workplace to workplace, but Mitchell and co. are probably right—a lot of office job automation, especially, will unfold task by task.
Featured photo: Jonas Gratzer (Getty)