A version of this story originally appeared in TechCrunch’s weekly robotics newsletter, Actuator. Subscribe here.
A big and often unremarked-upon aspect of being a reporter is knowing your audience. It’s not always as straightforward as it sounds — particularly when writing about tech. You’re always walking that tightrope between over- and under-explaining. Assuming too much knowledge makes text impenetrable for the non-expert, but getting caught up in the finer details is a recipe for condescension.
On Friday, I asked my LinkedIn followers to air their annoyances about mainstream robotics coverage (i.e., big publications that don’t specialize in the topic or even technology more broadly). For me, the headline “The Robots Are Coming” has been a minor but persistent annoyance; it seems to crop up at least once a week.
Other people’s responses were more or less what I was anticipating: robopocalypse/killer robots, a lack of historical context, too much focus on gimmicks and flashy form factors like humanoid robots. That’s all fair and certainly feedback I will apply to my own work going forward. “Robopocalypse” is a term I dropped from my vocab a while back, aside from references to the internet’s knee-jerk reaction to any new robot.
Another thing that cropped up in people’s complaints is the job conversation. As with robopocalypse headlines, I absolutely agree that things trend toward the sensationalistic. The “Robots Are Coming” is often amended to include “For Your Job.” It runs parallel to the “AI is taking your job” talking point. As a general rule, the AI conversation focuses on white-collar jobs and the robots on blue. It’s not one to one, but that’s largely how these things go: a robot in the factory, an AI in the office.
Sensationalism isn’t just a robotics thing. It’s an online journalism thing. My industry has been dying for longer than I’ve been a part of it (which is, itself, quite a long time). There are days when it feels like we’re all fighting for the same scraps of attention, hoping people can look up from TikTok long enough to skim a news article. When you’re vying for ever-shortening attention spans along with every other piece of instantly accessible information, you think a lot about framing.
Such blunt force not only does a disservice to the robotics industry, but it also drains all subtlety from what needs to be a truly nuanced conversation. I’m sure there are those who would rather skip the jobs conversation altogether, but I firmly believe that approach is equally problematic.
So let’s start from a point I think we can all agree on: Robots have and will continue to impact jobs. The presence of robots in the workforce is growing at a rapid rate. The more prevalent and sophisticated automation becomes, the greater impact it will have on the way we work.
I very intentionally chose “impact” as a neutral term. From a purely semantic standpoint, it’s neither inherently negative nor positive. The workforce of the future will be different, and robotics will almost certainly be a primary driver of that change.
I’ve attempted to take a nuanced approach to the jobs question in the pages of TechCrunch. Ultimately, it’s up to you to figure out whether I’ve succeeded on that front. A vast majority of people I speak to believe the impact will be positive — that the robots will either replace bad jobs or at the very least make them better. There’s plenty of truth in these statements, but I try to remain conscious of the fact that most of the people I speak to about robots are either roboticists or investors — roles that require a general sense of bullishness.
I don’t believe my role is devil’s advocate, but I do feel a sense of responsibility to remind readers that jobs aren’t just numbers. There’s a human behind each of them. In a position that requires me to frequently write stories about layoffs in the tens of thousands, it’s very easy to lose sight of that fact. I’ve certainly been guilty of leaning into the abstraction. This is why, for example, I frequently post job listings in Actuator. For a vast majority of us, our survival hinges on our ability to work. That’s just how the world operates.
It’s important to have conversations about automation’s long-term impact. It’s a debate that will continue to rage on into the foreseeable future, and I’m glad any time people are discussing it with all of the context and nuance required. I do, however, believe that we often discuss it at the expense of short-term impact — that is, those jobs that are immediately affected. This is where the controversial and less controversial topics of safety nets and upskilling come in. Those are topics we’ll have to dive into some other day.
We are not, however, avoiding controversy outright this week. In fact, in some circles the topic du jour is even more radioactive than either of the above — the robot tax. It’s also something we’ve not discussed much in Actuator, so it felt like time. Given the nature of this newsletter, what follows is going to be far from the be-all and end-all on the subject, but it’s a good opportunity to address something that has been in the ether for a long time.
Brookings described the concept thusly:
The basic idea behind a robot tax is that firms pay a tax when they replace a human worker with a robot. Such a tax would in theory have two main purposes. First, it would disincentivize firms from replacing workers with robots, thereby maintaining human employment. Second, if the replacement were made anyway, a robot tax would generate revenues for the government that would cover the loss of revenue from payroll taxes.
The Institute’s views on the topic notwithstanding, I think that largely covers the idea in broad strokes, though I would add to it. When I consider the concept, the “loss of revenue from payroll taxes” is secondary to the more pressing issue of the potential human toll.
Way back in 2017, we ran a column by Steve Cousins that concluded with:
Getting companies to pay their fair share of taxes won’t solve the larger societal challenge that automation will eventually displace low-skilled workers, nor would a robot tax. Instead, governments should focus on using corporate tax revenues to create free or low-cost education programs to prepare people to work alongside automation.
For those unable to find work in tomorrow’s tech-driven society, governments could provide universal basic income or other safety nets for the least-advantaged.
To which I say, these concepts are far from mutually exclusive. In fact, from where I sit, funding a social safety net is perhaps the strongest argument in favor of a robot tax. The following statement is the most political I’m going to get in today’s newsletter. Ready? Okay. I believe that feeding and housing those without means should be regarded as an essential function of government. So pairing these two concepts seems logical.
That said, I am neither advocating for nor against a robot tax. Honestly, I’m currently riding the fence on the subject. There are valid points on either side. Having discussed some of the pros above, I would say the primary argument against is concern over stifling innovation. At its heart, it’s the same basic argument against any manner of business tax, though with the robot tax, I would suggest that slowing innovation is kind of, sort of the point.
The question ultimately, I think, comes down to what’s more important — maintaining workplace status quo in an effort to keep more people employed or maintaining U.S. competitiveness? Again, I’m not operating under any illusion that you’re going to find the answers in this week’s robot newsletter. If I get more people thinking about the topic, however, I’ll consider it a job well done.
Hopefully at some point in the near future, I’ll have the time and bandwidth to do a deeper dive on the topic. For this week, however, I’m leaning heavily on a study out of MIT published late last year.
Published in the Review of Economic Studies, “Robots, Trade, and Luddism: A Sufficient Statistic Approach to Optimal Technology Regulation” seeks to provide a “general theory of optimal technology regulation.” The MIT economists behind the study — Arnaud Costinot and Iván Werning — ultimately settle on a sweet spot that includes modest taxation.
“Our finding suggests that taxes on either robots or imported goods should be pretty small,” Costinot told MIT at the time. “Although robots have an effect on income inequality . . . they still lead to optimal taxes that are modest.”
Prominent figures, including Bill Gates and Bernie Sanders, have called for some form of taxation over the years. In 2017, Gates told Quartz, “You ought to be willing to raise the tax level and even slow down the speed.” He cited, among other things, a broad, simultaneous displacement of jobs across a spectrum of industries.
Asked on CBS Sunday Morning about Gates’ position on the subject, Sanders answered, “That’s one way to do it. Absolutely.” His broader take on automation is exactly what you’d expect from the Vermont senator: “So if we can reduce the workweek, is that a bad thing? It’s a good thing. But I don’t want to see the people on top simply be the only beneficiaries of this revolution in technology.”
For a counterargument, we go back to Brookings, which highlights the aforementioned potential for automation to create more jobs in the long run:
“[T]he existing research suggests that firms adopting robots actually experience an increase in employment, undercutting a main argument in favor of a robot tax,” writes senior fellow Robert Seamans. “In addition, a robot tax would necessitate a definition of what comprises a robot. Settling on an appropriate definition will not be easy. Instead, policymakers should consider other policy changes to help workers, potentially including changing how capital and labor are taxed, but also focusing more broadly on labor market reforms.”
To date, only South Korea has come close to passing legislation, though that country’s approach is reducing tax credits by two percentage points, rather than introducing an altogether new tax.
To understand their research a bit better, I conducted an email interview with Costinot and Werning.
TC: “Robots, Trade, and Luddism” was published late last year. Have any more recent developments impacted your findings?
AC/IW: Since we wrote the paper, there have been huge advances and concerns about AI technologies. The results of our paper can be applied to this technology.
We provide a general formula that takes as input the impact of technology on the distribution of wages. This important input is not known for AI, and there is much ongoing work and speculation.
When discussing “redistribution,” is the idea that the taxes collected will directly benefit those whose jobs have been displaced by automation?
The main point is not the revenue from the robot tax, as much as the fact that the tax will shape demand for labor and thus wages and jobs. In particular, the potential wages people can earn may become more unequal with new technologies and the idea is that the tax can mitigate these effects. In a sense, one can think of this as pre-distribution, affecting earnings before taxes, instead of redistribution.
I’ve seen very mixed reactions with regard to the efficacy of “upskilling.” What is your sense on such campaigns when it comes to displaced blue-collar roles?
We have not studied this in detail. At a general level, the same forces are at play: Skill acquisition can be approached with an analysis similar to ours, and it represents the other side of the coin. If training can improve the distribution of skills, there is a force for subsidizing it. However, we have not surveyed the empirical literature on its efficacy or studied this question in detail.
You suggest that 1% to 3.7% on value is the sweet spot for taxing these systems. What begins to change above that threshold?
Yes, to be perfectly clear, this is what our formulas deliver given the available tentative evidence. But the impact on the wage distribution from automation is a key input for which there is much uncertainty.
To your question: At the optimum, you are trading off improving the pre-tax wage distribution with the efficiency losses of the tax, reaching a sweet spot. If the tax is too high, you have gone too far along this trade-off and the efficiency losses have started to be more important. A key element in evaluating this trade-off is whether you have other tools to redistribute: If you do not, then you may want higher taxes. However, in our benchmark, we allow for a nonlinear income tax as is available in the U.S. and advanced countries. In our calibration, in line with the literature, this income tax turns out to be relatively efficacious, explaining why we find a relatively low tax rate.
We didn’t come into this expecting this, and the relatively low number did surprise us. But the theory and the evidence pointed us there.
Does the implementation of a robot tax risk stifling innovation/competition? Is it viewed as an obstacle to increasing domestic manufacturing?
Yes, it would have both effects in principle, unless they are counterbalanced with other policies. In general, you can think of these as some of the efficiency losses [that] are part of the trade-off we considered, as discussed above, and the reason the tax is not found to be higher.
Professor Werning told MIT, “We think it’s incorrect to discuss this tax on robots and trade as if they are our only tools for redistribution.”
What are other potentially more impactful tools for addressing inequality?
The income tax in the U.S. (consolidated with state taxes, EITC [Earned Income Tax Credit], etc.) is a very important tool for redistribution and is a key policy instrument (as reflected by its size and broadness and the discussion and political debates about it). This to us is key and we feel that many discussions surrounding these issues seem to not incorporate this.