2022 Metagame Forecast: Stoa TP (Labor and Weapons)

In the last post, we analyzed the big picture stuff that will shape the metagame this year. Today, we’ll look at the four main topic areas. You should focus your research here. When you’re looking for a new case - even late in the season - these are always good starting points.

Automation of Labor

Robots - and now, AI - are eliminating jobs at an unprecedented rate. The more sophisticated the AI, the less safe any particular job seems to be. The future will fall somewhere between these two extremes:

Extreme 1: More jobs. Almost every machine ever invented has disrupted the demand for labor. Horse-drawn plows eliminated the need for teams of humans. The automobile eliminated a wide range of jobs in the stable and wagon professions. But the improved efficiency created all kinds of new jobs and career possibilities that didn’t exist before. The AI revolution will do the same, creating exciting new jobs in fields we can’t even imagine yet.

Extreme 2: No demand for labor. The whole concept of a workforce will become obsolete, shattering contemporary economic models like capitalism and socialism. We’ll have to build a whole new economic system to adapt to a post-labor world, or impose strict regulations to keep AI from taking over.

Learn how to argue both extremes and a variety of positions in between. Affirmatives can embrace either direction, arguing that the USFG should change its policy to either accelerate or prevent the automation of labor.

This topic area just looks at the economic effect on labor. It doesn’t ask whether AI will be inferior to the human workers it replaces. That comes later.

Autonomous Weapons

Flying killer robots have been around for decades. They were scary enough when they were controlled by human drone pilots. But as AI takes over, we're in danger of building a world where humans live in fear of machines. This is, of course, such a ubiquitous trope in science fiction that it’s practically its own genre.

A few months ago, a lethal drone strike was carried out entirely by AI without controller input. In other words, a human was killed by an algorithm. The future is now, and it is terrifying.

Many of the brightest minds alive today are making dire predictions about the future of autonomous weapons. We have every reason to be concerned. This will probably become a vanilla case area.

Affirmatives are likely to take an AI Bad stance here and argue that autonomous weapons should be banned or heavily regulated. Negatives will either argue that the dangers are overblown, or that we risk becoming vulnerable to enemies who are less scrupulous in their development.

If you enjoy exotic cases, google “Gray Goo.”

In the next post, we’ll examine two more case areas that deserve your attention.

Joseph Abell