The last essay described how a small number of committed individuals can use new technology to take control of institutions staffed with ideologically opposed employees. In the comments Miles McStylez observes that:
the activists can not only be monitored via AI but also replaced by it in many cases.
This is clearly true. It’s not hype or buzzy conference-speak to say that AI can yield huge productivity improvements in almost any kind of office work, even if it will take years to fully reap the benefits. Yet that essay didn’t mention efficiency savings. That’s because control and efficiency are orthogonal concerns: sometimes increasing control means accepting less efficiency. It’s also because winning the battle to control an institution requires absolute focus. Generic process upgrades can easily become a distracting tarpit of vague requirements, scope creep, apathetic developers, union resistance and many other forms of institutional obesity.
Here are a couple more examples of how to control hostile institutions, but this time we’ll focus on areas where control is easier than efficiency.
I. Education
Gaining power in a democracy requires promising to address immediate concerns of large voting blocs. It’s difficult to find a voting bloc the left alienates harder and faster than parents, many of whom were horrified to discover during COVID what exactly was going on in their children’s classrooms. This is an especially neuralgic issue in the USA, where Libs of TikTok rose to fame by publishing videos of teachers engaging in unacceptably extreme left-wing behaviors. The woman behind it has now been appointed to a government position supervising the Oklahoma school library system.
Replacing teachers with AI is a project I’d rate as suicidally hard for any realistic government. Computers have been promising to revolutionize the classroom for decades, and in many ways they’ve done exactly that. But even classrooms full of iPads haven’t made a dent in the number of teachers. Lack of clarity over what exactly an AI teacher would do, uncertainty over its effectiveness and extreme (but understandable) conservatism amongst parents - none of whom want their child to be the subject of science experiments - all combine to make replacing teachers with AI an extremely long-term prospect, if it ever happens at all.
Monitoring teachers, on the other hand, should now be well within the reach of already existing technology. After using the techniques I previously described to take control of the education department, the curriculum can be adjusted to combat wokeness. AI can then be deployed to ensure compliance. The demands on the school system itself are minimal: just ensure there are microphones near the front of the classroom and a camera near the back so each classroom can be recorded continuously. A small ‘tiger team’ of Tech Right deployed into the education department can set up and mail small Raspberry Pi-style computers that simply forward the audio and video to an Amazon S3 bucket. Local contractors can handle installation and repair. From that point on it’s a matter of automatically transcribing the contents of classes and feeding those transcripts into LLMs tasked with searching for non-compliant teaching. This is the type of highly parallelizable big data task that modern cloud infrastructure is well optimized for. The biggest challenge for existing tech would simply be producing legible transcripts during times when the kids are all talking over each other, or picking up on non-compliance that is private or quiet enough to not be detected by the microphones.
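To make the shape of that pipeline concrete, here’s a minimal sketch in Python. Everything specific in it - the bucket name, the choice of boto3 plus an OpenAI-style API for transcription and review, the escalation step - is an illustrative assumption rather than a fixed design; any equivalent speech-to-text and LLM service would do.

```python
# Minimal sketch of the monitoring pipeline: pull each lesson recording from
# the S3 bucket, transcribe it, and ask an LLM whether the lesson complies
# with the published curriculum. Bucket name, model names and the escalation
# step are illustrative assumptions only.
import boto3
from openai import OpenAI

s3 = boto3.client("s3")
llm = OpenAI()

BUCKET = "classroom-av"                        # hypothetical bucket name
POLICY = open("curriculum_policy.txt").read()  # the approved curriculum rules

def transcribe(path: str) -> str:
    """Turn one lesson recording into a transcript."""
    with open(path, "rb") as f:
        return llm.audio.transcriptions.create(model="whisper-1", file=f).text

def review(transcript: str) -> str:
    """Ask the LLM whether the transcript complies with the curriculum policy."""
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You review classroom transcripts for compliance with "
                        "the curriculum policy below. Answer COMPLIANT or "
                        "NON-COMPLIANT, then give one sentence of justification."
                        "\n\n" + POLICY},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def run() -> None:
    # Each lesson is independent, so this loop parallelizes trivially across
    # cloud workers - the highly parallelizable property noted above.
    for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
        s3.download_file(BUCKET, obj["Key"], "/tmp/lesson.mp3")
        verdict = review(transcribe("/tmp/lesson.mp3"))
        if verdict.startswith("NON-COMPLIANT"):
            print(f"escalate {obj['Key']}: {verdict}")  # hand off to the human cadre

if __name__ == "__main__":
    run()
```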
Such a monitoring scheme can clearly increase control, but it’s not going to improve productivity. In fact productivity would go down, because all the existing teachers remain employed whilst you now also need to spend money on hardware, developers and a small cadre of loyal civil servants to handle escalations.
I believe that the education system is a sufficiently important creator of wokeness that the extra cost would be worth it, and that there’s enough inefficiency and unjustifiable activity in government that the money needed could be easily reclaimed elsewhere (e.g. by not funding bullshit academic studies). The cost of data processing just isn’t that large; by far the biggest expense would be the money spent on repairing broken or vandalized cameras.
II. Disability benefits assessment
Rich democracies lose large sums of money to welfare fraud. In America the problem is so large that disability entitlement spending tracks overall economic performance.
In systems that have better anti-fraud protections there is no link between disability awards and recessions, but actually creating this outcome is difficult. Assessing claims requires a huge workforce who are strongly incentivized to take pity on the poor people in front of them by giving away other people’s money. An award makes them feel good, a denial makes them feel bad and there’s nothing in it for them. Combating this requires a massive system of checklists, mechanical rule following, appeals, counter-appeals and so on. This is bad for both the genuinely disabled and the civil servants who assess these claims - real people will inevitably slip through the cracks, whilst others will learn how to exploit the system to pump money into their own pockets.
Disability assessment isn’t something that can be fully replaced by AI any time soon. Someone has to physically prod and poke a human being, take photos, make notes, tell them where the bathroom is and so on. An ideal assessor would also re-stabilize people if they lose emotional control, something best done by a friendly face.
Whilst you can’t directly improve efficiency using AI, you can certainly improve control. Automated analysis of case files can verify compliance with policy. Importantly, LLMs can also improve service quality - a critical part of building the voter consensus needed to make sweeping changes. How?
The system can be more flexible and less tickbox-oriented, because rigid bureaucracy can be replaced by loyalist AI that can review every case file whilst also understanding the intended spirit of the requirements. AI can have emotion trained out of it much more easily than a human can.
AI provides sufficient capacity to automatically re-review every case against new rules or hypotheticals. For example, if a particular condition was not previously covered by the rules but becomes eligible for welfare, everyone who previously presented with that condition but was rejected can be identified. Conversely, if it’s discovered that a particular condition is susceptible to fraud but can go under many names and descriptions (e.g. the many variants of feeling sad), then a tightening of the rules can be applied retroactively as well.
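As an illustration of how simple such a re-review pass could be, here’s a sketch that assumes case files are held as plain-text summaries and an OpenAI-compatible model is available. The rule change wording and the case-loading stub are placeholders, not real policy or data.

```python
# Sketch of a retroactive re-review pass, assuming case files are held as
# plain-text summaries and an OpenAI-compatible LLM is available. The rule
# change is expressed in ordinary prose.
from openai import OpenAI

llm = OpenAI()

def load_rejected_cases() -> list[str]:
    # Stand-in for the real case store; in practice this would query the
    # benefits database for previously rejected claims.
    return []

def decision_would_change(case_file: str, rule_change: str) -> bool:
    """Return True if the original decision would differ under the new rule."""
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are re-reviewing a disability benefits case. "
                        "Given the rule change below, answer YES if the original "
                        "decision would now be different, NO otherwise.\n\n"
                        "Rule change: " + rule_change},
            {"role": "user", "content": case_file},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Example: a condition previously outside the rules becomes eligible, so every
# rejected claimant who presented with it can be surfaced for a fresh award.
new_rule = "Condition X is now an eligible condition."  # placeholder wording
flagged = [c for c in load_rejected_cases() if decision_would_change(c, new_rule)]
```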
Different policies can be prototyped and re-applied against every open case file, providing precise projections of how much money could be saved by different rule tweaks. As models can not only re-assess cases but also generate ideas for rule changes, in theory a fully automated budget targeting system could be built. This latter idea is highly speculative and would need to be considered much more of an R&D project than some of the other proposals, but it’s feasible at least in principle.
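The projection loop itself is trivial on top of a re-assessment function like the one above. In this sketch the candidate policies, case data and average award figure are all placeholders, and would_award is passed in so the loop stays independent of any particular model.

```python
# Sketch of policy prototyping on top of an LLM re-assessment function.
# Policies, case data and the average award figure are placeholders.
AVERAGE_ANNUAL_AWARD = 15_000  # hypothetical figure, not a real statistic

def project_savings(open_cases, candidate_policies, would_award):
    """Estimate annual savings for each candidate rule tweak by counting the
    currently awarded cases that would no longer qualify under it."""
    projections = {}
    for name, policy_text in candidate_policies.items():
        flipped = sum(
            1 for case in open_cases
            if case["currently_awarded"] and not would_award(case["file"], policy_text)
        )
        projections[name] = flipped * AVERAGE_ANNUAL_AWARD
    return projections
```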
III. Control through stack ranking
You can’t directly control a workforce using AI. You can identify people who aren’t doing what they’re told, but this is of no use if those employees feel invulnerable. McStylez astutely observes that if workers genuinely don’t want to follow orders, then a threatening environment of cuts-induced job insecurity can be a strong incentive to do what’s being asked of them.
But how exactly do you do this? A typical approach to budget cuts is to fully defund initiatives that are disliked by the new leadership, but this means workers in ‘untouchable’ programmes see no incentive to sharpen up.
A better approach is enforced stack ranking. This requires managers to create a linear ordering of all their direct reports by comparing them against each other. In other words, they are required to work out which employees they value the most and the least. You then fire all the people at the bottom of the rankings.
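Mechanically there’s nothing complicated here. The sketch below uses illustrative numeric scores and a cut fraction purely as stand-ins for the manager’s comparative judgements:

```python
# Minimal sketch of forced stack ranking. The scores and cut fraction are
# illustrative stand-ins for the manager's actual comparative judgements.
def stack_rank(reports: dict[str, float], cut_fraction: float = 0.1) -> list[str]:
    """Order direct reports from most to least valued and return the names in
    the bottom cut_fraction, i.e. those to be let go."""
    ordered = sorted(reports, key=reports.get, reverse=True)
    n_cut = max(1, int(len(ordered) * cut_fraction))
    return ordered[-n_cut:]

team = {"alice": 0.90, "bob": 0.70, "carol": 0.72, "dave": 0.69}
print(stack_rank(team, cut_fraction=0.25))  # -> ['dave']
```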
Stack ranking is a brutal but effective way to quickly reduce the size of an organization. Practiced repeatedly it destroys morale, because it’s entirely normal to find teams made up of uniformly acceptable performers with no obvious negative outliers. In such teams stack ranking demands that managers engage in near-random firings, something guaranteed to wreck whatever institutional loyalty may still exist amongst the survivors.
Stack ranking also implicitly assumes that managers will be judged on the basis of their own delivery, and thus will want to actually let go of the worst employee by merit (vs the whitest man, etc). It isn’t something that can be dropped into a culture in which merit isn’t incentivized at all. How to address that problem is a topic for a later post.
Nonetheless, during a period in which you need to rapidly establish compliance across a large group of people, the combination of AI monitoring, a lightweight offboarding process and forced stack ranking for troublesome departments can be effective.
It should hopefully go without saying that rule by fear is a poor strategy compared to genuinely inspiring and motivating the troops. These articles on AI-driven institutional takeover are intended to describe short term strategies capable of delivering immediate wins for sane people faced with departments purged by the woke, buying time to lay the groundwork for a true ‘hearts and minds’ campaign of re-normalization. What such a campaign might look like is a topic that we will return to in future.