Catch-22

Noted via MindMatters: an article in Wired discusses a growing problem for freelance writers. Publishers are switching to AI for their writing, and at the same time they're kicking out human writers on "suspicion of using AI."

= = = = = START QUOTE:

As a local journalist in Bucyrus, Ohio, Gasuras relies on side hustles to pay the bills. For a while, she made good money on a freelance writing platform called WritersAccess, where she wrote blogs and other content for small and midsize companies. But halfway through 2023, the income plummeted as some clients switched to ChatGPT for their writing needs. It was already a difficult time. Then the email came.

“I only got one warning,” Gasuras said. “I got this message saying they’d flagged my work as AI using a tool called ‘Originality.’” She was dumbfounded. Gasuras wrote back to defend her innocence, but she never got a response. Originality costs money, but Gasuras started running her work through other AI detectors before submitting to make sure she wasn’t getting dinged by mistake. A few months later, WritersAccess kicked her off the platform anyway. “They said my account was suspended due to excessive use of AI. I couldn’t believe it,” Gasuras said. WritersAccess did not respond to a request for comment.

= = = = = END QUOTE.

Though Wired doesn’t say it, the trick is obvious. The publisher wants to go purely robotic but can’t fire all its humans without a plausible reason. So the “checker” is worth paying for: it provides cover against lawsuits over arbitrary firing, or perhaps avoids offending a shareholder who dislikes AI.

It’s another variation on the old cancel trick. Execs and professors who are fired for “politics” or “sex” or “plagiarism” are really fired because the boss simply doesn’t like them. Firing for the honest reason invites lawsuits, so a plausible dishonest reason has to be found. Sometimes the fake reason is real but wouldn’t be worth firing over on its own. Often the fake reason is constructed through entrapment. Clearly this AI checker is pure entrapment.