Everything will be suspicious

Longtime NoContest.ca friend Chet Wisniewski has “Three Cybercrime Predictions in the Age of ChatGPT.” I don’t know anyone who writes more clearly and helpfully on these things.

Organizations (like my own) have trained their employees to recognize phishing and other types of scams. Such training, it seems, may be of little use going forward. In this piece written for the Forbes Technology Council, Chet writes,

We’ve relied on end users to recognize potential phishing attacks and avoid questionable Wi-Fi—despite the fact that humans aren’t generally as good at recognizing fraud as we believe.

Still, employees have previously had some success in spotting fishy messages by recognizing “off-sounding” language. For example, humans can notice language irregularities or spelling and grammar errors that signal phishing attempts, like a supposed email from an American bank using British English spelling.

AI language and content generators, such as ChatGPT, will likely remove this final detectable element of scams, phishing attempts and other social engineering attacks. A supposed email from “your boss” could look more convincing than ever, and employees will undoubtedly have a harder time discerning fact from fiction. In the case of these scams, the risks of AI language tools aren’t technical. They’re social—and more alarming.

Developing programs to detect ChatGPT content and to warn users will run into this dilemma, though:

Many legitimate users are already using the tool to quickly create business or promotional content. But legitimate use of AI language tools will complicate security responses by making it more difficult to identify criminal instances.

For example, not all emails that include ChatGPT-generated text are malicious, so we can’t simply detect and block them as a blanket rule. This removes a level of certainty from our security response. Security vendors may develop “confidence scores” or other indicators that rate the likelihood that a message or email is AI-generated. Similarly, vendors may train AI models to detect AI-generated text and add a warning banner to user-facing systems. In certain cases, this technology could filter messages from an employee’s inbox.
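To make the "confidence score" idea concrete, here is a minimal sketch of how a mail-filtering step might act on an AI-likelihood score. Everything in it is hypothetical: the score_ai_likelihood detector, the thresholds, and the banner wording are my own illustrative assumptions, not Chet's or any vendor's actual implementation.

```python
# Hypothetical sketch: route an email based on a (made-up) AI-likelihood score.
# The detector, thresholds, and banner text are illustrative assumptions only.

WARN_THRESHOLD = 0.6        # assumed: above this, show a warning banner
QUARANTINE_THRESHOLD = 0.9  # assumed: above this, hold the message for review


def score_ai_likelihood(message_body: str) -> float:
    """Placeholder for a vendor model that rates how likely text is AI-generated.

    Returns a value between 0.0 (almost certainly human) and 1.0
    (almost certainly AI-generated). A real implementation would call a
    trained classifier; this stub exists only to make the flow runnable.
    """
    return 0.0


def route_message(message_body: str) -> tuple[str, str]:
    """Decide what to do with a message based on its AI-likelihood score."""
    score = score_ai_likelihood(message_body)

    if score >= QUARANTINE_THRESHOLD:
        # Very high confidence: hold the message instead of delivering it.
        return ("quarantine", message_body)
    if score >= WARN_THRESHOLD:
        # Moderate confidence: deliver, but prepend a caution banner.
        banner = "[Caution: this message may contain AI-generated text.]\n\n"
        return ("deliver_with_banner", banner + message_body)
    # Low score: deliver unchanged.
    return ("deliver", message_body)


if __name__ == "__main__":
    action, body = route_message("Hi team, please review the attached invoice.")
    print(action)
```

The point of the sketch is the dilemma Chet describes: because legitimate email can also score high, the system can only warn or triage by confidence, not block AI-generated text outright.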

It’s a thrilling and unnerving time to be a business communications professor. I have a ton to learn and think about before my next term starts.

