DNS Filtering Blog: Latest Trends and Updates | DNSFilter

Imposter Syndrome: AI, to Be or Not to Be (Phished)

Written by Gregg Jones | Mar 22, 2023 8:00:00 PM

It’s late, and this paper on the esoteric usage of small bowls of fruit in Renaissance-era paintings as extended metaphors for… what was it again?

“Man, this paper is a drag. I’ve lost the thread. Do I belong here?” you think while staring at the ceiling. You pull out your phone and open Twitter. A doom scroll later, you end up on a thread about working late and work habit shortcuts. Someone has got to have figured this out before.

User Amogus420 recommends using bots. “It’s just that easy! Plug in one or two things and bam, full paper, full credit!”

“Oh yeah,” you mutter under your breath. AI-generated content. You’ve messed with a few filters before to try that one trend, and a different one to touch up a photo. Instead, it added a seventh finger and inexplicable rows of teeth. Yikes. That wasn’t a thought to have at 2 a.m.

You begin wondering—what else can it do? Wait, isn’t this a weird version of cheating? Is this malicious? 


Discussions across the internet are ablaze with whispers of AI-generated content. Artwork, articles, voice emulators, bots—there are many ways AI can be leveraged. A popular tool that’s come up recently is the AI text-generation bot ChatGPT. But what is ChatGPT? Can it really help you with that Renaissance fruit problem?


Words, Words, Words 

ChatGPT (Generative Pre-trained Transformer) is a machine learning model that uses deep neural networks to generate text that resembles human language. The model is trained on large amounts of text data from a variety of sources, such as books, articles, and websites, using a technique called unsupervised learning. During training, the model analyzes the patterns and relationships within the text data to learn how words and phrases are used in context. It does this by breaking the text down into smaller units, called tokens, and representing each token as a vector in a high-dimensional space.

Once the model has been trained, it can be used to generate new text by predicting the most likely sequence of tokens given an initial prompt or seed text. This is done by feeding the prompt into the model and letting it recursively generate the next token in the sequence based on the probabilities of different possible options. The resulting text output is not pre-programmed or hard-coded but is generated on the fly based on the model’s learned patterns and relationships. This allows ChatGPT to produce a wide range of responses that are often indistinguishable from human-generated text.
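To make the “predict the next token” idea concrete, here’s a toy sketch in Python. It’s a simple bigram model (each word predicts the next based on observed frequencies), which is nowhere near the scale or sophistication of a real GPT, but it shows the same generate-by-sampling loop:

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Learn which token tends to follow which, from raw text."""
    tokens = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def generate(follows, seed, length=10):
    """Repeatedly pick a likely next token, starting from a seed word."""
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no known continuation; stop generating
        words = list(options.keys())
        weights = list(options.values())
        # Sample the next token in proportion to how often it was seen
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)
```

A real model works over learned vector representations and billions of parameters rather than raw word counts, but the loop is the same: look at what came before, weigh the options, pick a token, repeat.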

There’s no denying that this can be a useful tool, and it does bring into question the ethics of using it. You grit your teeth at the prospect of getting roasted by your professor for cheating.

Can this be used in the cybercrime space? The unfortunate answer is yes.

To understand how, let’s look at the anatomy of a phish and how these learning models can be tweaked to make a more convincing piece of bait.

Anatomy of a Phish

Typically, what makes a phish most compelling and effective is directly correlated with how panicked or concerned the victim is:

- An email from your bank stating a large amount of funds transferred, featuring a big red “Challenge Transaction” button.
- A huge subscription bill you definitely didn’t pay for that displays a phone number to cancel.
- An impassioned email from a relative stating that they need money now for a ticket out of the *country* they were visiting; their wallet was stolen, can they borrow your card?

These all provoke visceral responses and often offer an immediate “out” in the message. This is how the phish commonly gets your information or money, or scams you out of 12 Google Play cards valued at $100. A distressing situation followed by an immediate drive to respond: horrifically simple and terribly effective.

Often, if you take a breath and look at the email, text, or whatever the medium logically, you can see through these impassioned pleas:

- “Wait… my bank’s logo hasn’t looked like that in years!”
- “This grammar is off, and there hasn’t been a charge or notification from my credit card…”
- “I don’t even have an Aunt Gernoma!”

And thus disaster is averted.
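Those gut checks can even be sketched as code. Below is a minimal, hypothetical Python heuristic; the domains, keywords, and example addresses are illustrative inventions, not a real detection engine, but it captures the same “pause and look” instincts:

```python
import re

# Words that manufacture urgency (an illustrative, hypothetical list)
URGENT_WORDS = {"immediately", "urgent", "suspended", "verify now"}

def red_flags(sender, subject, body, trusted_domains):
    """Return a list of reasons to slow down before clicking anything."""
    flags = []

    # Check 1: does the sender's domain match one you actually know?
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in trusted_domains:
        flags.append(f"sender domain '{domain}' is not one you know")

    # Check 2: is the wording trying to panic you?
    text = (subject + " " + body).lower()
    if any(word in text for word in URGENT_WORDS):
        flags.append("manufactured urgency in the wording")

    # Check 3: does a link point at a raw IP instead of a named site?
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        flags.append("link points at a raw IP address, not a named site")

    return flags
```

Real mail filters do far more (SPF/DKIM sender verification, link reputation, and so on), but the principle is the same as the human one: slow down and count the red flags.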

The (Phishing) Lure of AI

What if AI were trained on more common “alert” emails to make them more believable? An AI system can analyze data about the recipient, such as their location, interests, and previous online behavior, to tailor the email to them and increase the likelihood that they will fall for the scam.

“I definitely subscribe to this meal subscription box! How dare they charge me 200 dollars more?” 

And because the logic step is less effective, you could be driven by impulse to trust the email or notification.

I asked ChatGPT to make me a phishing email premise. I got this in response:

“I'm sorry, but I cannot provide information on illegal or unethical activities, such as phishing. Phishing is a fraudulent attempt to obtain sensitive information such as passwords and credit card details by posing as a trustworthy entity. It is a criminal offense and can harm individuals and organizations. I encourage you to act responsibly and within the bounds of the law.”
  
It’s good that it has some ethical restrictions in place. But there are certainly ways to circumvent them. And as other AI bots are developed, there are undoubtedly less-ethical makers who could, in theory, train a model on the ins and outs of generating phishing sites.

Cyber criminals can leverage Artificial Intelligence (AI) in several ways, including:

1. Automated hacking: AI can be used to automate the process of discovering and exploiting vulnerabilities in software and systems.

2. Mimicking human behavior: AI can be used to create bots that can mimic human behavior, such as sending emails or messages that appear to be from a trusted source.

3. Evading detection: AI can be used to develop malware that can evade traditional security measures, such as antivirus software.

4. Predicting security measures: AI can be used to analyze patterns of behavior and predict future security measures, making it easier for cybercriminals to stay ahead of the curve.

Preventing Existential Crisis (and Phishing) 

Okay, this is scary. What can I do? Use the same abundance of caution as always—AI research is accelerating, and this trend will not stop anytime soon.

Strong ways to start: take a step back, examine closely, and evaluate the situation. Go to the origin of the email. Instead of trusting a notification from PayPal that feels weird and distressing, log in from a separate browsing instance and contact support directly from there. This will help you discover whether it’s smoke and illusion, or something legitimately concerning.

Now, back to your late-night fruit-paper quandary: Is it ethical to use AI-generated text?

Much like a phish, it could be seen as duplicitous to use it without citing your sources. It’s probably not wise to use it to make this paper more bearable.

Instead, make a cup of tea and try again. Most cheat-detection engines will likely flag patterns of words that don’t match your writing, or sources the bot never actually cited. It can certainly be useful for getting started, brainstorming, or even as a search engine of sorts, but it’s a big no from me and many honor boards.

Before I leave you here, did you realize you’ve been “phished” at some points in this article? I used ChatGPT to generate a few paragraphs here and there. See if you can point them out. If you can, reach out to let us know! 

That’s all from the Intelligence Desk today. See you next time—with a 100% human-written article.