Navigating the Evolving Landscape of Scams in the Age of AI
Understanding the Threat of Scams
Recently, I received an unexpected email from a friend in their 80s, who is quite adept with technology.
The email was brief:
Subject: Are you there
Hi,
Can I ask a quick favor from you, please?
Thanks,
[Friend's Name]
Typically, I would expect them to address me by name, but I didn't think much of it at the time. The email originated from their usual address, and my reply also went to that same account—my first step in assessing its authenticity.
Initially, the email landed in my junk folder, which made me second-guess its legitimacy. My friend had recently acquired a new computer, so it was plausible that their settings were off. Still, since I was replying directly to their account, I felt there was little risk involved. After all, they could have genuinely needed assistance, perhaps to take care of a minor task like putting out their garbage.
I responded:
Hi [Friend's Name],
Did you send this? It went into my junk folder.
I'm at the office today, but happy to help if I can.
Paddy
Alarm bells began to sound with the next message I received:
Thanks for your quick response. It's my niece's birthday, and I need to get her a Google Play voucher as a gift. However, I'm having trouble buying it online while traveling. Could you help me purchase it from a nearby supermarket like Sainsbury's or Tesco? I promise to reimburse you as soon as I'm back. I just need to spend £200, and I’ll let you know what to do with the voucher once you have it.
Thanks,
[Friend's Name]
At this point, numerous indicators pointed to a scam, leading me to text my friend, who confirmed that their account had indeed been compromised. Thankfully, they had not lost any money, but it raised concerns about how many others might fall prey to such tactics.
Scams of this nature are alarmingly common, yet this particular instance was notably convincing at first glance. Working in technology, I pride myself on detecting red flags: most scams I encounter arrive under a legitimate name but from a different email address, as the follow-up message in this case demonstrated.
How many individuals actually succumb to these scams? They must yield some profit; otherwise, scammers wouldn't persist. This case involved a modest amount of money compared to many scams, yet it exceeded what I knew my friend would typically spend on a gift.
The Impact of AI on Scams
This experience prompted me to ponder the implications of current advancements in Artificial Intelligence, or more specifically, Machine Learning, and how they may render future scams even more convincing. This isn't intended to incite fear; rather, it is a cautionary contemplation of what lies ahead and how we can fortify ourselves against potential scams.
Many of us have encountered text messages purporting to be from loved ones who claim to have "lost their phone," hence the different number. While some may be fooled by such stories, these scams usually require a specific set of circumstances to fully ensnare someone. For instance, receiving a message from an adult child living away from home, combined with late-night fatigue or having consumed alcohol, could lead to hasty decisions in a moment of concern for their safety.
Most people would likely remain skeptical of such scams, especially if they have an emergency "pass-phrase" established in advance to verify a genuine request for help. Setting this up before it's needed is highly advisable.
The first sign of a scam should be that the message doesn't come from your loved one's known number. However, consider a scenario where you receive a voicemail in your child's actual voice, expressing fear and requesting help, yet they can't answer your call because "someone might hear it." In such a case, you're left with the unsettling possibility of being manipulated by someone using voice-cloning technology.
The technology to imitate voices already exists. A quick online search will reveal various services capable of this. Even if a call originates from a different number, you may still believe you are conversing with your loved one, while in reality, someone else is communicating with you using a modified voice.
Emerging Technologies and Future Risks
Real-time video forgery, while not yet widely available to the public, is on the horizon. Current deepfake technology requires substantial processing time to create realistic videos, but advancements may soon allow convincing live interactions over video calls, making it far easier for scammers to deceive.
What does AI have to say about this?
I consulted ChatGPT for its perspective on AI's role in the future of scamming, and it summarized the double-edged nature of AI's influence. On one hand, AI can enhance fraud detection systems to identify and combat scams more effectively. These systems analyze patterns and anomalies at a speed beyond human capabilities, potentially reducing the number of successful scams.
Conversely, scammers can exploit AI to refine their techniques, automating the creation of convincing phishing emails, deepfake videos, or chatbots that mimic real individuals. This makes distinguishing between genuine and fraudulent communications increasingly challenging. Scammers may also use AI to analyze and predict human behavior, leading individuals to divulge sensitive information or make hasty financial decisions.
As AI continues to evolve, the conflict between those developing fraud detection and those exploiting AI for scams will persist. Staying informed about emerging scams and maintaining vigilance in protecting personal information will be essential. Additionally, collaboration between policymakers and technology companies is crucial to establish regulations and ethical guidelines to prevent AI misuse.
Staying Alert: Protecting Yourself and Loved Ones
Uncomfortable as it may be, it's vital to recognize the potential risks posed by evolving technology. This discussion extends beyond typical scam calls from banks or loan companies; always approach such calls with skepticism and verify their authenticity using known contact numbers.
When it comes to your loved ones, the first step is to attempt contacting them through their known number. If they respond without any awareness of the request for assistance, you can conclude that a scammer has reached out to you. Keep in mind that their phone might be turned off, out of credit, or even lost, allowing someone else to use it.
Security measures on phones are often limited. A scammer can easily move a SIM card into a different device and, if the phone or SIM hasn't been reported lost, continue to make and receive calls from that number, potentially with the help of voice-altering software.
Another concern is "spoofing," a tactic allowing calls or texts to appear as though they're coming from familiar numbers. With voice-changing technology, a caller could mimic your loved one's voice, adding to the deception.
I emphasize the importance of awareness regarding technology's capabilities and the necessity for a healthy skepticism about who you are communicating with. If something feels off, investigate further. Look for inconsistencies in their language or ask questions that only they could answer. Establish a unique code or phrase that verifies their identity—something memorable yet not easily discoverable by outsiders.
Lastly, encourage those close to you to keep their social media profiles private, sharing only with trusted contacts. Avoid participating in seemingly harmless quizzes or requests that could reveal personal security information.
As we navigate the future of technology, it’s crucial to acknowledge that there will always be individuals seeking to deceive us out of our hard-earned money.