PSA warning about ways scammers and spammers use AI to defraud you

This was something that was posted to Mastodon by Brian Krebs. He wrote:

FBI releases PSA warning about all the ways that cybercriminals are using AI to commit fraud on a larger scale and to increase the success of their scams. The advisory warns about deepfaked videos and voice calls, as well as AI-generated profile images used to impersonate people.

Among their recommendations:

- Create a secret word or phrase with your family to verify their identity.

- Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic teeth or eyes, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, lag time, voice matching, and unrealistic movements.

- Listen closely to the tone and word choice to distinguish between a legitimate phone call from a loved one and an AI-generated vocal cloning.

- If possible, limit online content of your image or voice, make social media accounts private, and limit followers to people you know to minimize fraudsters’ capabilities to use generative AI software to create fraudulent identities for social engineering.

- Verify the identity of the person calling you by hanging up the phone, researching the contact information for the bank or organization purporting to call you, and calling that phone number directly.

- Never share sensitive information with people you have met only online or over the phone.

- Do not send money, gift cards, cryptocurrency, or other assets to people you do not know or have met only online or over the phone.

https://www.ic3.gov/PSA/2024/PSA241203
image: A PSA from the FBI reads:

The FBI is warning the public that criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale which increases the believability of their schemes. Generative AI reduces the time and effort criminals must expend to deceive their targets. Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud. The creation or distribution of synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes, such as fraud and extortion. Since it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes to increase public recognition and scrutiny.

AI-Generated Text
Criminals use AI-generated text to appear believable to a reader in furtherance of social engineering, spear phishing, and financial fraud schemes such as romance, investment, and other confidence schemes, or to overcome common indicators of fraud schemes.

He subsequently wrote:

BrianKrebs: I would add to this list something I have tried to do with those in my immediate orbit who need a little more help against scams and spam: set their phone so that incoming calls are limited to people on their contacts list; all the rest go to voicemail. At this point, we are way beyond expecting everyone to be experts at spotting fake this or that.

The first part comes from this IC3 PSA, which is still informative today.

The second part comprises his thoughts on how we could solve this problem, though it creates difficulties for people who legitimately want to call.

I’d say in part that we need to collect all of a business’s phone numbers, or explain to them that our phone is set up so unknown callers go to voicemail, and that since we don’t know which number they’ll call from, they should leave a message so we hear from them. It’s more work, and that’s why people don’t do that second item.

Seeing how scammers are now using AI to do some of this work, like correcting spelling so messages look more real and generating pictures and video that look more convincing, we’ve really got to be on our guard.

Let’s step through a few of the headings in this PSA and see what we can gather from it. I’m not quoting it verbatim.

AI-Generated Text

They can now make the text read more naturally, as discussed above. Messages can go out faster, reaching a wider audience. As discussed, language has always been a barrier for scammers, so anything that helps them with it is good for them and bad for us. They are also using AI chatbots to help convince people of whatever they’re trying to pitch.

AI-Generated Images

This may be a problem for a lot of us who can’t see pictures. While Picture Smart and other tools can assist us, those same tools can also be used for bad purposes, and images may not matter as much for this group. Images can be used on social media platforms to make an account look believable and avoid being flagged as malicious. By the time it’s found out, they’ve already gotten their targets.

Identification documents such as birth certificates, passports, and others can be faked easily enough to pass a human-eye inspection.

They can even alter photos to make it appear that people are doing things they otherwise never would.

One of the big things highlighted here is that they can fabricate photos of natural disasters, even where none has happened, to spark donation pleas of all kinds. While the JRN does have a donations page up, we’ve made it entirely optional, and of course we offer our downloads and services, like this article, for free. If you feel you get a benefit, please do donate.

I think the other big thing we’ve talked about from this section is sextortion, and how criminals can do what they want with a photo, i.e., feed it a photo, have the clothing removed from it, and demand payment or it’ll go all over the Internet because it’s your child.

Other headings include:

  • AI-Generated Audio, aka Vocal Cloning
  • AI-Generated Videos
  • Tips to Protect Yourself

Some of the tips, like creating secret words to verify communication, may be key, depending on your situation and who you’re dealing with. If it’s a clone of your daughter, maybe ask her to verify, say, her birthday. The delay in response you hear as the actor feeds that into the AI clone would be enough to tip you off; you can simply hang up afterward and go about your day. One of the items indicates that we should limit our presence online, and that section also says to make social media accounts private. I think it’s too late for me; I’ve been podcasting for way too long, and I’ve got many, many articles and podcasts out there.

We can verify a bank or place that is calling by hanging up and calling the number we have for customer service.

I definitely don’t share sensitive information with much of anyone, and I always ask whether it is absolutely necessary.

Finally, they indicate not to send money or gift cards to someone you have only met online. As I said, the JRN’s donations page is optional, and it’s all online. We’ll never call you to ask for money, and we won’t make up sob stories about why we need some amount of money. Please click through and read the press release. I wrote this article in my own words, using the advisory as a guide.

Great tips here and great thoughts, Brian. Anyone else have any thoughts? Call (888) 405-7524 or (818) 527-4754. If I’m available, I can notate what you have to say as part of a Things to Ponder segment, and/or I can send you to voicemail where you can record your thoughts.

Thanks so much for reading and learning with us! Here’s the link again to the press release on AI being used to assist scammers: https://www.ic3.gov/PSA/2024/PSA241203

Make it a great day!


Discover more from The Technology blog and podcast
