Take the figure caption above. Pressing shift+enter will give you a full description of the picture from ChatGPT and Claude; that description is also in this article. We access Picture Smart with the keystroke ins+space, p. You can get a list of commands by pressing question mark for help. This aspect is good for us: if my memory of the tool is correct, we can get a generalized description by pressing a lowercase letter, or full details by capitalizing the letter.
But we know from reading tons of articles that this is going to be abused by actors feeding it something and asking it to do something else with it.
I love the idea of Picture Smart. It has opened my eyes in many different articles where images are used to illustrate how things are laid out, and we put those descriptions in articles like this so people can learn.
Thanks to the HTML element I learned about, called “figcaption,” I can caption that image with these tools and we can go about our way.
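For anyone curious what that markup looks like, here is a minimal sketch of the figure/figcaption pattern; the filename and caption text are just placeholders, not the actual files used on this blog:

```html
<figure>
  <img src="saturn.jpg" alt="A planet with rings around it.">
  <figcaption>
    A planet with rings around it, as described by Picture Smart.
  </figcaption>
</figure>
```

Screen readers announce the figcaption along with its image, which is why this element works so well for the kinds of descriptions we include in articles like this one.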
But abusing tools by making malware evasive, or asking a tool to create malware that has never been seen by services like VirusTotal, should never be allowed, even by people like you and me.
But Google’s tools aren’t the only ones being abused. OpenAI’s tools are also being abused, and that is discussed in the article.
The article lists four things I want to highlight; they may or may not be surprising to people coming across this.
The article in part says:
Google says APTs from Iran, China, North Korea, and Russia, have all experimented with Gemini, exploring the tool’s potential in helping them discover security gaps, evade detection, and plan their post-compromise activities. These are summarized as follows:
- Iranian threat actors were the heaviest users of Gemini, leveraging it for a wide range of activities, including reconnaissance on defense organizations and international experts, research into publicly known vulnerabilities, development of phishing campaigns, and content creation for influence operations. They also used Gemini for translation and technical explanations related to cybersecurity and military technologies, including unmanned aerial vehicles (UAVs) and missile defense systems.
- China-backed threat actors primarily utilized Gemini for reconnaissance on U.S. military and government organizations, vulnerability research, scripting for lateral movement and privilege escalation, and post-compromise activities such as evading detection and maintaining persistence in networks. They also explored ways to access Microsoft Exchange using password hashes and reverse-engineer security tools like Carbon Black EDR.
- North Korean APTs used Gemini to support multiple phases of the attack lifecycle, including researching free hosting providers, conducting reconnaissance on target organizations, and assisting with malware development and evasion techniques. A significant portion of their activity focused on North Korea’s clandestine IT worker scheme, using Gemini to draft job applications, cover letters, and proposals to secure employment at Western companies under false identities.
- Russian threat actors had minimal engagement with Gemini, most usage being focused on scripting assistance, translation, and payload crafting. Their activity included rewriting publicly available malware into different programming languages, adding encryption functionality to malicious code, and understanding how specific pieces of public malware function. The limited use may indicate that Russian actors prefer AI models developed within Russia or are avoiding Western AI platforms for operational security reasons.
Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini, or to rephrase their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.
This means that Google is starting to actually care, although, as we mentioned in a prior blog post about the 2.3 billion attacks article, I highly doubt the numbers. Maybe it’s millions, but whether it’s millions or billions, I still don’t believe it.
Remember I said above that OpenAI was also targeted? That paragraph says:
OpenAI, the creator of the popular AI chatbot ChatGPT, made a similar disclosure in October 2024, so Google’s latest report comes as a confirmation of the large-scale misuse of generative AI tools by threat actors of all levels.
So this confirms that all of these kinds of tools can be used for bad, and I bet there’s nothing in their terms of service saying that if you’re found doing bad things, you’ll lose your access. Maybe it’s time.
While jailbreaks and security bypasses are a concern in mainstream AI products, the AI market is gradually filling with AI models that lack the proper protections to prevent abuse. Unfortunately, some of them, with restrictions that are trivial to bypass, are also enjoying increased popularity.
Take DeepSeek. Kim Komando even said to use it on a totally different device, with nothing else of value on it. The final two paragraphs of this article state:
Cybersecurity intelligence firm KELA has recently published the details about the lax security measures for DeepSeek R1 and Alibaba’s Qwen 2.5, which are vulnerable to prompt injection attacks that could streamline malicious use.
Unit 42 researchers also demonstrated effective jailbreaking techniques against DeepSeek R1 and V3, showing that the models are easy to abuse for nefarious purposes.
Don’t forget that we covered an AI tool that can spy on you.
And if you thought that was bad, your browser habits and other things can now drive the price of items you may buy.
I’m sure there are other blog posts similar to this discussion, but I’ll leave you to find those.
The full article from Bleeping Computer is titled Google says hackers abuse Gemini AI to empower their attacks if you would like to read it. This is definitely going to be an interesting time, and we’ll be along for the ride.
Just to show you how Picture Smart worked before, there is a keystroke that allows you to use it in legacy mode, without the AI tools.
For the image on the now-linked article, legacy mode says:
- Caption: a planet with rings around it.
- This tag describes the photo: screenshot.
- This tag possibly describes the photo: light.
- This tag vaguely describes the photo: night.
Decide what works best for you, as we want you to make your own choice. Catch you all soon!