A program designed to equip women and underrepresented individuals with the skills and knowledge needed to succeed in cybersecurity.
Related Posts
Two People Arrested in Australia and US for Development and Sale of Hive RAT
Authorities in Australia and the US have arrested and charged two individuals for developing and selling the Hive RAT.
SecurityWeek RSS Feed
Eric Tillman: A creative way into cyber. [Intelligence]
Eric Tillman, Chief Intelligence Officer at N2K Networks, sits down and shares his incredibly creative journey. Eric loved being creative from a young age, and when he started to think about a career, he wanted to combine that creativity with his love of tech and turn it into an intelligence career. He began by joining the Navy, which set him on the path to working in cyber, where he shared his talents with several big companies, including Booz Allen Hamilton, Lockheed Martin, and Okta, before eventually ending up at our very own N2K Networks. Eric's advice is that there is something for everyone in this field: even though he wanted to start his journey in a creative way, he found that combining his love for tech and art paved the way to where he is now. He says, "A lot of people get here from a very technical background and, um, it really almost doesn't matter, um, where you came from. There is something in cybersecurity that takes advantage of the skills that you bring to the table and, um, either way, there's plenty of room here for everyone." We thank Eric for sharing his story with us.
The CyberWire
Cybercriminals are Showing Hesitation to Utilize AI When Executing Cyber Attacks
Media reports highlight the sale of LLMs like WormGPT and FraudGPT on underground forums. Fears mount over their potential for creating mutating malware, fueling a craze in the cybercriminal underground.
Concerns arise over the dual-use nature of LLMs, with tools like WormGPT raising alarms.
The shutdown of WormGPT adds uncertainty, leaving questions about how threat actors view and use such tools beyond publicly reported incidents.
Cybercriminals are Showing Hesitation
AI isn’t a hot topic on the forums Sophos researchers examined, with fewer than 100 posts on two forums compared to almost 1,000 posts about cryptocurrencies.
Possible reasons include AI's perceived immaturity and its lower speculative value for threat actors compared with established technologies such as cryptocurrencies.
There’s been a lot of speculation about how threat actors might weaponize AI, given the hype around ChatGPT and other LLMs – especially with developments like WormGPT. But what do the criminals themselves think?
— Sophos X-Ops (@SophosXOps) November 28, 2023
LLM-related forum posts focus heavily on jailbreaks, tricks for bypassing the models' built-in safeguards. What is concerning is that these jailbreaks are also shared publicly across the wider internet on various platforms.
Despite threat actors’ skills, there’s little evidence of them developing novel jailbreaks.
Many LLM-related posts on Breach Forums involve compromised ChatGPT accounts for sale, reflecting a trend of threat actors seizing opportunities on new platforms.
The target audience and the buyers' intentions remain unclear. During the same research, the researchers also observed eight other models offered as a service or shared on forums.
Among the models observed were:
BlackHatGPT
HackBot
PentesterGPT
PrivateGPT
Exploit forums show AI-related aspirational discussions, while lower-end forums focus on hands-on experiments. Skilled threat actors lean towards future applications, while less skilled actors aim for current use despite limitations.
Besides this, researchers also observed users generating code with the help of AI to build the following types of illicit tools:
RATs
Keyloggers
Infostealers
Some users explore questionable non-malware applications of ChatGPT, such as social engineering.
Skilled users on Hackforums leverage LLMs for coding tasks, while less skilled ‘script kiddies’ aim for malware generation.
Operational security errors are evident, such as one user on XSS openly discussing a malware distribution campaign using ChatGPT for a celebrity selfie image lure.
At the same time, on platforms like Exploit, some users voice operational security concerns about using LLMs for cybercrime at all.
Some users on Breach Forums suggest developing private LLMs for offline use, while philosophical discussions about AI's ethical implications reveal a divide among threat actors.
Cyber Security News