A new information stealer malware named Mystic Stealer is gaining traction among cybercriminals on prominent underground forums.
The post New Information Stealer ‘Mystic Stealer’ Rising to Fame appeared first on SecurityWeek.
Chinese Hackers Silently Weaponized VMware Zero-Day Flaw for 2 Years
An advanced China-nexus cyber espionage group, previously tied to the exploitation of security flaws in VMware and Fortinet appliances, has now been linked to the abuse of a critical vulnerability in VMware vCenter Server as a zero-day since late 2021.
"UNC3886 has a track record of utilizing zero-day vulnerabilities to complete their mission without being detected, and this latest example further…"
The Hacker News | #1 Trusted Cybersecurity News Site
Pakistan-linked Malware Campaign Evolves to Target Windows, Android, and macOS
Threat actors with ties to Pakistan have been linked to a long-running malware campaign, dubbed Operation Celestial Force, active since at least 2018.
According to Cisco Talos, the still-ongoing activity entails the use of an Android malware called GravityRAT and a Windows-based malware loader codenamed HeavyLift, both administered through another standalone tool referred to as GravityAdmin.
Hackers Using ChatGPT to Generate Malware & Social Engineering Threats
Large language models (LLMs) and generative AI are advancing rapidly worldwide, offering great utility but also raising misuse concerns.
The rapid maturation of generative AI will significantly transform the future cybersecurity threat landscape. Alongside its potential for misuse, however, it is also important to appreciate the value of generative AI in legitimate applications.
Cybersecurity researchers on Avast's Threat Intelligence Team recently reported that hackers are actively abusing ChatGPT to generate malware and social engineering threats.
In recent times, AI-driven scams have been on the rise, making it easier for threat actors to craft convincing lures such as:
Emails
Social scams
E-shop reviews
SMS scams
Lottery scam emails
These rising threats leverage advanced technology and are reshaping the battlefield, mirroring earlier waves of abuse that exploited trending topics such as:
Cryptocurrencies
COVID-19
Ukraine conflict
ChatGPT attracts hackers more for its name recognition than for its underlying AI capabilities, making the brand itself ripe for exploitation in their campaigns.
Currently, ChatGPT is not an all-in-one tool for advanced phishing attacks; attackers still rely on templates, kits, and manual work to make their attempts convincing. Tooling that orchestrates multiple models and data sources, such as LlamaIndex, could enhance future phishing and scam campaigns with more varied content.
Below are the TTPs and mediums the threat actors use to abuse ChatGPT's brand:
Malvertising
YouTube scams
Typosquatting
Browser Extensions
Installers
Cracks
Fake updates
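Typosquatting, one of the mediums above, relies on domains that differ from a legitimate brand by only a character or two. A minimal sketch of how such domains can be flagged is an edit-distance check against the brand name; the domains below are hypothetical examples, not observed indicators.

```python
# Minimal sketch: flag domains that sit within a small edit distance of a
# trusted brand name -- a common heuristic for spotting typosquats such as
# fake "chatgpt" download sites. All domain names here are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_typosquatted(domain: str, brand: str = "chatgpt",
                       threshold: int = 2) -> bool:
    """Flag second-level domains close to, but not equal to, the brand."""
    sld = domain.split(".")[0].lower()
    return sld != brand and edit_distance(sld, brand) <= threshold

suspects = ["chatgtp.com", "chat-gpt.app", "openai.com", "chatgpt.com"]
print([d for d in suspects if looks_typosquatted(d)])
# → ['chatgtp.com', 'chat-gpt.app']
```

Real typosquat detection also weighs homoglyphs and added keywords ("chatgpt-download"), but the edit-distance heuristic illustrates the core idea.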
LLMs simplify the generation of malicious code, but some expertise is still needed; producing specialized malware that evades security measures remains a complicated process.
Crafting malicious LLM prompts demands precision and technical expertise, and restrictions on prompt length and built-in security filters limit the complexity of what can be generated.
AI technology has also transformed spam tactics, with careless spambots revealing themselves by posting ChatGPT's error messages verbatim.
Notably, spambots now exploit user reviews by copying ChatGPT responses, aiming to boost feedback and product ratings deceptively.
This highlights the need for vigilance in digital interactions as manipulated reviews mislead consumers into purchasing lower-quality products.
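Those verbatim LLM refusal phrases are themselves a usable detection signal. The sketch below, with an illustrative phrase list and made-up reviews (not from any real dataset), shows the simple substring check a review platform might apply:

```python
# Minimal sketch: flag product reviews containing tell-tale LLM boilerplate,
# the same giveaway that exposed the spambots described above. The phrase
# list and sample reviews are illustrative only.

LLM_TELLS = [
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but i can't",
]

def looks_machine_generated(review: str) -> bool:
    """Return True if the review contains a known LLM boilerplate phrase."""
    text = review.lower()
    return any(phrase in text for phrase in LLM_TELLS)

reviews = [
    "Great kettle, boils fast and feels sturdy.",
    "As an AI language model, I cannot provide a personal opinion, "
    "but this product has excellent reviews!",
]
print([looks_machine_generated(r) for r in reviews])  # → [False, True]
```

Spambots that strip the boilerplate would evade this check, so platforms would pair it with behavioral signals (posting rate, account age) in practice.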
Bad actors can circumvent ChatGPT's filters, but doing so is time-consuming; they often fall back on traditional search engines or "educational-use-only" malware available on GitHub.
AI-powered deepfakes also pose significant threats, fabricating convincing videos that damage reputations, public trust, and even personal security.
On the defensive side, security analysts can employ ChatGPT to generate detection rules or clarify existing ones, helping both beginners and experienced analysts work with pattern-detection tools such as:
YARA
Suricata
Sigma
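As a sketch of that rule-generation workflow, an analyst might assemble a structured prompt from known indicator strings and hand it to an LLM for a first-draft YARA rule. Only the prompt construction is shown here; the model call is out of scope, and the rule name and IOC strings are hypothetical.

```python
# Minimal sketch: build a prompt an analyst might send to an LLM to draft a
# first-pass YARA rule from known IOC strings. The rule name and indicator
# strings below are hypothetical examples, not real indicators.

def build_yara_prompt(rule_name: str, ioc_strings: list[str]) -> str:
    """Assemble a rule-drafting prompt from a list of indicator strings."""
    iocs = "\n".join(f'- "{s}"' for s in ioc_strings)
    return (
        f"Write a YARA rule named {rule_name} that matches a PE file "
        f"containing ALL of the following strings:\n{iocs}\n"
        "Add a meta section with author and description fields, and "
        "explain the condition in a comment."
    )

prompt = build_yara_prompt("fake_update_loader",      # hypothetical rule name
                           ["EvilLoader", "cfg_paste"])
print(prompt)
```

Any LLM-drafted rule would still need to be validated against real samples and benign files before deployment, since generated conditions are frequently too broad or too narrow.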
Several projects now integrate LLM-based AI assistants, boosting productivity across tasks ranging from office work to deeply technical analysis.
AI assistants aid malware analysts by simplifying assembly comprehension, the analysis of disassembled code, and debugging, streamlining reverse engineering efforts.
Below are some known AI-based assistant tools:
Gepetto for IDA Pro
VulChatGPT
Windbg Copilot
GitHub Copilot
Microsoft Security Copilot
PentestGPT
BurpGPT
Below are the recommendations offered by the Avast security researchers:
Be wary of offers that seem too good to be true.
Make sure to verify the publisher and reviews.
Understand the product you intend to buy.
Don’t use cracked software.
Report suspicious activity.
Update your software regularly.
Trust your cybersecurity provider.
Self-education is crucial.
The post Hackers Using ChatGPT to Generate Malware & Social Engineering Threats appeared first on Cyber Security News.