A new information stealer malware named Mystic Stealer is gaining traction among cybercriminals on prominent underground forums.
The post New Information Stealer ‘Mystic Stealer’ Rising to Fame appeared first on SecurityWeek.
SecurityWeek RSS Feed
Earth Hundun’s Hackers Employ Waterbear And Deuterbear Tools For Advanced Cyber Attacks
Hackers always keep evolving their tools to stay ahead of defense systems and exploit new vulnerabilities.
Cybersecurity researchers at Trend Micro have reported a rise in cyberattacks by the Earth Hundun (BlackTech) cyberespionage group.
These attacks deploy the Waterbear malware family, which is known for its intricate anti-analysis capabilities and for loaders, downloaders, and communication protocols that its developers regularly revise.
The most recent variant, Deuterbear, uses more elaborate evasion strategies, warranting a detailed examination of this multifaceted malware arsenal, which is used for espionage, especially in the Asia-Pacific region.
Since 2009, Waterbear has gone through more than ten versions, with its developers continually reworking the infection process until a compromise succeeded; as a result, multiple versions of the malware often coexist among victims.
Notably, some Waterbear downloaders use internal IP addresses as their C&C servers, which suggests that the attackers know the target networks deeply and use multilayer jump servers to maintain stealthy persistence and control over compromised environments, according to the report.
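This internal-address detail is easy to check for when triaging extracted C&C configurations. The sketch below (the addresses are illustrative placeholders, not real indicators) uses Python's standard `ipaddress` module to flag configured C&C endpoints that fall in private, non-routable space:

```python
import ipaddress

# Hypothetical C&C addresses pulled from downloader configs;
# these values are illustrative, not real indicators.
candidates = ["10.0.12.7", "172.16.4.20", "8.8.8.8", "192.168.1.100"]

def is_internal(addr: str) -> bool:
    """True for RFC 1918 and other non-globally-routable addresses."""
    return ipaddress.ip_address(addr).is_private

internal = [a for a in candidates if is_internal(a)]
print(internal)  # private addresses hint at in-network jump servers
```

A private C&C address in a sample's configuration is a strong hint that the actor is pivoting through an internal jump server rather than calling out to the internet directly.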
These techniques, designed for evasion and longevity, reflect both the advanced nature of the attacks and the determination of the threat actors behind this constantly changing malware family.
Deuterbear is the latest Waterbear downloader variant; active since 2022, it represents a distinct malware entity separate from the original Waterbear downloader category.
This classification originates from significant updates to its decryption flow and configuration structure, marking a notable evolution in the malware’s capabilities.
Trend Micro's report also details the key differences between the Deuterbear downloader and the Waterbear downloader, most notably in their decryption flows and configuration structures.
The Earth Hundun group has been steadily evolving Waterbear since 2009, culminating in the more advanced variant known as Deuterbear.
With HTTPS-encrypted traffic, debugger and sandbox checks, a revised decryption flow, and updated protocols, Deuterbear represents the family's most sophisticated infection methods and anti-analysis mechanisms to date.
Despite defenders' efforts, Earth Hundun continues to penetrate targets across the Asia-Pacific region, and the ever-improving Waterbear poses considerable difficulties.
Indicators of Compromise (SHA-256):
e669aaf63552430c6b7c6bd158bcd1e7a11091c164eb034319e1188d43b5490c Trojan.Win64.WATERBEAR.ZTLC
0da9661ed1e73a58bd1005187ad9251bcdea317ca59565753d86ccf1e56927b8 Trojan.Win64.WATERBEAR.ZTLC.enc
ca0423851ee2aa3013fe74666a965c2312e42d040dbfff86595eb530be3e963f Trojan.Win64.WATERBEAR.ZTLA
6dcc3af7c67403eaae3d5af2f057f0bb553d56ec746ff4cb7c03311e34343ebd Trojan.Win64.WATERBEAR.ZTLC.enc
ab8d60e121d6f121c250208987beb6b53d4000bc861e60b093cf5c389e8e7162 Trojan.Win64.WATERBEAR.ZTLB
a569df3c46f3816d006a40046dae0eb1bc3f9f1d4d3799703070390e195f6dd4 Trojan.Win64.WATERBEAR.ZTLC.enc
e483cae34eb1e246c3dd4552b2e71614d4df53dc0bac06076442ffc7ac2e06b2 Trojan.Win64.WATERBEAR.ZTLB
c97e8075466cf91623b1caa1747a6c5ee38c2d0341e0a3a2fa8fcf5a2e6ad3a6 Trojan.Win64.WATERBEAR.ZTLB
6b9a14d4d9230e038ffd9e1f5fd0d3065ff0a78b52ab338644462864740c2241 Trojan.Win64.WATERBEAR.ZTLB.enc
d665aea7899ad317baf1b6e662f40a10d42045865f9eea1ab18993b50dd8942d Trojan.Win64.DEUTERBEAR.ZTLC
dc60d8b1eff66bfb91573c8f825695e27b0813a9891bd0541d9ff6a3ae7e8cf2 Trojan.Win64.DEUTERBEAR.ZTLC.enc
4540132def6dfa6d181cabf1e1689bede5ecfef6450b033fecb0aeb1fe1b3fe9 Trojan.Win64.DEUTERBEAR.ZTLC
8f26069b6b49391f245b8551aa42ca4814c52e7f52d0343916f5262557bf5c52 Trojan.Win64.DEUTERBEAR.ZTLC.enc
74efa0ce94f4285404108d3d19bf2ff64c7c3a1c85e9b59cf511b56f9d71dc05 Trojan.Win64.DEUTERBEAR.ZTLC
d6ac4f364b25365eb4a5636beffc836243743ecf7ef4ec391252119aed924cab Trojan.Win64.DEUTERBEAR.ZTLC.enc
C&C servers:
freeprous.bakhell[.]com:443
cloudflaread.quadrantbd[.]com:443
showgyella.quadrantbd[.]com:443
rscvmogt.taishanlaw[.]com:443
smartclouds.gelatosg[.]com:443
suitsvm003.rchitecture[.]org:443
cloudsrm.gelatosg[.]com:443
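The file hashes listed above can be matched against local files with a short script. This is a minimal sketch (only a subset of the report's SHA-256 IOCs is shown; real triage would use the full list and proper EDR tooling):

```python
import hashlib
from pathlib import Path

# Subset of the SHA-256 IOCs from the report; extend with the full set.
KNOWN_IOCS = {
    "e669aaf63552430c6b7c6bd158bcd1e7a11091c164eb034319e1188d43b5490c",  # WATERBEAR.ZTLC
    "d665aea7899ad317baf1b6e662f40a10d42045865f9eea1ab18993b50dd8942d",  # DEUTERBEAR.ZTLC
}

def sha256_of(path: Path) -> str:
    """Hash a file in 64 KiB chunks to avoid loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: str) -> list[Path]:
    """Return files under `directory` whose SHA-256 matches a known IOC."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_IOCS]
```

A hash match is high-confidence but brittle; the C&C domains above are a useful complement, since recompiled samples change hashes but often reuse infrastructure.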
The post Earth Hundun’s Hackers Employ Waterbear And Deuterbear Tools For Advanced Cyber Attacks appeared first on Cyber Security News.
“Nudify” deepfake bots remove clothes from victims in minutes, and millions are using them
Millions of people are turning normal pictures into nude images, and it can be done in minutes.
Journalists at Wired found at least 50 “nudify” bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. Although there is no reliable way to determine how many unique users there are, the figure is appalling, and it is highly likely that there are many more bots than the ones they found.
The history of nonconsensual intimate image (NCII) abuse—as the use of explicit deepfakes without consent is often called—started near the end of 2017. Motherboard (now Vice) found an online video in which the face of Gal Gadot had been superimposed on an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The term “deepfake” derives from the username of the person who claimed responsibility for that video.
Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. Now, with the advancement of AI, more sophisticated methods like Generative Adversarial Networks (GANs) are available to the public.
However, most of the uncovered bots don’t use this advanced type of technology. Some of the bots on Telegram are “limited” to removing clothes from existing pictures, an extremely disturbing act for the victim.
These bots have become a lucrative source of income: using such a Telegram bot usually requires a certain number of “tokens” to create images. Of course, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or render only low-quality images.
Besides being disturbing, the use of AI to generate explicit content is costly, there are no guarantees of privacy (as we saw the other day when AI Girlfriend was breached), and you can even end up infected with malware.
The creation and distribution of explicit nonconsensual deepfakes raise serious ethical issues around consent, privacy, and the objectification of women, to say nothing of the creation of child sexual abuse material. Italian scientists have identified explicit nonconsensual deepfakes as a new form of sexual violence, with potential long-term psychological and emotional impacts on victims.
To combat this type of sexual abuse there have been several initiatives:
The US has proposed legislation in the form of the Deepfake Accountability Act. Combined with the recent policy change by Telegram to hand over user details to law enforcement in cases where users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.
Some platforms have tightened their policies; Google, for example, banned involuntary synthetic pornographic footage from its search results.
However, so far these steps have shown no significant impact on the growth of the market for NCIIs.
We’re sometimes asked why it’s a problem to post pictures on social media that can be harvested to train AI models.
We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.
Deepfakes: AI-generated content, such as deepfakes, can be used to spread misinformation, damage your reputation, invade your privacy, or defraud people you know.
Metadata: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer didn’t intend.
Intellectual property: Never upload anything you didn’t create or own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
Facial recognition: Although facial recognition is not the hot topic it once was, it still exists, and actions or statements tied to images of you (real or not) may be linked to your persona.
Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.
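The metadata risk above is easy to demonstrate. Production tools such as exiftool or the Pillow library are the right choice for real images, but the following standard-library sketch shows the principle: a JPEG's Exif block lives in an APP1 segment that can simply be dropped before a photo is shared. This is a simplified parser for illustration only:

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (Exif) segments from a JPEG byte stream.

    Simplified sketch: handles straightforward segment layouts only;
    use a full imaging library for real files.
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(jpeg[:2])
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:  # start-of-scan: image data follows, copy the rest
            out += jpeg[i:]
            return bytes(out)
        (length,) = struct.unpack(">H", jpeg[i + 2:i + 4])
        segment = jpeg[i:i + 2 + length]
        # keep every segment except APP1 blocks that carry Exif data
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)
```

Uploading the stripped bytes instead of the original keeps location and camera details out of third-party hands and any future training set.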
If you want to continue using social media platforms, that is obviously your choice, but consider the above when uploading pictures of yourself, your loved ones, or even complete strangers.