Galvanick Banks $10 Million for Industrial XDR Technology
Los Angeles startup Galvanick scores $10 million in seed capital to build a modern industrial detection and response platform. (SecurityWeek)
Researchers Link DragonEgg Android Spyware to LightSpy iOS Surveillanceware
New findings have identified connections between an Android spyware called DragonEgg and another sophisticated modular iOS surveillanceware tool named LightSpy.
DragonEgg, alongside WyrmSpy (aka AndroidControl), was first disclosed by Lookout in July 2023 as a strain of malware capable of gathering sensitive data from Android devices. It was attributed to the Chinese nation-state group APT41.
“Nudify” deepfake bots remove clothes from victims in minutes, and millions are using them
Millions of people are turning normal pictures into nude images, and it can be done in minutes.
Journalists at Wired found at least 50 “nudify” bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. There is no reliable way to tell how many of those users are unique, but the number is appalling, and there are very likely many more bots than the ones Wired found.
The history of nonconsensual intimate image (NCII) abuse, as the use of explicit deepfakes without consent is often called, started near the end of 2017, when Motherboard (now Vice) found an online video in which the face of Gal Gadot had been superimposed onto an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The technique took its name from the username of the person who claimed responsibility for the video: “deepfake.”
Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. Now, with the advancement of AI, more sophisticated methods like Generative Adversarial Networks (GANs) are available to the public.
However, most of the uncovered bots don’t use this advanced type of technology. Some of the bots on Telegram are “limited” to removing clothes from existing pictures, an extremely disturbing act for the victim.
These bots have become a lucrative source of income. Using such a Telegram bot usually requires a certain number of “tokens” to create images. Naturally, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or render only low-quality images.
Besides being disturbing, the use of AI to generate explicit content is costly, comes with no guarantee of privacy (as we saw the other day when AI Girlfriend was breached), and can even get you infected with malware.
The creation and distribution of explicit nonconsensual deepfakes raise serious ethical issues around consent, privacy, and the objectification of women, to say nothing of the creation of child sexual abuse material. Italian researchers found explicit nonconsensual deepfakes to be a new form of sexual violence, with potential long-term psychological and emotional impacts on victims.
To combat this type of sexual abuse, there have been several initiatives:
The US has proposed legislation in the form of the Deepfake Accountability Act. Combined with the recent policy change by Telegram to hand over user details to law enforcement in cases where users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.
Changes to platform policies (e.g., Google banned involuntary synthetic pornographic footage from its search results).
However, so far these steps have shown no significant impact on the growth of the market for NCIIs.
We’re sometimes asked why it’s a problem to post pictures on social media that can be harvested to train AI models.
We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.
Deepfakes: AI generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or defraud people you know.
Metadata: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer never intended (see the sketch after this list for how easily it can be read).
Intellectual property: Never upload anything you didn’t create or own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
Facial recognition: Although facial recognition is not the hot topic it once was, it still exists, and actions or statements tied to images of you (real or not) may be linked to your persona.
Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.
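To illustrate the metadata point above: here is a minimal sketch, assuming the Pillow library is installed and using a hypothetical local file photo.jpg, that reads an image’s EXIF tags, including any embedded GPS coordinates.

```python
from PIL import ExifTags, Image  # pip install Pillow

# Open a local image; "photo.jpg" is a placeholder path.
img = Image.open("photo.jpg")
exif = img.getexif()

# Print the standard EXIF tags by name (camera model, timestamp, etc.).
for tag_id, value in exif.items():
    print(ExifTags.TAGS.get(tag_id, tag_id), value)

# GPS data lives in its own sub-directory; 0x8825 is the standard GPSInfo tag.
gps = exif.get_ifd(0x8825)
for tag_id, value in gps.items():
    print(ExifTags.GPSTAGS.get(tag_id, tag_id), value)
```

Anyone who can download the image can do the same. Many platforms strip EXIF data on upload, but copies shared through other channels often keep it.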
If you want to continue using social media platforms, that is obviously your choice, but consider the above when uploading pictures of yourself, your loved ones, or even complete strangers.
Social Engineering Paves the Way for the XZ Cyber Incident
The XZ cyber incident is a textbook example of how sophisticated social engineering tactics can lead to significant security breaches.
Over the course of two years, a carefully planned attack was executed against the popular XZ Utils open-source project.
The attackers went to great lengths to ensure their plan was executed flawlessly, ultimately culminating in the successful insertion of a backdoor in early 2024. This breach had far-reaching consequences that affected countless project users.
The attackers, believed to be using fake identities, worked on a long-term infiltration strategy for the XZ Utils project.
One of the central figures in this operation was Jia Cheong Tan (JiaT75), a likely pseudonymous entity who played a pivotal role in executing the attack.
Kaspersky recently released an in-depth analysis of the incident, which was executed primarily through social engineering techniques.
The report provides comprehensive details and insights into the incident, shedding light on the intricacies and nuances of social engineering as an attack vector.
The social engineering aspect of this incident was not only elaborate but also highlighted a significant vulnerability in the trust-based model of open-source projects.
The initial phase of the attack involved benign contributions to the project, which served dual purposes: to mask the attackers’ malicious intentions and to build a reputation within the community as trustworthy developers.
Security researcher Alden from Huntress analyzed Jia Tan’s commit history over time.
Interesting note on the #xz backdoor:
If you plot Jai Tan’s commit history over time, the cluster of offending commits occurs at an unusual time compared to rest of their activity.
If the dev was pwned, it could be a sign that the threat actor contributed in their own timezone pic.twitter.com/CrFBcdIAni
— alden (@birchb0y) March 30, 2024
The plot indicates that the cluster of offending commits happened at unusual times.
Between February 23-26 and March 8-9, 2024, JiaT75 committed the malicious code at times that did not match their earlier working pattern.
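A rough sketch of this kind of timing analysis, assuming a local clone of the xz repository and the author name as it appears in the project’s commit history; only Python’s standard library is used:

```python
import subprocess
from collections import Counter

# Pull ISO-8601 author dates for every commit by the suspect account.
# %aI preserves the author's own UTC offset, which is the interesting part.
dates = subprocess.run(
    ["git", "log", "--author=Jia Tan", "--pretty=%aI"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# "2024-03-09T10:12:33+08:00" -> local hour of day and recorded UTC offset.
hour_histogram = Counter(d[11:13] for d in dates)
offset_histogram = Counter(d[19:] for d in dates)

print("commits per local hour:", sorted(hour_histogram.items()))
print("commits per UTC offset:", offset_histogram.most_common())
```

Commits that cluster at hours or UTC offsets inconsistent with an author’s established pattern are exactly the kind of anomaly the plot above surfaced.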
It is suspected that a second party used the JiaT75 account to insert the malicious code, but it is unclear whether the contributor was aware of this.
One explanation is that the individual behind the JiaT75 account was under pressure to commit the backdoor code quickly. Another is that a team managed the JiaT75 account, and one member had to work beyond the usual hours to finish without interruptions.
As these contributions continued, the attackers engaged in strategic social interactions with key community members, gradually ingratiating themselves within the community.
By leveraging the open-source community’s reliance on mutual trust and collaborative development, the attackers positioned themselves as integral contributors to the XZ Utils project.
Over time, they expanded their roles within the project, advocating for additional maintainer roles under the pretext of enhancing the project.
This strategic placement allowed them unfettered access to the project’s codebase, setting the stage for the next phase of their plan.
In early 2024, the attackers executed the final phase of their strategy by inserting malicious code into the XZ Utils build process.
This code was designed to implement a backdoor, reserved for the attackers’ exclusive use, in sshd, a critical component of many Linux distributions.
The backdoor code was pushed to major Linux distributions as part of a large-scale supply chain attack, aiming to compromise millions of systems globally.
The subtlety of the insertion, which hid the malicious code in plain sight within the build process, was a testament to the attackers’ technical acumen and deep understanding of the open-source development ecosystem.
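Because the affected releases are known (the issue was assigned CVE-2024-3094 and shipped in xz/liblzma versions 5.6.0 and 5.6.1), a minimal sketch for checking a locally installed xz binary might look like this:

```python
import re
import subprocess

# xz/liblzma releases known to contain the backdoor (CVE-2024-3094).
BACKDOORED = {"5.6.0", "5.6.1"}

out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
match = re.search(r"(\d+\.\d+\.\d+)", out)
version = match.group(1) if match else "unknown"

if version in BACKDOORED:
    print(f"WARNING: xz {version} is a known backdoored release")
else:
    print(f"xz {version} is not one of the known backdoored releases")
```

Distribution security advisories remain the authoritative source for whether a particular package build was affected.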
The social engineering tactics employed were not just about deceiving individuals; they were about exploiting the dynamics of community trust and collaboration, which are foundational to open-source projects.
Thanks to the vigilance of Andres Freund, a developer at Microsoft, this backdoor was discovered, preventing what could have been one of the most significant security breaches in recent history.
Freund’s investigation began when he noticed unusual behavior in the SSH daemon, which led him to uncover the backdoor embedded within XZ Utils.
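Freund’s starting point was a performance anomaly rather than a signature: SSH logins were taking roughly half a second longer than expected. A crude sketch in the same spirit, timing repeated SSH connection attempts against a host you control (the hostname and options here are placeholders):

```python
import subprocess
import time

# Time repeated SSH connection attempts; a consistent extra half-second,
# as Freund observed, can hint that sshd is doing unexpected work.
samples = []
for _ in range(10):
    start = time.perf_counter()
    subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
         "nosuchuser@localhost", "exit"],
        capture_output=True,
    )
    samples.append(time.perf_counter() - start)

print(f"mean attempt time: {sum(samples) / len(samples):.3f}s")
```

Freund went further, measuring sshd’s CPU time and noticing valgrind errors in liblzma, but the general lesson holds: unexplained latency is worth investigating.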
As the cybersecurity community continues to analyze and learn from the XZ incident, it is clear that the battle against cyber threats is not just about technological defenses but also about understanding and mitigating the human and social factors that can often be the weakest links in security.