New options allow paid Zoom customers to specify certain data for meetings, webinars, and team chat to be stored within the EEA.
The post Zoom Expands Privacy Options for European Customers appeared first on SecurityWeek.
Atlassian Confluence Hit by New Actively Exploited Zero-Day – Patch Now
Atlassian has released fixes to contain an actively exploited critical zero-day flaw impacting publicly accessible Confluence Data Center and Server instances.
The vulnerability, tracked as CVE-2023-22515, is remotely exploitable and allows external attackers to create unauthorized Confluence administrator accounts and access Confluence servers.
It does not impact Confluence versions prior to … (Read More)
The Hacker News | #1 Trusted Cybersecurity News Site
UK Researchers Find AI Chatbots Highly Vulnerable to Jailbreaks
Researchers at the UK's AI Safety Institute (AISI) have recently discovered substantial vulnerabilities in popular AI chatbots, indicating that these systems are highly susceptible to "jailbreak" attacks.
The findings, published in AISI’s May update, highlight the potential risks advanced AI systems pose when exploited for malicious purposes.
The study evaluated five large language models (LLMs) from major AI labs, anonymized as the Red, Purple, Green, Blue, and Yellow models.
These models, which are already in public use, were subjected to a series of tests to assess their compliance with harmful questions under attack conditions.
Compliance Rates of AI Models Under Attack
Figure 1 illustrates the compliance rates of the five models when subjected to jailbreak attacks. The Green model showed the highest compliance rate, answering up to 28% of harmful questions under attack conditions.
The researchers employed a variety of techniques to evaluate the models’ responses to over 600 private, expert-written questions. These questions were designed to test the models’ knowledge and skills in areas relevant to security, such as cyber-attacks, chemistry, and biology. The evaluation process included:
Task Prompts: Models were given specific questions or tasks to perform.
Scaffold Tools: For certain tasks, models had access to external tools, such as a Python interpreter, to write executable code.
Response Measurement: Responses were graded using both automated approaches and human evaluators.
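The evaluation loop described above can be sketched in a few lines. This is a hypothetical illustration, not AISI's actual harness: `query_model`, `is_compliant`, and `jailbreak_prefix` are stand-ins for a real model API, a grader (automated or human), and an attack technique.

```python
def evaluate(query_model, is_compliant, questions, jailbreak_prefix):
    """Return (plain, attacked) compliance rates over a question set.

    query_model:      callable taking a prompt string, returning a response
    is_compliant:     grader deciding whether a response answers the question
    jailbreak_prefix: attack text prepended to each question
    """
    # Ask each question once as-is and once with the jailbreak applied.
    plain = sum(1 for q in questions if is_compliant(query_model(q)))
    attacked = sum(
        1 for q in questions if is_compliant(query_model(jailbreak_prefix + q))
    )
    n = len(questions)
    return plain / n, attacked / n
```

Comparing the two rates per model is what produces figures like the 28% compliance under attack reported for the Green model.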
The study found that while the models generally provided correct and compliant information in the absence of attacks, their compliance rates with harmful questions increased significantly under attack conditions. This raises concerns about the potential misuse of AI systems in various harmful scenarios, including:
Cyber Attacks: AI models could be used to inform users about cyber security exploits or autonomously attack critical infrastructure.
Chemical and Biological Knowledge: Advanced AI could provide detailed information that could be used for both positive and harmful purposes in chemistry and biology.
Potential Risks of AI Misuse
Figure 2 outlines the potential risks associated with the misuse of AI systems, emphasizing the need for robust safety measures.
The AISI’s findings underscore the importance of continuous evaluation and improvement of AI safety protocols. The researchers recommend the following measures to mitigate the risks:
Enhanced Security Protocols: Implementing stricter security measures to prevent jailbreak attacks.
Regular Audits: Conducting periodic evaluations of AI systems to identify and address vulnerabilities.
Public Awareness: Educating users about the potential risks and safe usage of AI technologies.
As AI continues to evolve, ensuring the safety and security of these systems remains a critical priority. The AISI’s study serves as a crucial reminder of the ongoing challenges and the need for vigilance in the development and deployment of advanced AI technologies.
The post UK Researchers Find AI Chatbots Highly Vulnerable to Jailbreaks appeared first on Cyber Security News.
Linux Kernel Vulnerability (CVE-2024-26925) Let Hackers Access Unauthorized Data
[[{“value”:”
In a significant update from the Linux kernel’s security team, a critical vulnerability identified as CVE-2024-26925 has been addressed to bolster the security of systems worldwide.
The flaw was found in the netfilter subsystem, specifically within the nf_tables component, which is crucial for packet filtering and classification.
The vulnerability stemmed from improperly releasing a mutex within the garbage collection (GC) sequence of nf_tables.
Typically, the commit mutex should remain locked during the critical section between nft_gc_seq_begin() and nft_gc_seq_end() to prevent asynchronous GC workers from collecting expired objects and acquiring the released commit lock within the same GC sequence.
However, it was discovered that nf_tables_module_autoload() temporarily released the mutex to load module dependencies, then reacquired it to replay the transaction. This handling could lead to race conditions, jeopardizing the stability and security of the Linux kernel.
The issue was rectified by modifying the mutex release sequence. Now, the mutex release occurs at the end of the abort phase after nft_gc_seq_end() is called, ensuring that GC workers protect the critical section from concurrent access.
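The flawed and corrected orderings can be modeled with a toy sketch. This is not kernel code: `NftablesModel` and the function names below are illustrative stand-ins, using a Python lock and a counter to mimic the commit mutex and the GC sequence (odd while a critical section is open).

```python
import threading

class NftablesModel:
    """Toy stand-in for the nf_tables commit mutex and GC sequence counter."""
    def __init__(self):
        self.commit_mutex = threading.Lock()
        self.gc_seq = 0  # odd while an abort/commit critical section is open

def gc_worker_can_interleave(model):
    # Stand-in for the async GC worker: try to take the commit lock while
    # the GC sequence is still open (gc_seq is odd).
    if model.gc_seq % 2 == 1 and model.commit_mutex.acquire(blocking=False):
        model.commit_mutex.release()
        return True
    return False

def abort_buggy(model):
    """Pre-fix ordering: the mutex is dropped while the GC sequence is open."""
    model.commit_mutex.acquire()
    model.gc_seq += 1                        # nft_gc_seq_begin()
    model.commit_mutex.release()             # module autoload drops the mutex
    raced = gc_worker_can_interleave(model)  # worker sneaks into the sequence
    model.commit_mutex.acquire()             # reacquired to replay the transaction
    model.gc_seq += 1                        # nft_gc_seq_end()
    model.commit_mutex.release()
    return raced

def abort_fixed(model):
    """Post-fix ordering: the mutex is held until after nft_gc_seq_end()."""
    model.commit_mutex.acquire()
    model.gc_seq += 1                        # nft_gc_seq_begin()
    raced = gc_worker_can_interleave(model)  # worker is locked out
    model.gc_seq += 1                        # nft_gc_seq_end()
    model.commit_mutex.release()             # release deferred past the section
    return raced
```

In the buggy ordering the worker can grab the lock inside a still-open GC sequence; in the fixed ordering the release is deferred until after the sequence closes, so the window disappears.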
Greg Kroah-Hartman, a renowned kernel maintainer, committed this change to the Linux kernel source; the fix is tracked as CVE-2024-26925.
In the commit message, Greg Kroah-Hartman explained, “The commit mutex should not be released during the critical section between nft_gc_seq_begin() and nft_gc_seq_end(). Otherwise, the async GC worker could collect expired objects and get the released commit lock within the same GC sequence.”
The vulnerability could affect many systems, particularly those utilizing the nf_tables for network packet filtering.
By resolving this issue, the Linux kernel developers have prevented possible exploits that could lead to system crashes or unauthorized data access.
The Linux kernel CVE team strongly advises users to update to the latest stable kernel version, which includes this patch among other bug fixes. The team emphasizes that individual changes are not tested in isolation but as part of the entire kernel release.
Therefore, cherry-picking individual commits is discouraged and unsupported.
For the most current information regarding which kernel versions remain unaffected as fixes are backported, users are encouraged to consult the official CVE entry at CVE-2024-26925 on cve.org.
This proactive patching underscores the Linux community’s commitment to security and stability. Users and administrators are urged to apply the latest updates to safeguard their systems against potential threats stemming from this vulnerability.
The post Linux Kernel Vulnerability (CVE-2024-26925) Let Hackers Access Unauthorized Data appeared first on Cyber Security News.