LockBit, Babuk, and Hive ransomware used by Russian national to target critical US organizations, DOJ says.
Related Posts
Facebook scrapes photos of kids from Australian user profiles to train its AI
Facebook has admitted that it scrapes the public photos, posts and other data from the accounts of Australian adult users to train its AI models. Unlike citizens of the European Union (EU), Australians are not offered an opt-out option to refuse consent.
At an inquiry into whether the social media giant was hoovering up the data of all Australians in order to build its generative artificial intelligence tools, Senator Tony Sheldon asked whether Meta (Facebook’s owner) had used Australian posts from as far back as 2007 to feed its AI products.
At first, Meta’s global privacy director Melinda Claybaugh denied this, but Senator David Shoebridge challenged her claim.
“The truth of the matter is that unless you have consciously set those posts to private since 2007, Meta has just decided that you will scrape all of the photos and all of the texts from every public post on Instagram or Facebook since 2007, unless there was a conscious decision to set them on private. That’s the reality, isn’t it?”
Claybaugh said yes, but she added that accounts of people under 18 were not scraped. However, when Senator Sheldon asked Claybaugh whether public photos of his children on his own account would be scraped, Claybaugh acknowledged they would.
A question about whether the company had scraped older data from users who are now adults but were under 18 when they created their accounts went unanswered.
It is not new that Meta uses public Facebook and Instagram posts to train its AI, and Meta is not the only social media platform that does this. European privacy watchdogs have also accused X of unlawfully using the personal data of more than 60 million users to train its AI tool Grok.
In June, Ireland’s Data Protection Commission (DPC) reached an agreement with Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU. This decision followed intensive engagement between the DPC and Meta.
Australia recently revealed plans to set a minimum age limit for children to use social media, citing concerns around mental and physical health.
Prime Minister Anthony Albanese said his government would run an age verification trial before introducing minimum age laws for social media this year. The Prime Minister didn’t specify an age, but said it would likely be between 14 and 16.
The reasoning behind the age limit had nothing to do with data scraping. He stated:
“I want to see kids off their devices and onto the footy fields and the swimming pools and the tennis courts. … We want them to have real experiences with real people because we know that social media is causing social harm.”
Nevertheless, the scraping could be a factor when the final decision about the age limit is made.
What to do
Wherever you are in the world, we encourage you to think carefully about sharing photos of your kids online. Of course it’s lovely to post their photos for friends and family to see, but once something is posted online you lose control over where that image ends up and who has access to it.
If you really do want to share photos, lock your profile down as much as possible so that your photos aren’t visible to just anyone.
If you’re an adult and worried about image scraping, check the terms and conditions of your accounts to see whether you can opt out. If there’s no option, carefully consider whether you want to post to that service at all.
Using ChatGPT to cheat on assignments? New tool detects AI-generated text with amazing accuracy
ChatGPT and similar large language models (LLMs) can be used to write texts about any given subject, at any desired length, at a speed unmatched by humans.
So it’s not a surprise that students have been using them to “help” write assignments, much to the dismay of teachers who prefer to receive original work from actual humans.
In fact, in Malwarebytes’ recent research survey, “Everyone’s afraid of the internet and no one’s sure what to do about it,” we found that 40% of people had used ChatGPT or similar to help complete assignments, while 1 in 5 admitted to using it to cheat on a school assignment.
It’s becoming really hard to tell whether a text was written by an actual person or by a tool like ChatGPT, which has led to students being falsely accused of using one. At the same time, students who do use those tools shouldn’t receive grades they don’t deserve.
Worse still could be an influx of so-called scientific articles that either add nothing new or bring “hallucinations” to the table, where LLMs make up “facts” that are untrue.
Several programs that can filter out artificial intelligence (AI) texts have been created and tests are ongoing, but the success rate of these mostly AI-based tools hasn’t been great.
Many have found the existing detection tools not very effective, especially for professional academic writing, and these tools show a bias against non-native speakers: in one test, seven common web-based AI detection tools all flagged non-native English writers’ work as AI-generated more frequently than native English speakers’ writing.
But now it seems as if chemistry scientists have found an important building block in creating more effective detection tools. In a paper titled “Accurately detecting AI text when ChatGPT is told to write like a chemist” they describe how they developed and tested an accurate AI text detector for scientific journals.
Using machine learning (ML), the detector examines 20 features of writing style, including variation in sentence lengths, the frequency of certain words, and the use of punctuation marks, to determine whether an academic scientist or ChatGPT wrote the examined text.
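To make that concrete, here is a minimal, hypothetical sketch of how such a stylometric detector could be put together in Python. The specific features, the scikit-learn classifier, and the function names are illustrative assumptions; the paper’s actual 20 features and model are not reproduced here.

```python
# Illustrative sketch only, not the authors' code: turn a text into simple
# writing-style features, then let an off-the-shelf classifier separate
# human-written introductions from ChatGPT-written ones.
import re
import statistics

from sklearn.ensemble import GradientBoostingClassifier


def style_features(text: str) -> list[float]:
    """Extract a handful of the kinds of features the detector relies on."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences] or [0]
    words = text.lower().split()
    n_words = max(len(words), 1)
    return [
        statistics.pstdev(lengths),      # variation in sentence length
        statistics.mean(lengths),        # average sentence length
        text.count(",") / n_words,       # comma frequency
        text.count(";") / n_words,       # semicolon frequency
        text.count("(") / n_words,       # parenthesis frequency
        sum(w in {"however", "although", "moreover"} for w in words) / n_words,
    ]


def train_detector(texts: list[str], labels: list[int]) -> GradientBoostingClassifier:
    """labels: 1 = written by a human scientist, 0 = written by ChatGPT."""
    model = GradientBoostingClassifier()  # any standard classifier works for this sketch
    model.fit([style_features(t) for t in texts], labels)
    return model
```

Classifying a new introduction with a model like this is then just a matter of calling model.predict([style_features(text)]).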
To test the accuracy of the detector, the scientists tested it against 200 introductions in American Chemical Society (ACS) journal style. For 100 of these, the tool was provided with the papers’ titles, and for the other 100, it was given their abstracts.
The results were impressive. The detector outperformed the online tools provided by ZeroGPT and OpenAI, identifying sections written by ChatGPT-3.5 and ChatGPT-4 based on titles with 100% accuracy. For the ChatGPT-generated introductions based on abstracts, the accuracy was slightly lower, at 98%.
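As a rough illustration of how such numbers could be computed, the sketch below scores a trained detector (like the hypothetical one above, reusing its style_features function) on human-written introductions and on the two ChatGPT-generated sets. The evaluate function and the p1_texts/p2_texts names are made up for this example, not taken from the paper.

```python
# Minimal scoring sketch (not the authors' code): report accuracy on the two
# ChatGPT-generated sets and the false-positive rate on human-written texts.
def evaluate(model, human_texts, p1_texts, p2_texts):
    def predict(texts):
        return model.predict([style_features(t) for t in texts])

    human_preds = predict(human_texts)  # 1 = judged human, 0 = judged AI
    false_positive_rate = sum(p == 0 for p in human_preds) / len(human_preds)
    p1_accuracy = sum(p == 0 for p in predict(p1_texts)) / len(p1_texts)  # generated from titles
    p2_accuracy = sum(p == 0 for p in predict(p2_texts)) / len(p2_texts)  # generated from abstracts
    return false_positive_rate, p1_accuracy, p2_accuracy
```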
[Graph, courtesy of ScienceDirect: accuracy of three detectors against texts written by humans (to determine the number of false positives), ChatGPT-3.5, and ChatGPT-4. P1 denotes the texts generated from titles and P2 those generated from abstracts.]
What’s important about this research is that it shows that specialized tools can achieve a much better detection rate. That suggests efforts to develop AI detectors could get a significant boost from tailoring the software to specific types of writing.
Once we learn how to quickly and easily build such a specialized tool, we can soon expand the number of areas for which we have specialized detectors. According to one of the researchers, the findings show that “you could use a small set of features to get a high level of accuracy.”
To put this into perspective, the detector was built as a part-time project by a few people in approximately one month. The scientists designed it prior to the release of ChatGPT-4, yet it works just as effectively on the newer model’s output, so it’s unlikely that future versions would change the text in a way that significantly reduces the detector’s accuracy.
U.S. Sentences 31-Year-Old to 10 Years for Laundering $4.5M in Email Scams
The U.S. Department of Justice (DoJ) has sentenced a 31-year-old to 10 years in prison for laundering more than $4.5 million through business email compromise (BEC) schemes and romance scams.
Malachi Mullings, 31, of Sandy Springs, Georgia, pleaded guilty to the money laundering offenses in January 2023.
According to court documents, Mullings is said to have opened 20 bank accounts in the name of …