Researchers have uncovered a new phishing email campaign in which threat actors use ChatGPT and Google Bard to craft sophisticated email attacks.
Threat actors have been leaning on artificial intelligence since ChatGPT was released in November 2022, and numerous reports describe attackers using AI tools against organizations.
Several AI-assisted attacks have been observed recently, and researchers have analyzed three main techniques threat actors use AI for:
Credential Phishing
Business Email Compromise (BEC)
Vendor Fraud
To combat such AI-based attacks, AI-powered email security platforms such as Trustifi protect business email by automatically disabling access to compromised accounts through AI-based account takeover protection.
Credential Phishing – Impersonation of Facebook for Phishing
Phishing emails remain a significant threat to every organization, as most threat actors gain their initial foothold in a network through phishing campaigns. Increasingly, attackers are using AI-generated text to craft these messages.
In one of the phishing emails, the threat actor impersonated Facebook, claiming that a community standards violation had caused a Facebook page to be unpublished. The email also contained a link, most likely a phishing page created to steal credentials.
Further analysis revealed that the email contained AI-generated text, indicating that threat actors have started using AI tools such as ChatGPT and Google Bard to generate phishing email content that appears more legitimate.
AI-generated text
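The report does not describe the detection tooling the researchers used, but one common heuristic for spotting machine-written prose is its unusually low perplexity under a public language model. The minimal sketch below illustrates the idea with GPT-2 via the Hugging Face transformers library; the threshold and the sample email text are hypothetical.

```python
# Sketch of a perplexity-based heuristic for AI-generated text: machine-written
# prose tends to score unusually low perplexity under a language model such as
# GPT-2. Model choice, threshold, and sample text are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the GPT-2 perplexity of `text` (lower = more 'model-like')."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

email_body = (
    "Your page has been unpublished because it violates our community "
    "standards. Please verify your account within 24 hours to restore access."
)

# Illustrative threshold: very low perplexity is only one weak signal.
if perplexity(email_body) < 40:
    print("Low perplexity - possible AI-generated content, review manually.")
```

Low perplexity on its own is a weak signal, so in practice a check like this would only feed into a broader scoring pipeline rather than block mail outright.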
Business Email Compromise – Payroll Diversion Scam
In the second scenario, an email impersonated an employee of an organization asking to update the direct deposit information on their payroll. The content appeared extremely convincing, with no grammatical or typographical errors.
Nothing in the email content looks overtly dangerous, and it could easily convince anyone handling payroll. However, this content was also found to have been generated by AI, raising the question of how safe organizations really are from AI-assisted threat actors.
Abnormal Security stated, “Platforms including ChatGPT can be used to generate realistic and convincing phishing emails and dangerous malware, while tools like DeepFaceLab can create sophisticated deepfake content including manipulated video and audio recordings. And this is likely only the beginning.”
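Because grammar and tone no longer give these messages away, defenders increasingly lean on header-level signals instead of content alone. One classic indicator of payroll-diversion BEC is a Reply-To address whose domain does not match the From domain. The sketch below illustrates that check using Python's standard email module; the sample message and domains are hypothetical.

```python
# Minimal sketch of a header-level BEC check: flag messages whose Reply-To
# domain differs from the From domain, a common sign of payroll-diversion
# impersonation. The raw message below is a hypothetical example.
from email import message_from_string
from email.utils import parseaddr

raw_email = """\
From: Jane Doe <jane.doe@example-corp.com>
Reply-To: jane.doe@freemail-example.com
Subject: Direct deposit update

Hi, could you update the direct deposit details on my payroll before Friday?
"""

msg = message_from_string(raw_email)
from_domain = parseaddr(msg["From"])[1].split("@")[-1].lower()
reply_domain = parseaddr(msg.get("Reply-To", msg["From"]))[1].split("@")[-1].lower()

if from_domain != reply_domain:
    print(f"Reply-To domain ({reply_domain}) differs from From domain "
          f"({from_domain}) - possible BEC attempt.")
```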
Vendor Fraud – Fraudulent Invoice
This third scenario can also be called a Vendor Email Compromise (VEC) attack. It is considered one of the most successful forms of social engineering because such emails show no obvious danger indicators to vendors or customers.
In a recently analyzed email, the attacker convincingly impersonated an attorney asking about an outstanding invoice. As with the previous attacks, the email content contained no grammatical or typographical errors.
Notably, the person impersonated in the email is a real individual working at a law firm.
Outstanding invoice phishing email
People with little security awareness would rarely suspect such an email, since its content looks legitimate and reads exactly as expected. This makes it extremely hard for organizations to separate phishing emails from legitimate ones.
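One content-level control that still helps is verifying that any link in a message claiming to come from a known brand actually points to a domain the brand owns, as in the Facebook impersonation above. The sketch below shows that idea; the brand-to-domain mapping and the look-alike URL are illustrative assumptions, not data from the reported campaign.

```python
# Sketch of a link check: extract URLs from an email that claims to come from
# a known brand and flag any host that is not a domain owned by that brand.
# The brand/domain mapping and the sample body are hypothetical.
import re
from urllib.parse import urlparse

KNOWN_BRAND_DOMAINS = {
    "facebook": {"facebook.com", "fb.com"},
}

def suspicious_links(body: str, claimed_brand: str) -> list[str]:
    """Return URLs whose host does not belong to the claimed brand."""
    allowed = KNOWN_BRAND_DOMAINS.get(claimed_brand.lower(), set())
    urls = re.findall(r"https?://[^\s\"'>]+", body)
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if not any(host == d or host.endswith("." + d) for d in allowed):
            flagged.append(url)
    return flagged

body = "Your page was unpublished. Appeal here: http://facebook-appeals.example.net/verify"
print(suspicious_links(body, "facebook"))  # flags the look-alike domain
```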
As technology evolves, cybercrime evolves with it and becomes far more sophisticated. It is high time organizations weigh the advantages and disadvantages of AI before it gets out of hand.