According to a recent report by cyber security firm Darktrace, social engineering attacks leveraging generative AI technology have skyrocketed by 135%. The report found AI being used to crack passwords, leak sensitive information, and scam users across various platforms. Cybercriminals are now turning to advanced AI platforms such as ChatGPT and Midjourney to make their malicious campaigns more believable, making it difficult for users to distinguish between legitimate communications and well-crafted scams.
Also Read: The Dark Side of AI Innovation: ChatGPT Bug Exposes User Payment Data
The evolving nature of social engineering attacks has led to a surge in concern among employees. A staggering 82% of them expressed fears about the realism of these scams and the likelihood that more users will fall prey to them. Gone are the days when poor English was a clear red flag, as ChatGPT has largely eliminated such telltale indicators.
The combination of advanced linguistic complexity and the ease of access to generative AI platforms creates a perfect storm for cybercriminals looking to exploit unsuspecting individuals. With social engineering attacks becoming increasingly harder to detect, users must remain vigilant to safeguard their personal information.
Also Read: Navigating Privacy Concerns: The ChatGPT User Chat Titles Leak Explained
One telltale sign of a potential scam or phishing attempt is an email requesting users to click on a link and enter login details. Users should be cautious about such emails and verify their authenticity before providing sensitive information.
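One practical verification step is to check where a link actually points before trusting it. As a hypothetical illustration (using Python's standard `urllib.parse`; the domains are made up), the check below flags links whose real host differs from the brand they imitate:

```python
from urllib.parse import urlparse

def domain_matches(url: str, expected_domain: str) -> bool:
    """Return True only if the URL's host is the expected domain
    or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# A lookalike host fails the check even though it contains the brand name.
print(domain_matches("https://login.example.com/reset", "example.com"))        # True
print(domain_matches("https://example.com.attacker.io/reset", "example.com"))  # False
```

Note that the suspicious host ends with `attacker.io`, not `example.com`; reading a URL from right to left is the same habit this check automates.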
A recent study from Home Security Heroes revealed that 51% of common passwords could be cracked within a minute with the help of AI, a figure that rises to 81% within a month. The study used PassGAN, an AI password cracker, to measure how long it takes AI to crack passwords from the RockYou dataset.
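The gap between weak and strong passwords can be sketched with a back-of-the-envelope brute-force estimate. The 10-billion-guesses-per-second rate below is an illustrative assumption, not a figure from the study; AI tools like PassGAN do even better against human-chosen passwords by exploiting common patterns rather than exhaustively searching:

```python
def crack_time_seconds(length: int, alphabet_size: int,
                       guesses_per_second: float) -> float:
    """Worst-case time to exhaust the search space for a random password."""
    return alphabet_size ** length / guesses_per_second

rate = 1e10  # illustrative guessing rate, not a measured figure
short = crack_time_seconds(7, 26, rate)    # 7 lowercase letters
strong = crack_time_seconds(14, 94, rate)  # 14 chars, full printable set
print(f"{short:.1f} seconds vs roughly {strong / 3.15e7:.0e} years")
```

The point of the comparison is that each extra character multiplies the search space, so length and alphabet size matter far more than clever substitutions.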
To combat the growing threat of AI-powered social engineering attacks, users must be increasingly vigilant and adopt better security practices, such as creating strong passwords and updating them regularly. As AI technology continues to evolve, it becomes more critical for individuals to stay informed and protect themselves from potential cyber security threats.
Also Read: Data Security in the AI Era | Expert Opinion
To protect against AI-driven social engineering attacks and maintain a high level of personal cyber security, individuals should adopt a multi-layered approach that includes the following best practices:
Utilize strong, unique passwords for all accounts and avoid using common words, phrases, or patterns. Include a mix of uppercase and lowercase letters, numbers, and special characters. Password managers can be valuable tools for generating and storing secure passwords.
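If a password manager is not available, a random password can be generated programmatically. A minimal sketch using Python's standard `secrets` module (the 16-character length and character-class rules are illustrative choices, not a formal standard):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and symbols,
    retrying until every character class is represented."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

Using `secrets` rather than `random` matters here: `secrets` draws from the operating system's cryptographic randomness source, while `random` is predictable and unsuitable for credentials.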
Whenever possible, enable two-factor authentication (2FA) for online accounts. This adds an extra layer of security, as it requires both a password and a verification code, typically sent to a user's mobile device or generated by an authenticator app.
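The rotating codes produced by authenticator apps follow the TOTP standard (RFC 6238): an HMAC-SHA1 over the current 30-second time step, truncated to six digits. A minimal sketch using only Python's standard library (the secret shown is a demo value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Demo secret only; a real secret is issued once during 2FA enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on both the shared secret and the current time, a stolen password alone is not enough to log in, which is precisely why 2FA blunts phishing attacks.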
Do not click on links or download attachments from unexpected or unfamiliar sources. Exercise caution when receiving emails or messages requesting personal information, even if they appear to be from legitimate organizations.
Regularly update all software, including operating systems, antivirus programs, and applications. This helps to ensure that the latest security patches and features are installed.
Stay informed about the latest cyber security threats and trends. Share this knowledge with friends, family, and colleagues to help them understand the risks and best practices for staying safe online.
In conclusion, the rapid advancements in generative AI technology present opportunities and risks. As the threat of AI-driven social engineering attacks continues to grow, individuals and organizations must stay informed, exercise caution, and employ robust cyber security measures to stay one step ahead of malicious actors.
Also Read: AI “Could Be” Dangerous – Joe Biden