
In this blog we explore how AI is being used by cybercriminals to accelerate cyberattacks, and by organizations to detect threats sooner.
AI and cybersecurity
What is artificial intelligence’s effect on the cybersecurity landscape? The introduction of AI has played into the hands of cybercriminals looking for tools that not only increase the sophistication of attacks but also lower the barrier to entry. It’s not all bad news, however: according to a survey conducted by the Ponemon Institute, 44 percent of organizations surveyed can confidently identify ways AI could strengthen their cybersecurity posture. Later in this blog we will look at the ways AI is helping organizations fight back.
A judgement from the 2024 National Cyber Security Centre (NCSC) report states that “AI will almost certainly increase the volume and heighten the impact of cyberattacks over the next two years.” The NCSC also notes that AI will provide cybercriminals with a capability uplift in reconnaissance and social engineering, making cyberattacks more effective, efficient, and harder to detect.
AI is lowering the barrier for novice cybercriminals to carry out effective and damaging cyberattacks, and the dark web has become a breeding ground for AI-powered tools designed to increase the sophistication of cyber threats. Among the tools monitored on the dark web are cracked versions of ChatGPT that bypass the safeguards put in place. These cracked versions can write and “improve” malware code, meaning someone with no coding experience can now use AI to help carry out attacks.
What kinds of emerging threats are using AI?
The emerging threats of AI
Social engineering and AI
One type of social engineering in particular has been refined and developed through the use of AI: phishing. Social engineering manipulates human behavior with the aim of getting victims to share sensitive data or passwords, or to transfer money.
Social engineering attacks leverage AI to assist in the research, creativity, and execution of the attack. Cybercriminals will also use AI to identify an ideal target (both organization and person) who they know will be able to serve as a gateway to an organization’s IT environment, and develop a persona and corresponding online presence to carry out communication with the victim. AI will also help cybercriminals to develop a realistic and plausible scenario that would generate the attention needed to cause enough damage.
Phishing attacks and AI
Phishing attacks occur when cybercriminals trick their victims into sharing personal information, such as passwords, banking details, or other sensitive data, by posing as someone the victim trusts. AI has made it easier for cybercriminals to carry out phishing attacks by helping threat actors write believable messages that create a sense of urgency. The introduction of AI has also driven an explosion of deepfakes targeting people who believe they are listening to a message or watching a video from someone they know.
A 2024 study by Keeper found that 51 percent of IT leaders are now witnessing AI-powered cyberattacks, and 36 percent have seen deepfake technology in use.
We have already mentioned that AI tools such as ChatGPT are being used by cybercriminals to check code for errors, but the same tools are also being used to create phishing messages. Gone are the days of spotting a phishing email by its bad spelling and grammar: the emails being deployed are more realistic than ever.
AI and chatbots
Another way cybercriminals are using AI to accelerate their cyberattacks is through chatbots. Hackers now deploy AI chatbots on social media platforms and messaging apps to spread misinformation, distribute malware, and engage in fraudulent activities. These sophisticated bots can automatically reply to users, share malicious links, and even run political or financial scams at scale.
AI chatbots combined with deepfake technology and voice cloning tools create even more dangerous threats. Attackers can generate realistic voice-based interactions that trick employees into authorizing transactions, changing passwords, or revealing sensitive information.
AI and the automation of cyberattacks
As well as helping with the creativity and language used in some cyberattacks, AI can also automate many aspects of an attack. Hackers can deploy AI bots to scan websites and networks at scale, identifying vulnerabilities that can be exploited. This dramatically reduces the time and effort cybercriminals need to launch an attack.
AI can also help attackers automate the exploitation process once a vulnerability is found, achieving faster and more effective results.
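To make the scanning step concrete, here is a minimal sketch of the kind of probe loop that gets automated, written from the defender's side for testing hosts you control. The host and port range are illustrative assumptions, not part of any real tool.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports: range) -> list[int]:
    """Probe many ports concurrently and return the open ones."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(lambda p: (p, probe(host, p)), ports)
    return [port for port, is_open in results if is_open]
```

Automating exactly this kind of loop, then feeding the open ports into further checks, is what collapses an attack's reconnaissance phase from days to minutes.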
Using AI to prevent cyberattacks
While traditional cybersecurity methods go some way to protecting organizations from cyberattacks and help to instill a proactive approach, cybercriminals aren’t the only ones using AI. Dark web monitoring and investigation tools can use AI to go one step further and take the fight to cybercriminals on the dark web. So how can AI be used against cybercriminals to mitigate the threat of attack?
Automated data collection
Content on the dark web can appear and disappear in a very short space of time, making manual monitoring neither practical nor sufficient for gaining insights.
Automated data collection is a critical feature of effective dark web protection tools, allowing organizations to continuously gather intelligence and maintain a robust understanding of potential threats. This capability ensures no stone is left unturned, providing comprehensive coverage and actionable insights into malicious activity.
Automated data collection gives organizations:
- Comprehensive coverage: Automated data collection ensures no potential threat goes unnoticed, enabling businesses to identify risks early and minimize blind spots that AI-powered threats may produce.
- Real-time monitoring: Periodic manual checks are no longer enough. With automated tools in place, threats are flagged as they emerge rather than discovered late through manual review.
- Efficiency: By automating data collection, security teams save time and resources, allowing them to focus on high-priority issues rather than sifting through raw data.
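As an illustration of what one cycle of automated collection involves, here is a minimal Python sketch that deduplicates freshly scraped posts and keeps only those matching a watchlist. The `Post` shape, the `WATCHLIST` terms, and the persistent `seen` set are all hypothetical, standing in for what a real monitoring product does at scale.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    source: str   # e.g. a forum or Telegram channel name
    post_id: str
    text: str

# Hypothetical terms an organization might watch for.
WATCHLIST = {"acme-corp", "acme.com", "vpn credentials"}

def collect(batch: list[Post], seen: set[tuple[str, str]]) -> list[Post]:
    """Deduplicate a freshly scraped batch and keep watchlist hits.

    `seen` persists across polling cycles, so short-lived posts are
    captured once and never re-processed even if re-scraped later.
    """
    hits = []
    for post in batch:
        key = (post.source, post.post_id)
        if key in seen:
            continue
        seen.add(key)
        if any(term in post.text.lower() for term in WATCHLIST):
            hits.append(post)
    return hits
```

Because `seen` carries over between cycles, re-polling the same forum yields only genuinely new posts, which is what makes continuous coverage of fast-moving content practical.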
Extracting key dark web insights
Trying to decipher and surface insights from the dark web can be difficult. It can mean sifting through hundreds of forum posts and Telegram messages, often written in dark web slang and acronyms, which is laborious and time-consuming for investigators. But dark web investigation and threat intelligence tools that incorporate AI can help reduce the time it takes to get the intelligence an organization is looking for.
While organizations could simply replace their analysts with an AI counterpart, this can lead to inaccurate or misleading results because of AI’s propensity for hallucinations. To preserve an investigation’s integrity, organizations should instead invest in dark web intelligence tools that use AI to enhance and augment their investigators’ existing capabilities. AI models in tools such as Searchlight Cyber’s extract and highlight critical insights and topics, providing an at-a-glance overview. AI enrichments also analyze the sentiment of conversations, helping investigators quickly understand the tone and potential threat level of both entire threads and individual posts.
Where manual analysis could have taken hours, AI insights deliver accurate summaries, categorize posts, identify trends from complex message chains, and provide quick context on what is being discussed in closed groups, making it easy to see which threads require further investigation. This means investigators can gain insights 140x faster than with manual analysis. For example, a 200-message thread that previously took 45 minutes to read can now be summarized and understood in under a minute, delivering a 99 percent time saving.
Ultimately, the use of AI to extract dark web insights from forums takes complex and unstructured data and turns it into actionable structured data that allows investigators to report on criminal activity and focus on threats related to their organization.
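A minimal sketch of this unstructured-to-structured step is below, using hand-written keyword lexicons as a crude stand-in for the LLM-based enrichment a real tool would use. The term lists and the `threat_score` heuristic are invented for illustration.

```python
from collections import Counter

# Illustrative lexicons; a production tool would use a trained model
# or an LLM rather than hand-written term lists.
THREAT_TERMS = {"exploit", "ransomware", "dump", "credentials", "access"}
TOPIC_TERMS = {
    "initial-access": {"access", "rdp", "vpn"},
    "data-leak": {"dump", "database", "credentials"},
    "malware": {"ransomware", "loader", "exploit"},
}

def enrich(posts: list[str]) -> dict:
    """Turn a thread of unstructured posts into a structured summary."""
    words = Counter(w.strip(".,!?") for p in posts
                    for w in p.lower().split())
    threat_score = sum(words[t] for t in THREAT_TERMS)
    topics = sorted(topic for topic, terms in TOPIC_TERMS.items()
                    if any(words[t] for t in terms))
    return {
        "posts": len(posts),
        "threat_score": threat_score,  # crude stand-in for sentiment/intent
        "topics": topics,
    }
```

Even this toy version shows the shape of the output an investigator works from: a count, a score, and topic tags per thread instead of hundreds of raw messages.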
AI-powered language translation
Cybercriminals communicate and conduct activities in multiple languages, so threats to your business may not originate from the same country as your organization but from halfway across the world. AI-powered language translation becomes an invaluable component of dark web protection tools, enabling businesses to identify, analyze, and respond to threats regardless of language barriers.
The dark web is dominated by several key languages, with the top 10 most commonly used being:
- English
- Russian
- German
- French
- Spanish
- Bulgarian
- Indonesian
- Turkish
- Italian
- Dutch
Without the ability to translate and analyze these languages, businesses risk missing critical intelligence about threats targeting their operations.
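One small, concrete piece of such a pipeline is deciding what language a post is in before sending it for translation. The sketch below uses a crude Unicode-script heuristic (Cyrillic vs. Latin) and hypothetical backend names; a real tool would use a proper language-detection library and a machine translation service.

```python
def looks_cyrillic(text: str) -> bool:
    """Crude script check: majority of letters in the Cyrillic block."""
    letters = [c for c in text if c.isalpha()]
    cyr = sum("\u0400" <= c <= "\u04FF" for c in letters)
    return bool(letters) and cyr > len(letters) / 2

def route_for_translation(post: str) -> str:
    """Pick which (hypothetical) translation backend should handle a post."""
    return "translate_ru" if looks_cyrillic(post) else "translate_latin"
```

For example, `route_for_translation("продаю доступ к сети")` routes to the Russian backend, while a Latin-script post goes the other way; distinguishing among the Latin-script languages above needs real language detection, not a script check.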
Combat the threat of AI-powered attacks
Organizations should implement robust dark web monitoring to combat AI-powered dark web threats. These tools should leverage AI and automation to detect and respond to emerging threats, including identifying stolen data, compromised credentials, and malicious activity. Dark web monitoring helps businesses stay one step ahead of cybercriminals and take the measures necessary to safeguard their confidential information.