Malicious Use of Generative AI Remains Limited
There are long-standing fears that malicious actors will adopt generative AI to support their operations; so far, however, actual use by threat actors appears minimal, particularly in attack campaigns. Mandiant says it continues to track threat actor interest in generative AI, but its own research and open-source reporting indicate that the technology is currently used to any significant degree only for social engineering and information operations.
Mandiant has found evidence that generative AI is being used to craft lures for phishing and business email compromise (BEC) attacks. Threat actors can generate text that mimics natural human writing for phishing lures and raise the linguistic sophistication of ongoing campaigns. They have also used generative AI to manipulate video and voice content in BEC attacks and to alter images in order to bypass know-your-customer (KYC) requirements. There is also evidence that financially motivated threat actors are using the malicious WormGPT tool to create convincing phishing and BEC lures.
Mandiant has previously shown how threat actors can use AI-based tools to support their campaigns, for example by processing open-source and stolen data for reconnaissance. State-sponsored intelligence services could apply machine learning and data science techniques to large volumes of stolen and publicly available information, speeding up processing and analysis and making it more efficient to operationalize collected data. In 2016, researchers demonstrated a system that could identify high-value targets from their Twitter activity and generate effective lures tailored to individuals based on their previous tweets. Mandiant has also observed that the North Korean cyber espionage group APT43 has shown an interest in large language models (LLMs) and is using LLM tools, although it is not yet clear for what purpose.
At present, one of the more effective uses of generative AI is in information operations. AI tools help information operations actors with limited resources and capabilities produce good-quality content, and they make it possible to create material with greater persuasive impact on target audiences than was previously achievable. AI-generated images and videos are likely to see use soon, and although LLMs have not yet been observed in such operations, their potential applications could drive rapid adoption.
Although there is little evidence of threat actors using LLMs to create new malware or enhance existing malware, this is an area likely to see significant development. Mandiant notes that several threat actors are advertising services on underground forums that bypass restrictions on LLMs so they can be used for malware development.
John Hultquist, Chief Analyst at Mandiant Intelligence, says that while some groups are already using generative AI, adoption remains limited and is mainly focused on social engineering. Both criminals and state actors will find the technology valuable, but many predictions about how it will be used are speculative and not grounded in observation.
Although threat actors are expected to make significant offensive use of generative AI, AI-based tools currently offer far more advantages to defenders. AI has existed for some time, but this is the inflection point at which the general public has taken notice. As with any technological development, adversaries will find uses for these tools; nevertheless, there is far more promise for those who build defenses with them.
1,900 Citrix NetScaler Devices Hacked in Mass Exploitation Campaign
Hackers are running a mass exploitation campaign against Citrix NetScalers, leveraging a critical vulnerability tracked as CVE-2023-3519. The automated campaign compromises NetScalers and plants web shells to maintain persistent backdoor access to the systems. A web shell lets the threat actor execute arbitrary commands on a compromised appliance, even after the patch for the vulnerability has been applied.
Citrix disclosed on July 18, 2023 that the vulnerability affects Citrix Application Delivery Controller (ADC) and Gateway appliances configured as a gateway or an authentication (AAA) virtual server. A patch was released to fix the vulnerability, and Citrix warned at the time that limited exploitation had been observed in the wild, although it provided no details on the scale of the attacks. Since then, several security firms have reported cases of exploitation.
Researchers at the cybersecurity firm Fox-IT, working with the Dutch Institute for Vulnerability Disclosure (DIVD), are attempting to identify compromised systems and notify the affected organizations. They report that during the exploitation campaign, 31,127 NetScalers were vulnerable to CVE-2023-3519, and that as of August 14, 2023, 1,900 NetScalers had been compromised and backdoored. Of those, 1,248 had since been patched against the vulnerability, yet the web shell still allowed access despite the patch.
The researchers have warned NetScaler administrators to check for indicators of compromise (IoCs), regardless of whether the flaw has been patched. Fox-IT has released a Python script that uses Dissect to perform triage on forensic images of NetScalers, and Mandiant has released a bash script that looks for IoCs on live systems.
If a web shell is found, the researchers recommend making a forensic copy of the appliance's disk and memory before any remediation or investigation, and then determining whether the web shell was used to perform any actions. Web shell use should be visible in the NetScaler access logs. If there are signs that the web shell was used, further analysis is needed to determine whether the attackers moved laterally from the appliance and compromised other systems on the network.
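The access-log check described above can be sketched as a simple log filter. This is a minimal illustration assuming Apache-style combined log lines; the web shell path used in the example is a hypothetical placeholder, not a known IoC.

```python
# Hypothetical sketch: filter Apache-style access log lines for requests
# to a suspected web shell path, to see whether the shell was ever invoked.
import re

# Matches the leading fields of a combined-format log line:
# client IP, timestamp, request method and path, and response status.
LOG_PATTERN = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
)

def requests_to(log_lines, suspect_path):
    """Return (ip, timestamp, method, status) for each hit on suspect_path."""
    hits = []
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("path") == suspect_path:
            hits.append((m.group("ip"), m.group("ts"),
                         m.group("method"), int(m.group("status"))))
    return hits
```

Any successful (2xx) request to the shell's path, especially a POST from an unfamiliar address, would be the cue for the deeper lateral-movement analysis the researchers describe.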