Shield Your Business From Emerging AI-Powered Cyberthreats

Artificial intelligence (AI) has skyrocketed in popularity, captivating the interest of both individuals and organizations. While AI technology can certainly offer benefits, it also has the potential to become weaponized by cybercriminals. Discover how cybercriminals are exploiting AI technology and ways to protect your organization from its malicious use.

Cybercriminals Harnessing the Power of AI

Creating & Spreading Malware

Cybercriminals have traditionally relied on sophisticated technical skills to create malicious code and deploy malware attacks. However, AI chatbots have changed the landscape by generating malicious code quickly, allowing cybercriminals with varying levels of technical expertise to launch malware attacks with ease. Other examples of cybercriminals exploiting AI for malware include:

  • Generating misleading videos that masquerade as tutorials for downloading popular software. Viewers who follow these tutorials unknowingly download malware that infects their devices.
  • Streamlining and automating different phases of an attack, ranging from information gathering to vulnerability exploitation and execution. This capability empowers cybercriminals to quickly launch malware on a significant scale.
  • Assisting cyberattackers in identifying potential targets by evaluating factors such as financial resources and the potential consequences of an attack. The technology can also help personalize ransom demands based on the victim.

Utilizing AI in Cracking Credentials

Cybercriminals often employ brute-force methods to crack passwords and gain access to victims’ accounts. These attacks use AI to systematically test character and word combinations until the correct password is found. While the effectiveness and efficiency of these techniques vary, recent research indicates that AI tools can crack frequently used passwords in under 10 minutes.
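
To illustrate why password length and character variety matter against brute-force attacks, the rough sketch below estimates worst-case cracking times for different password policies. The guess rate is an assumption chosen for illustration only; real-world attack tooling varies widely.

    def brute_force_estimate(length, charset_size, guesses_per_second):
        """Worst-case time (in seconds) to exhaust every possible password."""
        keyspace = charset_size ** length        # total number of candidate passwords
        return keyspace / guesses_per_second     # seconds to try them all

    def humanize(seconds):
        """Convert seconds into a rough human-readable duration."""
        if seconds < 3600:
            return f"{seconds:,.0f} seconds"
        if seconds < 86400 * 365:
            return f"{seconds / 86400:,.1f} days"
        return f"{seconds / (86400 * 365):,.0f} years"

    # Assumed guess rate, for illustration only.
    GUESSES_PER_SECOND = 1e10

    for length, charset_size in [(8, 26), (8, 94), (12, 94)]:
        worst_case = brute_force_estimate(length, charset_size, GUESSES_PER_SECOND)
        print(f"{length} characters, {charset_size}-symbol alphabet: "
              f"about {humanize(worst_case)} worst case")

Under these assumptions, an 8-character lowercase password falls in seconds, while a 12-character password drawn from the full keyboard holds out for many years, which is why longer, more varied passwords remain a meaningful defense.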

Deceptive Social Engineering Scams

Social engineering occurs when cybercriminals use dishonest communication (e.g., emails, texts, phone calls) to manipulate targets into unintentionally revealing confidential information or downloading malicious software. The emergence of AI technology could cause a surge in these fraudulent activities, as it empowers cybercriminals to create phishing emails and messages with enhanced credibility. Additionally, deepfake technology can generate persuasive audio and video content featuring real individuals, which could result in identity theft or other deceptive practices.

Vishing scams, fraudulent schemes carried out over the phone, are also growing more prevalent. The attacker contacts the target while pretending to be someone the victim knows (e.g., a colleague, family member or friend). It’s also crucial to remain vigilant for deepfake manipulation, in which AI is used to modify an individual’s facial or physical features so the perpetrator can present themselves as someone else. Perpetrators use this deceptive technique to spread misinformation, and it is difficult for law enforcement officials to detect.

Discovering Your Organization's Digital Vulnerabilities

Cybercriminals typically seek out software vulnerabilities they can leverage (e.g., unpatched code, outdated security) when infiltrating a target’s network or system. While various tools exist to identify weaknesses, AI technology allows cybercriminals to uncover a broader range of software exposures, creating more openings for launching attacks.

Assisting in Analyzing Stolen Data

After obtaining sensitive information and confidential records from targets, cybercriminals carefully examine this data to determine how to exploit it most efficiently and effectively. Their subsequent actions may involve selling the information on the dark web, publicly disclosing it or demanding ransom payments. This process can be time-consuming, particularly when dealing with extensive databases. However, AI technology allows cybercriminals to expedite data analysis, make quicker decisions and accelerate the overall attack. Consequently, targeted individuals or organizations have a reduced timeframe in which to detect and counter these attacks.

Defending Against Weaponized AI Technology

It’s expected that AI technology will lead to an increase in both the frequency and severity of cyberattacks. To safeguard operations and mitigate cyberthreats, organizations should strive to stay updated on the latest AI-related advancements and take proactive measures. Consider the following strategies to mitigate the risk of cyberattacks and related losses from weaponized AI technology:

Practice Good Cyber Hygiene

Implement standardized routines to ensure the secure handling of critical workplace data and connected devices. These practices are essential for safeguarding networks and data against AI-driven cyberthreats. Prioritize core elements of cyber hygiene, such as:

  • Immediately patching known vulnerabilities
  • Discontinuing the use of unsupported software/systems
  • Enforcing the use of robust passwords, which should consist of a minimum of 12 characters and a combination of uppercase and lowercase letters, symbols and numbers (a brief illustration of such a policy check follows this list). Multifactor authentication (MFA) should also be implemented enterprise-wide.
  • Ensuring the routine (e.g., daily) preservation of crucial business data by backing it up in a dedicated and secure location (e.g., air-gapped external hard drive, the cloud).
  • Implementing firewalls, antivirus software and other relevant security programs on workplace networks and systems.
  • Employing a regular cybersecurity training program for employees to educate them on the most recent digital vulnerabilities, attack prevention measures and protocols for responding to incidents.
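
As a simple illustration of the password guidance above, the following sketch checks a candidate password against the stated policy (12 or more characters with uppercase, lowercase, numbers and symbols). The function name and rules are illustrative, not a specific product or standard.

    import re

    def meets_password_policy(password, min_length=12):
        """Check a candidate password against the policy described above:
        at least 12 characters with uppercase, lowercase, numbers and symbols."""
        if len(password) < min_length:
            return False
        required_patterns = [
            r"[A-Z]",         # at least one uppercase letter
            r"[a-z]",         # at least one lowercase letter
            r"[0-9]",         # at least one number
            r"[^A-Za-z0-9]",  # at least one symbol
        ]
        return all(re.search(pattern, password) for pattern in required_patterns)

    print(meets_password_policy("Summer2024"))              # False: too short, no symbol
    print(meets_password_policy("c0rrect-H0rse-Battery!"))  # True: meets all requirements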

Engage in Network Monitoring

Network monitoring involves using automated threat detection technology to continuously scan an organization’s digital ecosystem for potential vulnerabilities or suspicious activities. Such technology typically generates alerts upon the detection of security issues, enabling businesses to promptly identify and address incidents. Given the time-sensitive nature of addressing AI-related threats, network monitoring is a vital practice.
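
The simplified sketch below shows the kind of automated alerting such tools perform, flagging repeated failed logins from a single source address. The log entries, IP addresses and threshold are hypothetical; commercial monitoring platforms operate at far greater scale and sophistication.

    from collections import Counter

    # Hypothetical authentication log entries: (source IP, event type)
    auth_events = [
        ("203.0.113.7", "login_failed"),
        ("198.51.100.23", "login_success"),
        ("203.0.113.7", "login_failed"),
        ("203.0.113.7", "login_failed"),
        ("203.0.113.7", "login_failed"),
        ("203.0.113.7", "login_failed"),
    ]

    FAILED_LOGIN_THRESHOLD = 5  # alert once an IP reaches this many failures

    failures = Counter(ip for ip, event in auth_events if event == "login_failed")
    for ip, count in failures.items():
        if count >= FAILED_LOGIN_THRESHOLD:
            print(f"ALERT: {count} failed logins from {ip} -- possible brute-force attempt")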

Have a Strategy

Developing cyber incident response plans can assist organizations in establishing procedures that can effectively mitigate damages when faced with cyberattacks. It’s crucial for these plans to be thoroughly documented and regularly practiced, encompassing a range of potential cyberattack scenarios, including those originating from AI technology.

Secure Proper Insurance Protection

Organizations must ensure they have appropriate insurance coverage to safeguard against potential financial losses resulting from the weaponization of AI technology. Seek guidance from a trusted insurance broker to assess and discuss your specific coverage needs. Selecting the right broker and cyber carrier is a critical component of protecting your data, as most carriers also require robust cyber risk management tools and 24/7 monitoring.

We're Here to Help Protect Against Artificial Intelligence Cyber Risks

As AI technology continues to advance, so will its contribution to rising cyberattack frequency and severity. By keeping updated on the latest AI-related advancements and taking proactive measures to safeguard against its potential weaponization, you can keep your operations secure and ward off looming cyberthreats. Connect with a member of our team for additional guidance and coverage solutions. 


© Copyright CBIZ, Inc. and CBIZ CPAs P.C. (together, “CBIZ”). All rights reserved. Use of the material contained herein without the express written consent of the firms is prohibited by law. This publication is distributed with the understanding that CBIZ is not rendering legal, accounting or other professional advice. The reader is advised to contact a tax professional prior to taking any action based upon this information. CBIZ assumes no liability whatsoever in connection with the use of this information and assumes no obligation to inform the reader of any changes in tax laws or other factors that could affect the information contained herein.

CBIZ is the brand name for CBIZ CPAs P.C. and CBIZ Advisors, LLC (together), a national professional services company providing tax, financial advisory and consulting services to individuals, tax-exempt organizations and a wide range of growth-oriented companies. CBIZ Advisors, LLC is a fully owned subsidiary of CBIZ, Inc. (NYSE: CBZ). CBIZ CPAs P.C. is an independent CPA firm that provides audit, review and attest services, and works closely with CBIZ, a business consulting, tax and financial services provider. CBIZ and CBIZ CPAs P.C. are members of Kreston Global, a global network of independent accounting firms. This publication is protected by U.S. and international copyright laws and treaties. Material contained in this publication is informational and promotional in nature and not intended to be specific financial, tax or consulting advice. Readers are advised to seek professional consultation regarding circumstances affecting their organization.
