The February 2026 Cybersecurity Briefing

Cybersecurity News

OpenAI Shares How Cybercriminals and State-Backed Threat Actors Leverage AI

OpenAI disclosed that it disrupted multiple clusters of accounts linked to state-backed threat actors and cybercriminals who were abusing ChatGPT to run influence campaigns, generate scam materials, and feed thousands of fake social media accounts. According to Cybernews, the latest OpenAI threat report confirms widespread use of AI models to power both cybercrime and state-run influence operations.

The firm warned that threat actors are incorporating AI alongside more traditional tools such as websites and social media accounts.

State-Backed Influence Operations

Among the disrupted operations was the Russian “Rybar” network, which used ChatGPT to generate batches of social media comments posted by accounts on X and Telegram. The posts appeared to originate from different parts of the world.

One X account with 600,000 followers posted AI-generated content that was viewed over 150,000 times.

In a separate case, OpenAI banned a ChatGPT account belonging to an individual associated with Chinese law enforcement who attempted to use the model to plan covert influence operations, including a campaign targeting the Japanese prime minister. The user's reports indicated that 300 operators worked in their province alone, making over 50,000 posts across more than 200 Western platforms.

AI-Powered Scam Operations

OpenAI also identified scam networks originating from Cambodia that used ChatGPT to power semi-automated romance scams, generating fake identities and promotional materials to defraud victims. One network targeted wealthy Indonesian men using social media ads.

Another operation abused ChatGPT to impersonate attorneys, fake law firms, and even the FBI, targeting previous scam victims with bogus fund-recovery schemes.

These case studies underscore the growing risk of AI-enabled social engineering and the importance of monitoring how threat actors adopt AI tools across platforms.

IBM Finds 44% Surge in App Exploits Due to AI-Enhanced Attacks

The newly published 2026 IBM X-Force Threat Intelligence Index report reveals a 44% increase in cyberattacks exploiting public-facing applications. The report points to missing authentication controls and AI-enabled vulnerability scanning as major drivers behind the spike.

According to a report by Infosecurity Magazine, vulnerability exploitation emerged as the leading cause of incidents in 2025. It accounted for 40% of cases observed by IBM X-Force, while active ransomware and extortion groups grew 49% year over year.

AI Lowers Barriers and Accelerates Attacks

“Attackers aren’t reinventing playbooks, they’re speeding them up with AI,” said Mark Hughes, global managing partner for cybersecurity services at IBM. “The core issue is the same: businesses are overwhelmed by software vulnerabilities. The difference now is speed.”

The report found that large supply chain and third-party compromises have nearly quadrupled since 2020, as attackers increasingly target software build and deployment environments alongside SaaS integrations. IBM also observed threat actors using AI to conduct research, analyze large data sets, and refine attack paths in real time. In one example, North Korean IT worker schemes employed AI-driven image manipulation to create synthetic identities.

Key Findings for Financial Firms

Additional findings from the report include the exposure of over 300,000 ChatGPT credentials by infostealer malware in 2025. North America was also the most attacked region for the first time in six years, representing 29% of observed cases.

Manufacturing accounted for 27.7% of incidents, marking its fifth consecutive year as the most targeted sector.

For financial institutions, these findings reinforce the urgency of addressing application vulnerabilities, monitoring third-party integrations, and preparing for AI-accelerated attack timelines.

Microsoft Copilot’s Leak of Private Emails Shows AI Agents Can Ignore Cybersecurity Policies

A bug in Microsoft 365 Copilot caused the AI assistant to read and summarize confidential emails for approximately four weeks, bypassing the data loss prevention (DLP) policies that organizations rely on to protect sensitive information.

According to a report by DarkReading, the incident is part of a troubling pattern in which AI agents ignore the cybersecurity policies they are explicitly configured to follow.

Details of the Incident

First detected on January 21, 2026, the bug (tracked as CW1226324) affected the Copilot Chat “work tab” feature, which incorrectly processed emails stored in users’ Sent Items and Drafts folders. These included messages carrying confidentiality labels designed to restrict automated access.

Microsoft confirmed that a code error was responsible and began rolling out a fix in early February.

The UK’s National Health Service was among the affected organizations. This marks the second time in eight months that Copilot’s retrieval pipeline violated its own trust boundary, following the critical EchoLeak zero-click vulnerability (CVE-2025-32711) patched in June 2025.

Implications for Enterprise AI Security

Traditional cybersecurity tools like endpoint detection and response (EDR) and web application firewalls (WAFs) were not designed to detect scenarios where an AI assistant violates its own trust boundary.

According to a BBC report on the incident, Gartner analyst Nader Henein noted that "this sort of fumble is unavoidable" given the pace at which new AI capabilities are being introduced. Organizations often lack the tools to manage each new feature effectively, leaving cracks in their cybersecurity controls and exposing gaps in policy.

Financial institutions deploying AI productivity tools should conduct thorough reviews of sensitivity labeling configurations and ensure robust governance frameworks are in place before granting AI agents access to sensitive data.
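
As a concrete illustration of that principle, the Python sketch below shows a guardrail enforced outside the assistant's own pipeline: retrieved documents are checked against a sensitivity ceiling before any of them can reach the model. The Sensitivity tiers, Document shape, and filter_for_ai function are hypothetical constructs for illustration, not Microsoft Purview or Copilot APIs; the idea is simply that a label check owned by the security team still holds even when the assistant's code, as in this incident, fails to honor the labels itself.

import logging
from dataclasses import dataclass
from enum import IntEnum

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-dlp-guard")

# Hypothetical sensitivity tiers; a real deployment would map these to the
# organization's actual labeling taxonomy, read from document metadata.
class Sensitivity(IntEnum):
    PUBLIC = 0
    GENERAL = 1
    CONFIDENTIAL = 2
    HIGHLY_CONFIDENTIAL = 3

@dataclass
class Document:
    doc_id: str
    body: str
    label: Sensitivity

# Policy ceiling: the most sensitive label the AI assistant may ingest.
AI_LABEL_CEILING = Sensitivity.GENERAL

def filter_for_ai(retrieved: list[Document]) -> list[Document]:
    """Enforcement point independent of the assistant's retrieval pipeline.

    Documents above the ceiling are dropped before they reach the model,
    and every denial is logged so violations are visible to security teams
    rather than silently absorbed.
    """
    allowed: list[Document] = []
    for doc in retrieved:
        if doc.label <= AI_LABEL_CEILING:
            allowed.append(doc)
        else:
            log.warning("DLP-DENY: %s (%s) blocked from AI context",
                        doc.doc_id, doc.label.name)
    return allowed

# Example: the general memo passes; the confidential draft is blocked.
docs = [
    Document("memo-17", "Quarterly planning notes", Sensitivity.GENERAL),
    Document("draft-02", "Unannounced merger terms", Sensitivity.CONFIDENTIAL),
]
print([d.doc_id for d in filter_for_ai(docs)])  # prints ['memo-17']

Placing the check at the retrieval boundary, rather than trusting the agent to respect labels on its own, means a regression like CW1226324 cannot quietly widen an assistant's access to sensitive mail.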

Cybersecurity Tips

Senior IT Consultant Warns CIOs About Emerging Implications of Enterprise AI

In a CIO Magazine column titled "AI is about to get really weird. CIOs better be prepared," senior IT consultant Bob Lewis warned that AI's evolving capabilities are creating risks that extend far beyond traditional cybersecurity concerns, leaving CIOs with the impossible task of preparing for scenarios they may not yet be able to imagine.

According to the article, IT leaders should consider the legal, ethical, and operational consequences of deploying AI systems that can produce inaccurate or harmful outputs.

Liability When AI Gets It Wrong

Lewis highlighted the case of Wolf River Electric, a Minnesota-based solar contractor. The company’s customers began cancelling contracts after Google search results falsely stated the company had settled a lawsuit with the state attorney general over deceptive sales practices.

The lawsuit never happened.

Lewis asks CIOs to consider: if your company deploys an AI evaluation system using a popular AI platform and something goes wrong, who is at fault — the AI vendor, the large language model provider, or the internal quality assurance team? In most organizations, he warns, IT would be “left holding the bag.”

While the courts will eventually sort out how liability is distributed in these cases, Lewis cautioned that there are no established best practices or reliable methodologies to fall back on. This makes it imperative that IT and cybersecurity leaders proactively assess their exposure.

The “AI Weirdsville” Scenario

Lewis went further, illustrating how quickly AI risks can escalate into truly unprecedented territory.

He described a hypothetical in which a wealthy, prominent individual dies and leaves behind a large body of content, such as speeches, essays, blog posts, and video clips. Such a corpus could plausibly be enough to train a generative AI model capable of producing new material indistinguishable from the deceased's own style and voice.

Lewis paired this hypothetical with the emerging concept of "volitional AI," an AI that sets its own goals rather than simply following instructions.

Lewis imagined a scenario in which such a system could attempt to claim the deceased person’s identity, assets, and legal rights. While Lewis acknowledged the scenario started as satire, he argued that it highlights a real and urgent concern: AI is “furthering the war on reality — one that reality looks to be losing, with no obvious reason for optimism in sight.”

Lewis urged CIOs and IT leaders to put mechanisms in place for identifying business requests that are plausibly achievable with AI but are, at the same time, “seriously bad ideas.” He recommends that companies update their strategic planning frameworks — such as TOWS (threats, opportunities, weaknesses, strengths) — to account for AI’s unintended consequences.

Inevitably, leaders must also prepare for what he calls “unknown unknowns.” His core message: alongside excitement about AI’s evolving capabilities, enterprise leaders must cultivate what he described as “a healthy dose of fear” and ensure governance keeps pace with innovation.