Cybersecurity News
Supply Chain Attack on Mortgage Vendor Exposes Wall Street Bank Customer Data
SitusAMC, a real estate finance firm that processes mortgage data for JPMorgan, Morgan Stanley, Citi, and other top-20 US banks, disclosed a breach on November 24th that may have exposed customer data from multiple financial institutions, Cybernews reported.
The company discovered the attack on November 12th. It confirmed that malicious actors accessed corporate records, legal agreements, and client customer information, though the full scope remains under investigation.
FBI Investigation Confirms No Operational Impact
The FBI has acknowledged the incident and is investigating its extent. Director Kash Patel confirmed that there was no operational impact on banking services.
Unlike recent vendor attacks, SitusAMC stated that “no encrypting malware was involved,” suggesting data exfiltration rather than ransomware.
The breach highlights the systemic risk posed by third-party vendors. A single compromise affects hundreds of financial institutions, exposing personally identifiable information and financial account details that enable identity theft and targeted social engineering.
Financial Institutions Must Strengthen Vendor Oversight
Banks should immediately audit their SitusAMC data exposure and review vendor risk management frameworks. Organizations must assess supply chain security requirements, enforce data minimization practices with third parties, and establish incident response protocols for vendor compromises.
This attack follows the pattern of vendor targeting seen in the recent Marquis breach affecting 700+ banks, underscoring that supply chain security is now a critical business continuity imperative.
OpenAI API User Data Exposed in Analytics Vendor Breach
OpenAI disclosed on November 26th that an attacker compromised Mixpanel, its data analytics vendor, and exported customer information from API users between November 9th and 25th. According to a report by Infosecurity Magazine, the breach exposed API account metadata, including:
- Name and email address associated with the API account
- Approximate geographic location based on user browser (city, state, country)
- Operating system and browser used to access the API account
- Referring websites and organization or user IDs
Importantly, OpenAI confirmed that ChatGPT conversations, prompts, API requests, passwords, credentials, API keys, payment details, and government IDs were not compromised.
Vendor Compromise Demonstrates Supply Chain Data Risks
The attack targeted Mixpanel’s analytics platform, which OpenAI used to track API product usage patterns. While the dataset was limited to metadata rather than core AI interactions, the exposed information creates a phishing risk.
OpenAI has terminated Mixpanel’s access to production services and is conducting expanded security reviews across its vendor ecosystem. The company is notifying potentially affected organizations as it investigates the incident with Mixpanel’s security team.
API Users Must Heighten Phishing Defenses
OpenAI warned that the most likely threat from this breach is sophisticated phishing and social engineering campaigns. Security teams should brace for credible-looking emails that reference the exposed metadata.
API administrators must enable multi-factor authentication and verify that all communications originate from official OpenAI domains. They must also train staff to know that OpenAI never requests passwords, API keys, or verification codes via email or chat.
Organizations should audit their API account access logs for suspicious activity and treat any unexpected communications with heightened caution.
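As a concrete illustration of the domain-verification advice above, the sketch below extracts links from an email body and flags any that do not resolve to an allowlisted domain. The allowlist here is a placeholder assumption, not OpenAI’s published list; confirm current official domains before relying on a check like this.

```python
from urllib.parse import urlparse
import re

# Hypothetical allowlist -- verify against OpenAI's official guidance;
# these entries are illustrative assumptions.
OFFICIAL_DOMAINS = {"openai.com", "platform.openai.com", "help.openai.com"}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>)]+")

def is_official(url: str) -> bool:
    """True only if the URL's host is an allowlisted domain or a
    subdomain of one (suffix match on a dot boundary, so that
    'openai.com.evil.com' does not pass)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

def flag_suspicious_links(email_body: str) -> list[str]:
    """Return every link in the email body that is NOT official."""
    return [u for u in URL_PATTERN.findall(email_body) if not is_official(u)]

body = "Please verify your API key at https://openai-support.example.com/login"
print(flag_suspicious_links(body))
# ['https://openai-support.example.com/login']
```

A check like this is only one layer; lookalike domains and URL shorteners still require human judgment and reporting workflows.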
Cybersecurity Tips
DarkReading Poll Reveals the “Depressing State” of Cybersecurity Awareness
An October 2025 poll conducted by DarkReading has revealed that cybersecurity awareness is still a significant challenge at many organizations.
“Despite years of cybersecurity awareness campaigns, training sessions, and technological advances, the same fundamental security challenges continue to plague organizations worldwide,” a report about the poll said.
Three seasoned journalists from DarkReading, TechTarget Search Security, and Cybersecurity Dive examined the poll. They discussed how simple measures like password hygiene and phishing prevention remain poor despite significant resources devoted to awareness programs.
Password Policies Stuck in the Past
In the management of passwords, organizations remain reliant on outdated protocols rather than modern security recommendations:
- 30% of companies still require eight-character passwords with mandatory uppercase letters, numbers, and special characters that expire every 90 days.
- Only 17% have adopted NIST-recommended passphrases like “my cat clarinet loves Sam,” despite these being exponentially harder to crack.
- While 34% have implemented single sign-on solutions and 21% use password vaults, far too many remain trapped in archaic password policies.
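The entropy gap between legacy complex passwords and passphrases can be shown with simple arithmetic. The sketch below assumes secrets chosen uniformly at random (which human-picked passwords rarely are); the 94-symbol pool and 7,776-word diceware list are standard illustrative figures, not values from the poll.

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy of a uniformly random secret: length * log2(pool_size)."""
    return length * math.log2(pool_size)

# Legacy complex policy: 8 characters from ~94 printable ASCII symbols.
complex_pw = entropy_bits(94, 8)    # ~52.4 bits

# NIST-style passphrase: 5 words from a 7,776-word diceware list.
passphrase = entropy_bits(7776, 5)  # ~64.6 bits

print(f"8-char complex password: {complex_pw:.1f} bits")
print(f"5-word passphrase:       {passphrase:.1f} bits")
```

Each additional bit doubles the search space, so the passphrase above is thousands of times harder to brute-force while remaining far easier to remember.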
Phishing Attacks Exploit Human Psychology
According to research from SiteGuarding, 64% of executives have clicked on phishing links, with 17% never reporting these incidents despite corporate policies requiring disclosure. As AI makes phishing emails increasingly sophisticated and personalized, attackers are exploiting human psychology faster than training programs can teach recognition skills.
The discussion highlighted that security teams face an escalating arms race: defensive tools continue to improve, but attackers excel at bypassing them through increasingly convincing digital deceptions.
Traditional Training Programs May Increase Risk
Perhaps most disheartening, the journalists revealed that conventional security awareness training might actually worsen the problem.
Studies dating back to 2008 show that traditional annual training sessions and “gotcha” phishing simulations don’t reduce click rates. Sometimes, they even create dangerous overconfidence.
Behavioral psychologists note these programs fundamentally misunderstand human decision-making, treating security awareness as a technical problem rather than a deeply human one. The fear-based, shame-based model runs counter to behavioral psychology principles, focusing on delivering knowledge rather than shaping behavior.
Training developed by cybersecurity experts without behavioral science expertise fails to address why people make risky decisions, leaving organizations with compliant but insecure cultures.
Understanding the Cybersecurity Talent Pipeline in the Age of AI
AI is eliminating repetitive tasks that historically were performed by junior analysts, according to an analysis by DarkReading. This is creating a paradox where efficiency gains may undermine the development of future security leaders.
Log review, alert triage, and basic investigations are all critical for cybersecurity professionals because they build intuition and pattern recognition. Automating these processes potentially leaves tomorrow’s defenders without foundational expertise.
Junior Roles Declining as Automation Accelerates
The impact on entry-level positions is already visible:
- 52% of cybersecurity professionals believe AI will reduce entry-level staff needs.
- Recruiters report entry-level hiring dropping from five analysts per team to two or three.
- 31% anticipate new roles emerging in automation oversight and threat hunting.
Organizations Must Redesign Learning Pathways
Security leaders warn that repetitive tasks teach analysts what “normal” looks like, developing the muscle memory essential for crisis decision-making.
Visa addresses this through intentional rotations across prevention, detection, and response. The company uses hackathons and a “90/10 model” where analysts spend 10-20% of their time outside their primary domain. The concern extends beyond technical skills to cultural and strategic understanding.
As AI hollows out traditional training work, organizations must deliberately create leadership development pathways or risk talent retention crises. Recommended approaches include:
- Implementing rotation programs across prevention, detection, and response functions
- Using simulated cyber ranges and tabletop drills to practice alert triage and incident response at scale
- Deploying AI as a teaching engine where junior analysts query AI decisions to accelerate learning
- Creating automation oversight roles to validate AI/ML decisions and tune tools
- Starting cybersecurity training earlier through high school academy programs
