By OptionOne Technologies
We searched through the most popular cybersecurity websites to bring you the latest industry news, updates, and tips from January 2025.
Cybersecurity News
Hackers Still Struggle to Leverage AI, but the Threat is Evolving
Google researchers have highlighted both the potential risks and benefits of artificial intelligence as threat actors begin exploring its capabilities, Cybernews reported. Many attackers are experimenting with proprietary AI systems to see what the technology can do for them.
Most recently, cybercriminals and state-sponsored threat actors attempted to leverage Google’s Gemini AI for malicious purposes, such as evading defense mechanisms and amplifying their outreach.
“Government-backed attackers attempted to use Gemini for coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities, and enabling post-compromise activities, such as defense evasion in a target environment,” the researchers said.
However, these attempts were largely unsuccessful. Gemini declined requests for explicit malicious assistance, providing safety-oriented responses instead. Efforts to bypass Google’s account verification systems using the model also failed.
While present-day language models do not offer groundbreaking tools for cyberattacks, Google cautions that the fast-evolving AI landscape, paired with newer technologies, could empower future threats. Adversaries may capitalize on advancements in agentic systems and emerging AI capabilities over time.
Nonetheless, Google envisions AI revolutionizing digital defense strategies. Large language models are already proving their worth in analyzing complex data, improving code integrity, identifying vulnerabilities, and optimizing security operations. This dual impact underscores AI’s potential to drive both innovation and new challenges within the cybersecurity landscape.
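As a rough illustration of that defensive side (not a specific tool described in Google’s report), the sketch below passes a handful of suspicious log lines to a general-purpose language model and asks it to pre-triage them for a human analyst. The model name, prompt wording, and log excerpts are all illustrative assumptions.

```python
# Minimal sketch: using a general-purpose LLM to pre-triage suspicious log lines
# for a SOC analyst. Model, prompt, and log format are assumptions for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspicious_lines = [
    "Failed password for root from 203.0.113.45 port 55122 ssh2",
    "Accepted password for admin from 203.0.113.45 port 55900 ssh2",
    "New user account 'svc_backup2' created by admin",
]

prompt = (
    "You are assisting a SOC analyst. Summarize what these log lines suggest, "
    "flag anything that looks like a possible compromise, and list what a human "
    "should verify next:\n\n" + "\n".join(suspicious_lines)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# The output is a starting point for the analyst, not a verdict.
print(response.choices[0].message.content)
```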
Lynx Ransomware Group “Industrializes” Cybercrime with “User-Friendly” Platform
A formidable Ransomware-as-a-Service (RaaS) operator known as Lynx has gained notoriety for its industrial-scale approach to cybercrime, DarkReading reported. According to researchers at Group-IB, the group combines “user-friendly” ransomware builds, an organized affiliate system, and a meticulous management structure to streamline its operations.
A key aspect of the group's strategy is a rigorous verification process for potential affiliates, which ensures high levels of operational security, quality control, and proficiency before membership is granted.
To mitigate threats posed by such groups, researchers emphasize the importance of robust cybersecurity measures, particularly for critical industrial sectors. Recommendations include implementing multifactor authentication and credential-based access, utilizing advanced endpoint detection and response solutions, regularly scheduling backups, applying timely updates, and fostering security awareness across organizations.
These proactive measures can help protect operations against the increasingly industrialized efforts of groups like Lynx.
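To make the backup recommendation concrete, here is a minimal sketch that alerts when the newest backup archive is stale. The directory, file pattern, and 24-hour freshness window are hypothetical examples, not details from the Group-IB research.

```python
# Minimal sketch: warn if the newest backup archive is older than a threshold.
# The directory, file pattern, and 24-hour window are hypothetical examples.
import sys
import time
from pathlib import Path

BACKUP_DIR = Path("/var/backups/critical-systems")  # assumed backup location
MAX_AGE_SECONDS = 24 * 60 * 60                      # assumed freshness window: 24 hours

def newest_backup_age(directory: Path) -> float | None:
    """Return the age in seconds of the most recent *.tar.gz backup, or None if absent."""
    archives = list(directory.glob("*.tar.gz"))
    if not archives:
        return None
    newest = max(archives, key=lambda p: p.stat().st_mtime)
    return time.time() - newest.stat().st_mtime

age = newest_backup_age(BACKUP_DIR)
if age is None or age > MAX_AGE_SECONDS:
    print("WARNING: no recent backup found -- investigate before an incident forces the issue.")
    sys.exit(1)

print(f"OK: latest backup is {age / 3600:.1f} hours old.")
```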
“For organizations, this underscores the importance of continually updating incident response procedures, investing in real-time threat intelligence, and fostering a security-first culture,” said Group-IB in a blog post.
“As RaaS groups like Lynx push the boundaries of cyber extortion, only a proactive and adaptive defensive strategy will safeguard critical data and maintain business resilience.”
AI Surge Drives 1,205% Increase in API Vulnerabilities
As a result of the massive implementation of integrated AI services, application programming interface (API) vulnerabilities have increased by 1,205% in the past year, Infosecurity Magazine reported. The finding comes from the “2025 API ThreatStats Report” by API security firm Wallarm.
APIs are the rules and standards that allow different software applications to communicate with each other. Vendors are racing to integrate new AI capabilities into their products, and that rush has contributed to the growing number of API security threats.
The study also found that 57% of AI-powered APIs were accessible externally, while 89% lacked secure authentication. Only 11% implemented robust security measures.
Wallarm recommends organizations implement real-time security controls to mitigate risks. These include secure real-time authentication techniques, frequent updates to new APIs, and the retirement or safeguarding of legacy APIs.
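As a simple illustration of the authentication gap the report describes, the sketch below puts a token check in front of an AI-powered endpoint using Flask. The header format, token source, and endpoint path are assumptions for the example, not Wallarm’s prescribed approach.

```python
# Minimal sketch: require a valid API token before an AI-powered endpoint runs.
# Header format, token store, and endpoint path are hypothetical; a production
# service would use a real identity provider and short-lived credentials.
import hmac
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Assumed: valid tokens are provisioned out of band and supplied via environment.
VALID_TOKENS = set(filter(None, os.environ.get("API_TOKENS", "").split(",")))

@app.before_request
def require_token():
    supplied = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    # Constant-time comparison against each provisioned token; default-deny if none match.
    if not any(hmac.compare_digest(supplied, token) for token in VALID_TOKENS):
        abort(401)

@app.post("/v1/ai/summarize")
def summarize():
    # Placeholder for the AI-backed work the API actually exposes.
    return jsonify({"summary": "stubbed response"})

if __name__ == "__main__":
    app.run(port=8080)
```

Pairing a check like this with rate limiting and an up-to-date inventory of which APIs are externally reachable speaks directly to the exposure figures the report highlights.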
Cybersecurity Tips
Sophos Recommends Taking a “Thoughtful Approach” to AI-Enhanced Cybersecurity
A recent article by Sally Adam, Senior Director of Solution Marketing at Sophos, explored the reality of using AI for cybersecurity in a business context. Drawing on a survey of 400 IT leaders, Adam explained the risks and benefits of adopting AI for cyber defense.
The Risks and Potential Pitfalls of Using AI in Cybersecurity
The survey found that concerns are consistently high across industries and company sizes, highlighting a shared apprehension about striking the right balance between leveraging AI and maintaining human expertise.
Mismanagement of these risks may lead to weakened security and responsibility gaps in critical cybersecurity processes. Without proper reporting, organizations risk missing returns on AI investments in cybersecurity or misallocating funds toward initiatives that are not effective.
AI’s widespread use can also lead to overconfidence in its accuracy and capabilities. Assuming that AI always outperforms humans can reduce human involvement and oversight, weakening an organization’s overall security posture.
Common concerns from IT leaders about using AI for cybersecurity include the following:
- Potential flaws in AI tools’ capabilities (89%)
- Pressure to reduce cybersecurity staff (84%)
- Diminished accountability due to reliance on AI (87%)
- Increasing costs of cybersecurity products due to AI integration (80%)
- Inability to quantify costs of AI in cybersecurity (75%)
Recommendations for Organizations
Adam also provided a list of recommendations as a starting point for organizations looking to mitigate these risks:
- Set clear goals: Define specific and detailed outcomes you want AI to achieve.
- Quantify benefits: Assess the tangible impact AI investments will bring.
- Prioritize strategically: Focus on areas where AI can drive the greatest impact, using metrics like cost savings or risk reduction.
- Measure results: Regularly compare actual performance to initial expectations and adjust as needed.
- Adopt a human-first approach: Position AI as a tool to assist, not replace, human accountability in cybersecurity.
- Accelerate staff, don’t replace: Leverage AI to handle repetitive tasks while empowering staff with meaningful insights.
“AI technologies and human cybersecurity expertise work together to stop the broadest range of threats, wherever they run,” said Adam.
Thanks for Reading
That’s it for this month’s Cybersecurity Briefing. Contact us today to learn more about our services.