Artificial intelligence (AI) presents exciting opportunities for hedge funds, alt funds, and similar financial firms. But along with those opportunities come complex regulatory challenges. For example, the U.S. Securities and Exchange Commission (SEC) has intensified its focus on AI regulations, proposing new rules and increasing examination priorities related to AI usage.
In our November 2024 article on the subject, we discussed the SEC’s 2025 AI examination priorities and how they affect financial firms’ efforts to use AI for compliant business functions. In this article, we expand on those insights with recommendations for how firms can leverage the help of partners to build regulatory-compliant AI solutions that work for them.
A Brief Review of AI Regulations in Financial Services
The SEC’s approach to AI regulations reflects the technology’s growing importance and potential risks. In 2025, the SEC’s Examination Division has made AI usage among financial firms a top focus area, investigating how advisors integrate AI into various aspects of their operations—including portfolio management, trading, marketing, and compliance.
“It’s the latest indication of an ever-increasing focus by the commission on registrants’ use of AI in their daily practices,” suggested WealthManagement.com in October 2024. “It’s all the more notable, considering artificial intelligence was barely mentioned in last year’s exam priorities and wasn’t cited in the 2023 release.”
Generally, the SEC places additional responsibilities on firms to ensure accuracy and provide oversight of the AI systems they employ; the agency has proposed new AI regulations targeting the unique compliance challenges AI presents, requiring firms to establish additional due diligence protocols to ensure AI usage complies with federal regulatory requirements.
Key areas of SEC focus include:
- Accuracy of AI capabilities and usage representations
- Adequacy of policies and procedures for AI supervision
- Protection against loss or misuse of client information in AI applications
- Management of AI-related conflicts of interest
- Marketing materials mentioning AI
- Continuity plans for AI system failures
You can see the SEC’s September 2024 Compliance Plan from David Bottom, Chief Artificial Intelligence Officer, here.
The SEC Has Already Taken Progressive Action on AI Regulations
Leaders at financial institutions (FIs) should note that the SEC is acting on its newfound requirements for responsibility, transparency, and oversight regarding their use of AI. In March 2024, the SEC settled charges against two investment advisors for their misleading practices associated with AI.
“Investment advisers should not mislead the public by saying they are using an AI model when they are not,” said SEC Chair Gary Gensler at the time. “Public issuers making claims about their AI adoption must also remain vigilant about similar misstatements that may be material to individuals’ investing decisions.”
Still, Gensler—and by extension, the SEC—has taken a balanced perspective on the use of AI in the industry. Gensler has expressed a belief that AI will be a “net positive,” improving efficiencies and allowing for greater access to financial markets, FedScoop reports.
Challenges in Developing SEC-Compliant AI Systems
The responsibility falls to financial firms to strike a balance between AI adoption that drives ROI and honest customer care, and alignment with regulatory requirements in a sustainable, cost-effective way. However, FIs face significant challenges when implementing AI systems that meet regulatory requirements—not to mention their own.
Regulatory Uncertainty
First, the changing regulatory landscape creates uncertainty for firms regarding how best to get started with AI adoption. In fact, a slim majority of firms (51%) in a 2024 study by Mercer cite ethical and legal considerations as a top challenge: “Managers’ evident concerns around the risks of divergent regulation reflect the broader challenge of evolving fiduciary obligations… as all market participants assess the road ahead for AI regulation,” suggests Rich Dell, Senior Director of Investment Research at Mercer.
Data Quality and Availability
According to the same Mercer study, 68% of investment managers currently using AI cite data quality and availability as the top barrier to unlocking AI’s full potential. Ensuring access to high-quality, relevant data is crucial for developing effective and compliant AI models.
Explainability and Transparency
The “black box” problem in AI—the lack of transparency into how systems work and the difficulty of explaining how their outputs are produced—poses challenges for firms in justifying AI-driven decisions to regulators and clients. Understanding AI systems is crucial for internal rationale, legal protection, and providing the best possible information to customers.
Integration and Compatibility
Integrating AI with legacy systems may also pose challenges in terms of compliance. As we will demonstrate, partnering with third-party providers can help firms bridge the gap between legacy systems and new AI technologies, ensuring seamless integration and adherence to regulations.
Talent Acquisition
The industry faces fierce competition for AI and data science talent, making it difficult to build and maintain in-house AI expertise. “There’s a war for talent,” said one business leader in an International Monetary Fund article about AI spending among FIs. “Making sure you are ahead of it now is really life and death.”
Industry Partnerships for Successfully Complying with SEC AI Regulations
As suggested, industry partners that specialize in technology solutions for FIs can help firms overcome regulatory hurdles, internal skills gaps, and difficulties in delivering exceptional customer experience. Consider the following advantages that can come from collaborating with technology providers, regulatory experts, and even other FIs.
Shared Expertise and Resources
Partnerships provide access to specialized AI talent, advanced technologies, and diverse datasets that may be scarce internally. This combination of financial domain knowledge with AI capabilities can help firms develop more sophisticated and effective AI solutions.
Faster Implementation
Collaboration with established AI providers accelerates development cycles and reduces time-to-market for new AI-driven products and services. For example, firms can start with pre-built AI components for faster integration and prototyping in controlled environments.
Enhanced Risk Management
Industry partnerships can improve firms’ ability to identify and mitigate AI-related risks. That’s because established AI partners have already developed methods for fraud detection, data protection, and other related risks. AI partners may be able to provide more robust data security measures and comprehensive stress testing of AI systems under a wider variety of scenarios.
Regulatory Alignment
Working with regulation experts, including RegTech providers, may offer access to up-to-date regulatory intelligence and automated compliance processes. This can help ensure AI systems consistently meet regulatory requirements and can facilitate the development of explainable AI models that meet regulatory transparency standards.
Strategies for Successful Collaboration on AI Regulations
While partners may hold the key to AI compliance and success, you must set the groundwork internally—then incorporate your partners strategically—to deliver on those goals. Consider the following strategies as you begin your work with partners.
1. Establish a clear governance framework.
Develop robust AI governance structures that clearly define roles, responsibilities, and decision-making processes. Efforts should include:
- Creating AI committees or governance groups
- Developing comprehensive AI risk management frameworks
- Establishing clear lines of accountability for AI-related decisions and outcomes
2. Ensure sufficient data quality and security.
Implement strong data management practices to address these mission-critical challenges in AI adoption. Efforts should include:
- Deploying data management platforms that integrate, cleanse, and enrich data from various sources
- Implementing robust data security measures to protect client information
- Establishing clear data-sharing protocols with partners to maintain compliance with privacy and AI regulations
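The data quality checks above can be automated before any data reaches an AI model. The following is a minimal sketch, assuming records arrive as Python dictionaries; the field names and rules are hypothetical placeholders, not a prescribed schema.

```python
# Minimal data-quality gate for records headed into an AI pipeline.
# REQUIRED_FIELDS and the numeric check are illustrative assumptions.

REQUIRED_FIELDS = {"client_id", "timestamp", "value"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in a single record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    value = record.get("value")
    if value is not None and not isinstance(value, (int, float)):
        issues.append("'value' is not numeric")
    return issues

def validate_batch(records: list[dict]) -> dict:
    """Summarize issues across a batch so flawed data never reaches the model."""
    report = {"total": len(records), "clean": 0, "issues": []}
    for i, rec in enumerate(records):
        problems = validate_record(rec)
        if problems:
            report["issues"].append((i, problems))
        else:
            report["clean"] += 1
    return report
```

In practice, a dedicated data management platform would replace this hand-rolled gate, but the principle is the same: reject or quarantine records that fail explicit, documented rules.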
3. Implement rigorous testing and validation processes.
Develop comprehensive testing programs for AI tools to ensure they meet regulatory standards. Efforts should include:
- Conducting regular stress tests of AI systems to assess their performance under various scenarios
- Implementing continuous monitoring processes to detect and address potential issues promptly
- Collaborating with partners to develop industry-wide testing standards and best practices
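Scenario-based stress testing, as described above, can be sketched in a few lines. In this illustration the model, scenarios, and acceptance bounds are all hypothetical stand-ins; a real program would use the firm’s deployed models and regulator-informed scenarios.

```python
# Sketch of scenario-based stress testing: run the model against named
# scenarios and record whether outputs stay within expected bounds.

def score(features: dict) -> float:
    """Stand-in for a deployed AI model; here, a simple weighted sum."""
    return 0.5 * features["volatility"] + 0.5 * features["exposure"]

SCENARIOS = {
    "baseline":     {"volatility": 0.2,  "exposure": 0.3},
    "market_shock": {"volatility": 0.9,  "exposure": 0.8},
    "low_activity": {"volatility": 0.05, "exposure": 0.1},
}

def stress_test(model, scenarios: dict,
                lower: float = 0.0, upper: float = 1.0) -> dict:
    """Evaluate the model per scenario and flag out-of-bounds outputs."""
    results = {}
    for name, features in scenarios.items():
        output = model(features)
        results[name] = {"output": output,
                         "in_bounds": lower <= output <= upper}
    return results
```

Keeping the results as structured records makes it straightforward to retain them as documentation for regulatory review.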
4. Focus on transparency and explainability.
You can also address the “black box” problem by prioritizing explainable AI (XAI) techniques with your partners. Efforts should include:
- Working with technology partners to develop AI models that offer understandable insights
- Creating clear documentation of AI decision-making processes for regulatory review
- Implementing tools that can generate human-readable explanations of AI-driven decisions
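For simple, inherently interpretable models, a human-readable explanation can be generated directly from the model’s weights. The sketch below assumes a linear score with illustrative feature names and weights; more complex models would require dedicated XAI techniques such as feature-attribution methods.

```python
# Sketch of a human-readable explanation for a linear model's decision.
# Feature names and WEIGHTS are illustrative assumptions, not a real model.

WEIGHTS = {
    "credit_utilization": -0.6,
    "account_age_years":   0.3,
    "recent_inquiries":   -0.4,
}

def explain(features: dict, weights: dict = WEIGHTS) -> list[str]:
    """Rank each feature's contribution and phrase it for a human reviewer."""
    contributions = {name: weights[name] * features[name] for name in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    lines = []
    for name, contrib in ranked:
        direction = "raised" if contrib > 0 else "lowered"
        lines.append(f"{name} {direction} the score by {abs(contrib):.2f}")
    return lines
```

Output like this, archived alongside each decision, is exactly the kind of documentation a regulator can review without needing access to the model internals.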
5. Invest in talent development.
Your partners likely won’t be able to carry the whole load in terms of managing your new AI-driven solutions. You can address your own talent gap by combining partner resources with new hiring and training practices. Efforts should include:
- Collaborating with staffers, educators, and others to create pipelines for AI talent
- Developing comprehensive training programs to upskill existing employees
- Connecting with partners who can provide access to specialized expertise
Best Practices for Ongoing Compliance with AI Regulations
You may wish to continue your work with your AI partners as time goes on; or, you may find you have the right internal skills and resources to proceed on your own. Either way, maintaining compliance with AI regulations isn’t a given—it means staying agile as the regulatory landscape changes and addressing issues as they arise during regular operations. Consider the following techniques:
- Continuously monitor and adapt. Stay informed about regulatory changes and update AI systems accordingly.
- Maintain detailed documentation. Keep detailed records of AI development, testing, and decision-making processes to facilitate regulatory reviews.
- Conduct regular audits. Perform periodic assessments of AI systems to ensure ongoing compliance and effectiveness.
- Foster a culture of compliance. Ensure that all employees understand the importance of AI compliance and their role in maintaining it.
- Engage with regulators directly. Regulators want firms like yours to succeed as much as they want them to remain compliant. Maintain open lines of communication with the SEC and other regulatory bodies to stay ahead of regulatory requirements and drive your own success.
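Continuous monitoring, the first technique above, often starts with a simple drift check: compare the statistics of live inputs against a recorded baseline and flag shifts for human review. The threshold below is an arbitrary illustration; real thresholds should come from the firm’s risk framework.

```python
# Sketch of a drift check for ongoing AI monitoring: flag a feature
# for review when its mean shifts beyond a (hypothetical) threshold.

from statistics import mean

def drift_check(baseline: list[float], current: list[float],
                threshold: float = 0.1) -> dict:
    """Compare live input means to a baseline and flag material shifts."""
    base_mean, cur_mean = mean(baseline), mean(current)
    shift = abs(cur_mean - base_mean)
    return {
        "baseline_mean": base_mean,
        "current_mean": cur_mean,
        "shift": shift,
        "drifted": shift > threshold,
    }
```

Checks like this can run on a schedule, with flagged results feeding the audit trail described above, so that model updates and regulatory reviews are driven by recorded evidence rather than ad hoc judgment.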
Conclusion: Future Trends in AI Compliance
Future AI compliance in finance will likely involve greater regulatory scrutiny, enhanced data privacy regulations, and a greater focus on ethical AI. Financial firms that effectively leverage partnerships to address challenges like data quality, talent acquisition, explainability, and regulatory uncertainty will be well-positioned as these changes arise. Consider how partnerships can help you meet these changing requirements and thrive as you prepare for a future with AI at your firm.
Partner with Option One Technologies
Option One Technologies specializes in SaaS, cybersecurity, and related solutions for financial firms. A partnership with Option One can be the springboard to your successful future with AI. Contact one of our experts today to learn more.