Financial services firms are rapidly integrating artificial intelligence (AI) into their operations. This influx of technology creates new efficiencies and has the potential to enable innovative new financial products. However, AI also brings challenges and potential risks around security and privacy. This is why the SEC’s 2025 examination priorities focus on AI.
“While perhaps not quite yet a perfect storm, there’s certainly one brewing around AI,” said former SEC enforcement head Gurbir Grewal, according to an article by Corporate Compliance Insights. “It is incumbent on each of us to make sure it does not come to pass and that investors are not harmed by noncompliance with the securities laws when it comes to this new technology.”
Here, we will explore the SEC’s AI examination priorities and what they mean for financial services firms looking to artificial intelligence as a driver of innovation.
Understanding the SEC’s AI Examination Priorities
The Securities and Exchange Commission (SEC) is responsible for regulating and overseeing the securities industry in the United States. This includes enforcing laws, conducting examinations, and providing guidance to market participants.
In recent years, the SEC has recognized the increasing use of AI in financial services. It is being used in areas such as trading algorithms, robo-advisors, and risk management systems. While these advancements can bring great benefits to investors, they also bring potential risks.
How Will the SEC Examine AI?
According to a report published on the Harvard Law School Forum on Corporate Governance, one of the SEC’s primary focuses will be on examining investment advisors. However, within this category, there will be an increased focus on compliance among advisors that integrate AI.
“Examinations of advisers that integrate AI into advisory operations will look in-depth into compliance policies and procedures, as well as disclosures to investors, related to their use of AI,” the report said.
Specifically, the division will make four primary assessments in this area:
- Ensuring representations of advisors are fair and accurate: The SEC intends to determine if advisors are informing investors that they are using AI and not “AI-washing.”
- Ensuring controls that are in place are consistent with disclosures to investors: The division will ensure the controls and security measures advisors have in place for AI in their operations line up with what they tell clients and other stakeholders.
- Ensuring AI tools produce results consistent with advisors’ stated investment profiles and strategies: This is to ensure advisors are producing AI-based recommendations that are consistent with their stated approaches to investing.
- Ensuring controls are in place so AI-produced results are consistent with regulatory obligations: Advisors must implement measures to only generate compliant recommendations using their AI tools.
Key Areas of Focus for the SEC
To accomplish these measures, the SEC has proposed a rule that would require advisors to neutralize conflicts of interest posed by the use of AI technology. The SEC will have a similar focus on other financial services entities, including broker-dealers.
As such, the SEC’s 2025 AI examination priorities will focus on four key areas related to AI:
- Data Governance: The agency is concerned with how financial firms are collecting, storing, and using data in their AI systems. This includes issues such as data quality, security, and privacy.
- Algorithmic Trading: With the rise of algorithmic trading in the financial industry, the SEC is closely monitoring how these systems are being developed, tested, and implemented. They will be examining for potential market manipulation or other violations of securities laws.
- Risk Management: The SEC is also looking at how firms are managing the risks associated with AI, such as operational and cyber risks. They will be evaluating firms’ policies and procedures for identifying and addressing potential risks.
- Fairness and Transparency: As with any new technology, there are concerns about fairness and transparency in AI systems. The SEC will be examining whether firms are implementing appropriate controls to ensure their AI systems are not biased or discriminatory.
As the SEC ramps up its focus on AI, firms must pay attention to how they are utilizing this technology.
What the SEC’s AI Examination Priorities Mean for Financial Services Firms
Artificial intelligence could significantly transform the financial services industry by improving financial advice, accelerating product development, and increasing efficiency. However, if it is implemented or used improperly, it could also introduce risk or result in organizations running afoul of regulations.
The primary purpose of the SEC’s focus on AI is to ensure financial firms that use artificial intelligence live up to their fiduciary duty to investors and clients regarding investment strategies, financial products, and account types.
In short, organizations that implement AI in their advisory services can expect increased scrutiny from the SEC.
“New for 2025, the division’s priorities state that, if an adviser incorporates artificial intelligence into its advisory operations, the division may look ‘in-depth’ at the adviser’s compliance policies and procedures and disclosures related to the use of artificial intelligence,” says a report published in Bloomberg Law.
How Firms Can Prepare for the SEC’s AI Examination Priorities
The SEC’s focus areas for 2025 are similar to those of previous years, save for the focus on AI and other technology-enabled services. Nonetheless, the SEC has taken an aggressive stance toward organizations that do not meet compliance requirements.
For this reason, all firms should increase their focus on due diligence, even if they met compliance requirements in previous years.
“The division’s overall focus on investment advisers to private funds and protecting retail investors suggests the division will continue to be aggressive in identifying deficiencies among broker-dealers and investment advisers and referring cases to the Division of Enforcement,” the Bloomberg Law article says.
Firms that have recently implemented AI into their advisory operations, as well as those that are planning new AI implementations, should be prepared for more direct attention from the division.
The following are some critical steps firms can take to prepare:
Developing Robust Compliance Policies
Firms need to create comprehensive compliance policies that are tailored specifically for AI implementation. This involves not only adhering to existing regulations but also anticipating potential issues that AI systems might introduce.
By documenting clear guidelines on how AI should be used in various advisory processes, firms can align their operations with regulatory expectations and be prepared for heightened AI examination priorities from regulators.
Ensuring Adequate Controls Around AI Systems
Implementing robust control measures around AI systems is essential. This includes establishing checks and balances to prevent unauthorized access and usage of AI models. Having a structured process for monitoring AI activities will help firms ensure that their AI tools operate within the boundaries of compliance requirements. Regular audits and system evaluations can detect and remedy deviations before they escalate.
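As one illustration of such a control layer, a firm might gate every call to an AI model behind an authorization check and record each invocation for later review. The sketch below is a minimal, hypothetical example: the role names, policy, and `dummy_model` stand-in are assumptions, not a prescribed design.

```python
from datetime import datetime, timezone

# Hypothetical set of roles permitted to invoke the firm's AI tools
AUTHORIZED_ROLES = {"advisor", "compliance_officer"}

audit_log = []  # in practice, an append-only, tamper-evident store

def invoke_model(user, role, prompt, model_fn):
    """Gate a model call behind a role check and log the attempt either way."""
    if role not in AUTHORIZED_ROLES:
        audit_log.append({"time": datetime.now(timezone.utc).isoformat(),
                          "user": user, "allowed": False, "prompt": prompt})
        raise PermissionError(f"{user} ({role}) is not authorized to use AI tools")
    result = model_fn(prompt)
    audit_log.append({"time": datetime.now(timezone.utc).isoformat(),
                      "user": user, "allowed": True,
                      "prompt": prompt, "result": result})
    return result

# Stand-in for a real model call
def dummy_model(prompt):
    return f"recommendation for: {prompt}"
```

Keeping a record of denied attempts as well as successful calls gives auditors visibility into both usage and attempted misuse.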
Creating Detailed Documentation
Documentation is a critical aspect of managing AI systems. Firms should maintain thorough records detailing their AI models, the data sources they utilize, and decision-making processes. This transparency fosters trust and allows for easier inspection by regulatory bodies. Comprehensive documentation also aids internal understanding and allows stakeholders to grasp the intricacies of AI operations.
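One lightweight way to keep such records is a structured “model card” per AI system, capturing the model’s provenance, data sources, and intended use in a machine-readable form. The fields and values below are illustrative assumptions, not a regulatory template.

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class ModelCard:
    """Illustrative record of an AI model's provenance and intended use."""
    name: str
    version: str
    data_sources: List[str]
    intended_use: str
    known_limitations: List[str] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize for filing alongside the model's deployment records
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entry
card = ModelCard(
    name="portfolio-recommender",
    version="2.3.1",
    data_sources=["internal client profiles", "market data feed"],
    intended_use="suggest rebalancing options for advisor review",
    known_limitations=["not validated for options strategies"],
)
```

Because each card is plain JSON, it can be versioned alongside the model itself and handed to examiners on request.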
Establishing Governance Structures
A well-defined governance structure is vital for overseeing AI activities. Designating specific teams or managers to take responsibility for AI oversight can streamline processes and ensure accountability. This governance should focus on both strategic alignment with business goals and adherence to compliance obligations, thus minimizing risks associated with AI usage.
Enhancing AI-related Disclosures
Clear and comprehensive disclosures about AI usage in advisory operations are fundamental to meeting the SEC’s AI examination priorities. Firms should transparently communicate with clients how AI contributes to their services and the associated risks. Keeping clients informed not only enhances trust but also ensures that firms fulfill their fiduciary responsibilities.
- Provide precise details about the implementation and function of AI in services.
- Clearly state potential risks, limitations, and benefits of AI deployment.
- Regularly update regulatory filings to reflect AI integration advancements.
Implementing AI Risk Management Practices
Risk management is at the heart of AI deployment in the financial sector. Firms should regularly conduct risk assessments to identify and address potential vulnerabilities in their AI systems. Training programs for employees on the proper use of AI can mitigate misuse and cultivate a culture of compliance.
- Regularly evaluate the performance and decisions of AI models.
- Integrate training initiatives focusing on AI’s risks and best practices.
- Develop protocols for promptly addressing any AI-related incidents.
Enhancing Cybersecurity Measures
Significant emphasis should be placed on cybersecurity to safeguard AI systems. By implementing stringent data protection strategies, firms can secure the integrity of their AI tools and the sensitive data they handle.
- Establish robust access controls and authentication methods.
- Continuously enhance data protection measures for all AI systems.
Conducting AI Audits and Testing
Routine audits and testing are imperative to verify that AI models operate correctly and impartially. These procedures help ensure AI-driven recommendations are aligned with clients’ investment profiles and strategies. Additionally, bias testing can be used to validate the fairness of AI outputs.
- Monitor AI performance consistently to detect and rectify biases.
- Ensure that AI recommendations are in harmony with the client’s financial objectives.
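A basic automated check of this kind compares the risk level of each AI-generated recommendation against the client’s stated profile and flags any mismatch for human review. The ordinal risk scale below is a made-up example for illustration.

```python
# Hypothetical ordinal risk scale; a real firm would use its own taxonomy
RISK_LEVELS = {"conservative": 1, "moderate": 2, "aggressive": 3}

def flag_mismatches(recommendations, client_risk):
    """Return recommendations whose risk exceeds the client's stated tolerance."""
    limit = RISK_LEVELS[client_risk]
    return [r for r in recommendations if RISK_LEVELS[r["risk"]] > limit]

# Example AI output for a client with a "moderate" profile
recs = [
    {"asset": "bond fund", "risk": "conservative"},
    {"asset": "leveraged ETF", "risk": "aggressive"},
]
flagged = flag_mismatches(recs, "moderate")
```

Running a check like this on every batch of AI output, and routing flagged items to an advisor, is one way to demonstrate that recommendations stay aligned with stated investment profiles.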
Maintaining Consistent Records
Firms should diligently keep records of all AI-driven recommendations and decisions. Documenting the rationale behind AI model choices and any adjustments or reviews undertaken is crucial for accountability and continuous improvement.
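In code, consistent record-keeping can be as simple as appending each recommendation, its rationale, and a timestamp to a write-once log. The JSON-lines format sketched here is one common choice, not a prescribed standard, and the field names are assumptions.

```python
import io
import json
from datetime import datetime, timezone

def record_recommendation(log_file, client_id, recommendation, rationale):
    """Append one AI-driven recommendation and its rationale as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "recommendation": recommendation,
        "rationale": rationale,
    }
    log_file.write(json.dumps(entry) + "\n")
    return entry

# Demonstration against an in-memory buffer; real use would open a
# file in append mode (or write to a dedicated audit store)
buf = io.StringIO()
record_recommendation(buf, "C-1001", "shift 5% to bonds",
                      "model flagged equity overweight")
```

Because each line is an independent JSON object, the log can be grepped, replayed, or loaded into review tooling without parsing the whole file.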
By adhering to these guidelines, financial services firms can reinforce their AI practices, demonstrate commitment to regulatory compliance, and build robust, future-proof processes that satisfy the SEC’s examination requirements.
Build an SEC-Compliant AI Program at Your Firm
Whether your firm is already using AI in an advisory capacity or planning a future implementation, you should take steps now to ensure you stay compliant with SEC regulations and are prepared for increased scrutiny from the division. Following the steps outlined above is a good start, but you can also partner with AI experts to reduce risk and enhance the value of your investment.
To learn more about how you can implement and leverage AI, generate more value from the technology, and ensure compliance, contact us at Option One Technologies today.