Article Open Access

Proxy Advisors Under Artificial Intelligence: Unverified Reasoning in Shareholder Voting Recommendations

Masaki Iwasaki
Published/Copyright: August 14, 2025

Abstract

Proxy advisory firms exert considerable influence in corporate governance by issuing voting recommendations to institutional investors. Despite their growing impact on shareholder decisions and company practices, these firms remain opaque about how their recommendations are formulated. This paper examines how the actual or potential use of AI in proxy advisory services may reshape existing concerns about transparency, undue influence, and conflicts of interest. While AI is currently used in preparatory processes such as information extraction and classification, its deeper integration could exacerbate existing concerns and introduce new risks by weakening institutional accountability. Still, AI offers benefits in efficiency and evaluative consistency, making it necessary to consider how these advantages can be harnessed responsibly within proxy advisory functions. This paper argues that effective AI governance requires both internal and external safeguards, and sets out institutional conditions for the responsible integration of AI into proxy advisory services.

1 Introduction

Proxy advisors (voting recommendation firms) play a pivotal role in corporate governance by providing institutional investors with recommendations on how to vote at shareholder meetings. These firms influence key corporate decisions, including the election of directors, executive compensation, and governance policies, thereby exerting a substantial impact on the management structures of publicly listed companies around the world. Given the magnitude of their influence in the market, it is essential to scrutinize the methodologies behind their recommendations and to assess whether these recommendations are produced in an objective and unbiased manner. Despite their prominence, however, the process by which proxy advisors formulate their advice remains opaque, giving rise to concerns regarding accountability, undue influence, and conflicts of interest. This paper focuses on one particularly salient aspect of those concerns: the actual or potential use of artificial intelligence (AI) in proxy advisory services and the implications such use may have for corporate governance.

Despite the significant influence that proxy advisors exert over institutional investors’ voting behavior, there is extremely limited disclosure regarding how their voting recommendations are formulated. The evaluation criteria, weighting, and operational processes underlying their advice remain largely opaque to external observers. Moreover, there is little empirical evidence demonstrating that these recommendations consistently have a positive impact on shareholder value or corporate strategy, and assessments are divided as to whether such advice aligns with long-term firm performance. In addition, the potential conflicts of interest inherent in the proxy advisory business model – arising from the coexistence of advisory and consulting services and from structural incentives whereby the market value of recommendations increases as controversies expand – have raised fundamental doubts about the neutrality and reliability of proxy advice.

In response to these concerns, countries have adopted varying institutional approaches. In the United States, the initial regulatory focus was on institutional investors who use proxy advisors, requiring them to manage conflicts of interest and to independently assess the quality of the advice they receive.[1] In 2020, a rule was introduced to directly regulate proxy advisors by classifying their advice as “solicitations” and mandating conflict-of-interest disclosures and opportunities for companies to respond. However, part of this rule was rolled back in 2021, and voluntary practices have since taken precedence. In contrast, the European Union introduced a legal framework under the 2017 Shareholder Rights Directive II (SRD II), which requires proxy advisors to disclose their methodologies and evaluation criteria and to ensure transparency in managing conflicts of interest.[2] This has resulted in a dual-layered system combining oversight by the European Securities and Markets Authority (ESMA) and industry self-regulation through the Best Practice Principles Group (BPPG).

In Asia, regulatory responses vary.[3] In Japan and South Korea, the main instruments are soft-law guidelines based on national stewardship codes, whereas in India, a relatively comprehensive legal framework has been established under the Securities and Exchange Board of India (SEBI), including a registration system, comment procedures, and a grievance mechanism. While the regulatory landscape differs widely across regions, the question of how proxy advisors should be institutionally positioned and regulated remains a shared policy challenge.

In recent years, the introduction of AI has begun to exert a new layer of influence on the existing institutional framework for proxy advisors. At present, AI is primarily used in preliminary stages such as information extraction, classification, and scoring (ESMA 2023), but there is a growing possibility that it will eventually become involved in the formulation of proxy recommendations themselves. Such use entails serious risks – including the black-boxing of decision-making processes, the acceleration of both unjustified convergence and divergence in proxy advice, and the institutional embedding of conflicts of interest through training on historical data – which could exacerbate preexisting concerns regarding opacity, undue influence, and conflicts of interest. Moreover, AI introduces additional challenges by obscuring who is actually involved in the judgment process and where accountability lies, thereby potentially undermining the institutional basis for corporate actors to contest recommendations or demand explanations.

Despite these potential concerns, AI offers institutional advantages from two perspectives: accelerating information processing and enhancing the consistency of evaluative judgments. AI can efficiently process large volumes of unstructured data in a short time, enabling faster analysis of complex proposals and legal documents that previously required manual review. It also allows for the application of pre-formulated evaluation logics, which can reduce individual-level variability and arbitrariness, thereby facilitating more impartial assessments. Accordingly, the introduction of AI should not be categorically rejected, but rather reconsidered in terms of how its characteristics – such as efficiency and consistency – can be harnessed within a regulatory framework. To this end, it is essential to ensure several institutional conditions, including transparency in processing, supervisory access to the underlying evaluation logic, safeguards against the structural reproduction of conflicts of interest, and the clear attribution of responsibility. As long as these requirements are met, AI can be reasonably incorporated as a core component of proxy advisory services.

The structure of institutional requirements for the use of AI is clearly reflected in the ESG Rating Regulation adopted by the European Union in 2024.[4] This regulation requires detailed disclosure not only of the structure of evaluation models, input data, and conflict-of-interest management policies, but also of the extent to which AI technologies are used. It thus demonstrates a clear institutional commitment to ensuring transparency and explainability in AI-based evaluations of ESG-related information. Importantly, these principles are not unique to ESG assessments; rather, they represent a general regulatory framework for AI-assisted decision support, and should be considered equally applicable to proxy advisory services.

To make such institutional requirements function effectively in practice, it is essential to adopt complementary measures that reflect the specific characteristics of AI technology. In particular, when the judgment formation process is automated by algorithms, its structure is inherently difficult to observe from the outside. Even when model architectures and input data are disclosed, there are institutional and technical limits to third-party evaluations of the validity of outputs or the presence of underlying biases. Given this black-box nature of AI, the establishment of whistleblower systems – including protection mechanisms and monetary incentives – becomes indispensable for detecting and correcting misconduct or bias in AI-generated judgments.

This is especially critical in the initial phases of model design, data selection, and the setting of weightings, where discretionary choices or organizational pressure may be introduced, and internal reports by those involved can play a decisive role. In this regard, a notable example is the proposed AI Whistleblower Protection Act, introduced as a bill in the U.S. Senate in 2024.[5] The bill explicitly protects employees and contractors who report violations of AI-related laws or substantial risks – including not only disclosures to external authorities but also internal reports to supervisors or designated compliance personnel. In the age of AI, whistleblower systems should be recognized not as optional safeguards, but as essential institutional infrastructure for ensuring the integrity of AI-supported advisory functions, including proxy voting recommendations and related governance services.

This paper is structured as follows. Section 2 outlines key debates surrounding the business practices and influence of proxy advisors. Section 3 analyzes regulatory approaches in the United States, the European Union, and selected Asian jurisdictions. Section 4 examines the current and potential use of AI in proxy advisory services, emphasizing how it may both deepen existing concerns and create novel institutional challenges. Section 5 sets out institutional preconditions for the responsible use of AI, highlighting the role of whistleblower systems as a necessary complement to external safeguards. Section 6 concludes.

2 Ongoing Debates on the Issues with Proxy Advisors

Proxy advisory firms hold a significant position in corporate governance by shaping institutional investors’ voting decisions on a global scale. Their influence is substantial and can materially affect corporate decision-making, yet they operate with minimal transparency and regulation, raising numerous concerns. The debates surrounding these firms center on three key issues: the opacity of their recommendation process, the potential for undue influence over corporate decision-making, and the conflicts of interest inherent in their business models. Although current debates rarely address the use of artificial intelligence explicitly, these are precisely the areas where AI adoption may exacerbate existing problems. This section reviews the prior literature on these issues to establish a baseline understanding of the structural concerns that AI technologies may further intensify. While most studies focus on the U.S. market, many of the core findings are applicable to, or at least informative for, other jurisdictions as well.

2.1 Lack of Transparency in the Recommendation Process

One of the most persistent criticisms of proxy advisory firms is their lack of transparency regarding how voting recommendations are formulated. While institutional investors rely on these recommendations for proxy voting decisions, the specific methodologies used to assess governance issues remain unclear. Proxy advisors provide benchmark voting policies that specify general criteria for evaluating common shareholder meeting proposals, but they are not required to disclose in detail how individual recommendations are determined, including the weight assigned to different governance factors or the decision-making models used. This limited disclosure raises broader concerns about the reliability and objectivity of proxy advice, making it difficult to assess whether recommendations genuinely contribute to improving corporate governance and shareholder value.

This issue is further complicated by the fact that proxy advisors do not merely aggregate investor preferences but also exercise considerable discretion in shaping governance decisions. Thomas et al. (2012) suggest that proxy advisors act as intermediaries that coordinate institutional investors’ voting behavior, thereby improving corporate governance. However, Larcker et al. (2013b) argue that proxy advisors go beyond mere aggregation, instead imposing their own governance standards through their recommendations. This distinction is crucial: if proxy advisors actively influence governance structures rather than simply reflecting investor sentiment, the lack of transparency surrounding their methodologies becomes even more problematic. Without clear disclosure, it is unclear whether their recommendations genuinely serve shareholder interests or instead introduce governance norms driven by the advisors’ own biases and incentives.

Researchers have sought to evaluate whether proxy advisor recommendations contribute to shareholder value, as proxy advisors themselves do not systematically disclose assessments of their recommendations’ effectiveness. The findings, however, remain inconclusive. Some studies suggest that proxy advisors assist institutional investors in improving governance, leading to positive shareholder outcomes. For instance, Alexander et al. (2010) show that recommendations by Institutional Shareholder Services (ISS) in contested director elections tend to align with positive shareholder returns, indicating that proxy advisors may enhance governance effectiveness in high-stakes scenarios. Similarly, Dey et al. (2024) find that firms receiving low say-on-pay support and subject to ISS scrutiny exhibit increased and sustained shareholder engagement, and that markets respond positively when such engagement is anticipated, suggesting that ISS influences governance not only through standardized recommendations but also by promoting firm-specific dialogue with investors.

However, other studies raise concerns about the potential misalignment between proxy advisor guidelines and long-term corporate performance. Larcker et al. (2013a) find that stock option repricing plans conforming to ISS recommendations are associated with lower returns, diminished operating performance, and increased employee turnover, indicating that proxy advisor preferences may not always align with optimal firm strategies. Additionally, Daines et al. (2010) conclude that ISS governance ratings do not reliably predict future operating or stock returns, casting doubt on the extent to which proxy advisor recommendations serve as reliable indicators of shareholder value. Further, Larcker et al. (2015) report that companies adjusting executive compensation programs to align with ISS’s say-on-pay guidelines experience negative shareholder reactions, suggesting that rigid adherence to proxy advisor standards may sometimes conflict with market preferences.

These mixed findings underscore the need for further research into the mechanisms by which proxy advisors formulate recommendations and how their methodologies influence long-term corporate performance. Given the nature of their services, a natural question arises: as generative AI advances, to what extent are proxy advisors integrating AI into their decision-making processes, and how are the associated risks being managed? While studies have explored the consequences of proxy advisory guidance, little attention has been given to the role of AI in shaping these recommendations. Indeed, similar concerns have already emerged in other advisory services, such as credit rating agencies, where AI-driven decision-making has raised issues of transparency, fairness, and accountability, prompting regulatory scrutiny and debate (ESMA 2024). Given these precedents, it is essential to examine whether proxy advisory services face comparable risks. Section 4 explores these questions in greater detail.

2.2 Undue Influence Over Corporate Decision-Making

Proxy advisory firms are designed to influence institutional voting behavior by providing recommendations on governance matters. The critical question is not whether they exert influence – this is their intended function – but rather how extensive that influence is and whether it reflects sound governance principles or creates distortions in corporate decision-making. If proxy advisors simply provide research that informs independent investor decisions, their role may be unproblematic. However, empirical evidence suggests that their recommendations have a substantial impact on voting outcomes, often in ways that raise concerns about investor autonomy and the broader implications for corporate governance.

Empirical research has sought to quantify the extent of this influence. Brav et al. (2022) find that a negative ISS recommendation decreases institutional investors’ support for management proposals by 50.7 percentage points, compared with a 1.8 percentage point decline among retail investors. Similarly, Malenko and Shen (2016) estimate that a negative ISS recommendation on say-on-pay proposals leads to a 25 percentage point drop in shareholder support. Further studies report similar trends. Based on aggregate voting data, Copland et al. (2018) show that institutional investors are substantially more likely to vote in favor of management when ISS issues a favorable recommendation: say-on-pay proposals receive 27.7 percentage points more support, equity-plan proposals 17.3 points more, and uncontested director elections 13.8 points more, compared to cases in which ISS recommends against. These findings indicate that proxy advisors do more than provide advisory input – they significantly shape voting patterns among institutional investors. The key issue is whether this influence results from the intrinsic quality of their recommendations or from other institutional or regulatory factors that distort independent decision-making.

Ideally, proxy advice would serve as one input among many, allowing institutional investors to make independent, well-informed voting decisions. However, in practice, some investors adopt proxy recommendations with minimal scrutiny, raising concerns about over-reliance on these firms. Rose (2021) defines “robo-voting” as the practice of institutional investors automatically voting in near-total alignment with proxy advisor recommendations, typically without independent analysis. He finds that in 2020, 114 institutional investors engaged in this practice – 86 % with ISS and 14 % with Glass Lewis – collectively managing over $5 trillion in assets.[6] Similarly, Iliev and Lowry (2015) find that over 25 % of mutual funds vote in near-complete alignment with ISS recommendations across all portfolio firms, suggesting that some funds may rely heavily on proxy advisors instead of conducting independent analysis. This heavy reliance on proxy advisors suggests that in some cases, their recommendations are not merely guiding investor judgment but effectively substituting for it.

Proxy advisors’ influence extends beyond shareholder voting to shape corporate decision-making directly. Companies, anticipating negative proxy advisor recommendations, may preemptively alter governance practices to align with proxy firm preferences, even when these changes may not be optimal for long-term performance.[7] Nowhere is this dynamic more evident than in executive compensation decisions. Larcker et al. (2015), cited earlier, reveal firms’ preemptive actions to adjust pay structures in anticipation of negative proxy advisor recommendations – highlighting how proxy advisor influence can shape governance decisions ex ante, not merely through voting outcomes.

Edmans et al. (2023) find that 53 % of directors report having offered lower CEO pay than they otherwise would have, specifically to avoid the risk of a negative recommendation from a proxy advisor. Jochem et al. (2021) find that proxy advisor influence contributes to reduced variation in CEO pay across firms, potentially limiting firms’ ability to tailor compensation to their specific needs and weakening incentive structures. Cabezon (2025) similarly finds that proxy advisor pressure has led to standardized pay structures, which are associated with lower shareholder value. These findings raise concerns that proxy advisors may not only influence voting outcomes but also act as de facto standard setters, dictating governance policies in a way that constrains managerial discretion and limits firm-specific flexibility.

Institutional investors have a natural incentive to minimize the costs associated with researching and analyzing governance issues for proxy voting. This has created a structure in which proxy advisors serve as a complementary service, helping investors navigate complex governance decisions. However, over-reliance on these services becomes problematic when proxy advisor recommendations do not necessarily contribute to shareholder value. Empirical research suggests that there may be issues with the quality of proxy advice, raising concerns about whether their influence aligns with sound corporate governance principles. Given the extent of their impact, further scrutiny is necessary to assess how proxy advice is generated. One such avenue of inquiry is the potential use of AI in formulating recommendations. If proxy advisors rely on AI models that systematically produce biased or erroneous outputs – without sufficient human oversight to correct these flaws – this could lead to distortions in governance practices across the market.

2.3 Conflicts of Interest

A key concern surrounding proxy advisory firms is the potential for conflicts of interest in their business models. ISS, beyond providing proxy voting recommendations, offers consulting services on governance matters – including executive compensation and shareholder engagement – to corporate issuers through a separate subsidiary. This dual role raises concerns that ISS may issue favorable recommendations to retain or attract corporate clients, compromising its objectivity. To address this, ISS has implemented firewalls to separate its proxy advisory and corporate consulting entities (ISS 2023). However, the effectiveness of these safeguards remains unclear, as independent oversight is limited. Law firms and other professional service providers handling sensitive client relationships also use firewalls, but they operate under strict ethical rules and external regulation. In contrast, proxy advisors lack comparable industry-wide standards or oversight, making it difficult to assess whether their firewalls effectively prevent undue influence. Given ISS’s significant role in corporate governance, greater transparency in managing these conflicts is essential to ensure the integrity of its recommendations.

Theoretical and empirical studies highlight the existence and impact of conflicts of interest within proxy advisory firms. Ma and Xiong (2021) employ theoretical modeling to demonstrate that such conflicts can distort voting recommendations and subsequently reduce firm value. Their analysis suggests that when proxy advisors have business relationships with the firms they evaluate, their recommendations may be biased, leading to suboptimal governance outcomes. Empirically, Li (2018) examines voting recommendations and finds that ISS becomes less favorable toward firms that are likely to be corporate clients – proxied by large companies – when Glass Lewis, a competing proxy advisor, initiates coverage of the same firms. The author interprets this behavioral shift as evidence that commercial incentives may compromise the independence of proxy advice.

Beyond these conflicts, Malenko et al. (2025) argue that proxy advisors may have economic incentives to create controversy, as closely contested votes increase the perceived value of their recommendations and, consequently, the demand for their services. If proxy advisors systematically frame governance issues in ways that amplify division among shareholders, they may not only introduce bias into voting outcomes but also drive corporate governance changes that serve their business interests rather than those of investors. Hayne and Vance (2019) further suggest that proxy advisors do not merely function as information intermediaries but act as de facto standard setters, as their voting guidelines compel widespread corporate conformity. This creates an environment in which firms feel pressured to align with proxy advisor expectations, often prioritizing compliance over governance strategies tailored to their unique circumstances. The combination of agenda-setting power and financial incentives to cultivate controversy raises fundamental questions about whether proxy advisory firms serve as neutral governance analysts or as profit-driven entities shaping corporate policy to sustain their influence.

Concerns over conflicts of interest have become more pressing with the increasing reliance on AI in decision-making processes. If proxy advisors utilize AI trained on past recommendations, there is a risk that biases – if present – stemming from business relationships or other incentives could be systematically reinforced. Moreover, AI’s opacity makes it difficult to assess whether recommendations are genuinely objective or whether commercial interests – deliberately or inadvertently – shape outcomes. As AI plays an increasingly prominent role in governance-related decision-making, ensuring that its use in proxy advisory services does not introduce or exacerbate conflicts of interest remains a critical issue.

3 Comparative Regulatory Frameworks for Proxy Advisors

Building on the concerns identified in Section 2, this section examines how different jurisdictions have sought to address the structural problems surrounding proxy advisors – namely, opacity, undue influence, and conflicts of interest – through legal and regulatory frameworks. It focuses on the United States, the European Union, and selected Asian jurisdictions (Japan, South Korea, and India), offering a comparative assessment of their respective approaches to the oversight of proxy advisors. Since these frameworks were not specifically designed with AI in mind, the institutional risks associated with AI adoption require separate consideration, which will be addressed in Sections 4 and 5. Understanding the scope and limitations of existing regulations is useful for considering how to regulate AI use by proxy advisors.

3.1 United States

Since proxy advisors are typically retained by institutional investors acting as investment advisers, it is useful to begin with the regulatory framework that governs how investment advisers may use proxy advisors. Historically, regulatory attention by the Securities and Exchange Commission (SEC) has focused first on how investment advisers fulfill their duties when relying on proxy advisors, rather than on proxy advisors as independent subjects of regulation.

3.1.1 Investment Adviser Regulations

In the United States, proxy advisory firms have become an integral component of the corporate governance framework, particularly by supporting institutional investors in fulfilling their proxy voting responsibilities. The growth of this industry parallels the increasing dominance of institutional shareholders, who collectively held approximately 70 percent of publicly traded equity in the United States by 2016.[8] A key regulatory development occurred in 1988 when the U.S. Department of Labor issued interpretive guidance – commonly referred to as the Avon Letter – stating that proxy voting is part of a fiduciary duty under the Employee Retirement Income Security Act (ERISA) (Tuch 2019, p. 1469).[9] This interpretation increased the legal and administrative expectations placed on institutional investors and contributed to growing demand for external proxy voting support.

In 2003, the U.S. SEC further reinforced the fiduciary nature of proxy voting by adopting two complementary rules. First, Rule 206(4)-6 under the Investment Advisers Act of 1940 required investment advisers with voting authority to implement written policies and procedures designed to ensure that proxies are voted in the best interests of their clients, including processes for managing conflicts of interest.[10] The SEC explained that advisers could address such conflicts by relying on the recommendations of an independent third party, such as a proxy advisory firm.[11] Second, the SEC adopted a rule under the Investment Company Act requiring mutual funds and other registered investment companies to disclose both their proxy voting policies and their actual voting records.[12]

Building on these developments, SEC staff in 2004 issued no-action letters to Egan-Jones and Institutional Shareholder Services (ISS), suggesting that investment advisers could rely on proxy advisors as independent third parties for the purpose of managing conflicts of interest, as contemplated under Rule 206(4)-6.[13] A decade later, in 2014, the SEC clarified that advisers nonetheless remain responsible for independently evaluating proxy advice rather than relying on it mechanically.[14] Over time, concerns grew about the adequacy of this interpretive framework, particularly as proxy advisors came to play a more prominent role in the governance process. Reflecting a shift toward a more active regulatory posture, SEC staff withdrew the 2004 letters in 2018 – signaling the Commission’s departure from its prior hands-off approach and setting the stage for a formal reassessment of how proxy voting advice should be regulated.

3.1.2 Proxy Advisor Regulations

For many years, proxy advisory firms operated with minimal regulatory oversight, despite their growing influence over corporate governance and shareholder voting. For example, although the SEC has maintained since at least 2010 that proxy voting advice may constitute a “solicitation” under Section 14 of the Securities Exchange Act of 1934, proxy advisory firms interpreted their activities as exempt from the associated rules (Grabar and Wang 2021). Accordingly, they did not file proxy solicitation materials – such as a proxy statement or other soliciting communications – with the SEC, as would typically be required for those engaging in solicitation.

In 2020, the SEC adopted final rules codifying its interpretation that proxy voting advice by proxy advisors constitutes a “solicitation” under Rule 14a-1(l) of the Securities Exchange Act.[15] The rules also added new conditions to the exemptions in Rules 14a-2(b)(1) and (3), commonly used by proxy advisors to avoid the proxy rules’ filing and information requirements. These conditions, set out in Rule 14a-2(b)(9), require proxy advisors to disclose material conflicts of interest and to establish written policies and procedures to ensure that their advice is provided to the subject company at or before dissemination to clients. They must also offer a mechanism by which clients are notified of any written response the subject company may provide. These pre-review and response mechanisms were intended to give companies an opportunity to review proxy advice before it was provided to investors, and to respond in time for clients to consider both perspectives prior to voting. The rules also amended Rule 14a-9 to clarify that proxy voting advice is subject to the rule’s prohibition on material misstatements or omissions.

In November 2021, the SEC proposed to rescind the pre-review and response requirements, citing concerns about their potential impact on the cost, timeliness, and independence of proxy advice.[16] The requirement to disclose conflicts of interest remained. In supporting the rollback, the SEC pointed to the role of the Best Practice Principles Group (BPPG) – a voluntary industry initiative composed of major proxy advisors such as ISS and Glass Lewis – which promotes transparency and accountability through self-regulation. The SEC viewed these voluntary practices, along with existing market incentives, as sufficient to encourage proxy advisors to engage with subject companies. While proxy voting advice remains classified as a solicitation and subject to Rule 14a-9, the process for issuer engagement is now largely governed by private arrangements between proxy advisors and the companies they evaluate.

Looking ahead, the regulatory treatment of proxy advisors in the United States remains fluid. While proxy voting advice continues to be classified as a form of solicitation, the rollback of the pre-review and response mechanisms has returned much of the engagement process to voluntary practices. Ongoing litigation and diverging views among federal courts suggest a lack of consensus on the appropriate scope of SEC authority in this area (Gillison 2024). As a result, the future direction of regulation may depend not only on the Commission’s policy preferences, but also on judicial developments and evolving industry standards.

3.2 Europe

In Europe, the regulation of proxy advisors has emphasized transparency and disclosure, in contrast to more interventionist approaches such as the short-lived U.S. pre-review regime introduced in 2020. This reflects a broader regulatory philosophy within the European Union that favors proportionality, investor access to information, and supervision through transparency rather than prescriptive control. In 2013, ESMA assessed the proxy advisory market and found no clear evidence of market failure warranting legislative intervention, instead recommending that the industry develop a code of conduct to improve practices around transparency and conflicts of interest (ESMA 2013). These findings informed the design of the provision on proxy advisors within the Shareholder Rights Directive II (SRD II).[17] The directive was adopted in 2017, with implementation by all EU member states required by June 2019. While some countries experienced delays, the majority of member states had transposed the directive into national law by that date (ESMA and EBA 2023). Article 3j of SRD II, titled “Transparency of proxy advisors,” sets out the primary obligations applicable to proxy advisors.

Under Article 3j(1), proxy advisors must publicly disclose the code of conduct they apply and explain how it is implemented. Where they do not comply with a code – whether by not applying one or by departing from specific provisions – they must provide a clear and reasoned explanation. This “comply or explain” disclosure must be made freely available on their websites and updated annually. A commonly referenced benchmark is the Best Practice Principles for Shareholder Voting Research (BPP), developed by the BPPG, discussed previously in Section 3.1.2.

In addition, Article 3j(2) mandates proxy advisors to disclose annually a range of information relevant to the preparation of their research, advice, and voting recommendations, so as to adequately inform clients about the accuracy and reliability of their services.[18] This includes the essential features of the methodologies and models they apply, the main sources of information used, and the procedures they have in place to ensure the quality of their outputs as well as the qualifications of the staff involved. Proxy advisors must also indicate whether and how they take into account national market conditions, legal and regulatory frameworks, and company-specific factors. Furthermore, they are required to disclose the key features of the voting policies they apply in each market, whether they engage in dialogue with the companies they evaluate or with company stakeholders – and if so, the extent and nature of such engagement – and their policies for preventing and managing potential conflicts of interest. This information must be made publicly available on their websites, free of charge, and must remain accessible for at least three years from the date of publication.

Finally, Article 3j(3) requires proxy advisors to identify and promptly disclose to their clients any actual or potential conflicts of interest, or business relationships, that could influence the preparation of their research, advice, or voting recommendations, as well as the actions they have undertaken to eliminate, mitigate, or manage such conflicts.

A central component of the EU framework is the mechanism of monitored self-regulation built around the BPP. Developed in 2014 by the BPPG, the BPP were created in response to recommendations issued by ESMA in its 2013 report. In 2019, the BPP were revised to align with the requirements of SRD II, and an Independent Oversight Committee (IOC) was established. The IOC, composed of representatives of investors, issuers, and academics, conducts annual assessments of signatories’ statements and may issue recommendations for improvement.[19]

While the IOC operates within the context of industry self-regulation, institutional oversight at the EU level is provided through ESMA, which combines private and public mechanisms. ESMA plays a coordinating role, including tracking SRD II implementation and evaluating its effectiveness. In its 2023 report, ESMA acknowledged that the current model of monitored self-regulation appears generally robust, but also identified areas where improvements are needed. The report recommended several improvements to the current framework, including greater specificity and clarity in disclosures, particularly regarding data sources such as ESG inputs; the potential for ESMA to play a role in addressing breaches of codes of conduct; and more effective disclosure of conflicts of interest.[20]

A related but distinct approach can be observed in the United Kingdom. The United Kingdom transposed SRD II into domestic law through the Proxy Advisors (Shareholders’ Rights) Regulations 2019,[21] which entered into force prior to Brexit. These regulations impose transparency obligations on proxy advisors similar to those under Article 3j and designate the Financial Conduct Authority (FCA) as the competent authority for oversight. The FCA has the power to investigate non-compliance and impose sanctions where necessary. Following the UK’s departure from the EU, this framework has remained in place without substantive change, and the FCA continues to monitor compliance through public disclosures and supervisory engagement.[22]

In addition to this statutory framework, the UK Stewardship Code 2020 functions as a soft-law instrument that complements SRD II by promoting higher standards of conduct and transparency. The Code operates on an “apply and explain” basis, requiring signatories not only to adhere to its principles but also to explain how they have applied them in practice.[23] For example, service providers such as proxy advisors are expected to explain how they have identified and responded to market-wide and systemic risks (Principle 4), whether and how they have sought clients’ views and the rationale for their chosen approach (Principle 5), and what internal or external assurance they have obtained in relation to stewardship support and why that approach was chosen (Principle 6). Unlike in the United States, however, proxy advisors in the United Kingdom have historically exerted less influence, in part because strong institutional investor trade groups have assumed many of the functions proxy advisors perform elsewhere (Tuch 2019, p. 1488).

3.3 Selected Asian Jurisdictions

Unlike the European Union, which has adopted a harmonized approach through directives such as SRD II, Asia lacks a unified regulatory framework governing proxy advisors. Regulatory approaches differ substantially across jurisdictions, shaped by each country’s institutional environment and market dynamics. Given this diversity, this section does not attempt to provide a comprehensive survey. Instead, it focuses on three jurisdictions – Japan, South Korea, and India – that illustrate contrasting regulatory strategies toward proxy advisory firms and highlight distinct challenges in ensuring transparency and accountability.

3.3.1 Japan

Japan does not impose binding rules on proxy advisory firms, but their conduct is shaped by the country’s Stewardship Code, which was introduced in 2014 and revised in 2017 and 2020.[24] The Code is a soft-law instrument, but it sets expectations for institutional investors and related actors to enhance corporate governance through responsible engagement.[25] It follows a “comply or explain” approach, under which signatories must either adhere to its principles or publicly explain any deviations. Although not legally binding, the Code plays a significant role in guiding the practices of proxy advisors within Japan’s corporate governance framework.

The 2017 revision of the Code introduced Guideline 5–5, which for the first time referred specifically to proxy advisory firms. It stated that proxy advisors should allocate sufficient management resources to accurately understand the circumstances of the companies they evaluate. It also emphasized that proxy advisors should be mindful that the principles and guidelines of the Code may be applicable to them, and should provide their services accordingly. Furthermore, proxy advisors are expected to publicly disclose their efforts concerning organizational structure, conflict-of-interest management, and the processes through which their recommendations are formulated.

The 2020 revision expanded upon this guidance by incorporating it into a new set of detailed provisions under Principle 8. Principle 8–1 requires proxy advisors to identify specific circumstances that may give rise to conflicts of interest, to adopt clear policies for managing them, to establish appropriate internal structures, and to disclose these measures. Principle 8–2 calls for the development of sufficient human and operational resources – including the establishment of a business presence in Japan – to ensure that recommendations reflect accurate, company-specific information. It also requires proxy advisors to disclose their recommendation processes in detail to improve transparency. Principle 8–3 further states that, in addition to relying on corporate disclosures, proxy advisors should actively engage with companies when necessary. If requested by a company, proxy advisors are expected to provide an opportunity for the company to verify the accuracy of the information underlying the recommendation and to transmit the company’s written opinion to their clients along with the recommendation.

In recent years, concerns have emerged that Japan’s Stewardship Code has become hollow in practice, with institutional investors criticized for engaging in superficial compliance and falling short of the accountability expectations set out in the Code. In response, the Financial Services Agency (FSA) has made the effective implementation of the Code a key priority as part of its broader reassessment of financial supervision. In its 2024 administrative policy, the FSA announced plans not only to revise the Code itself but also to evaluate how institutional investors and proxy advisory firms are complying with its principles.[26]

With respect to proxy advisors in particular, observers have pointed out that some firms, despite being signatories to the Code, lack sufficient organizational capacity and conduct only perfunctory engagement with companies, leading to standardized recommendations that fail to reflect firm-specific circumstances.[27] Many companies have expressed concern that they are unable to engage in meaningful dialogue with proxy advisors and that this lack of communication may result in biased or uninformed assessments.[28] Given the significant influence of proxy recommendations on shareholder voting outcomes, a growing number of companies believe that such recommendations must be grounded in appropriate engagement.[29]

Against this backdrop, Japan’s business community has urged the FSA to take a more active intermediary role – for example, by establishing a contact point on this matter for companies and encouraging proxy advisors to respond more constructively to engagement requests.[30] In light of these developments, the FSA has begun reviewing the operational practices of proxy advisory firms and is considering further measures where necessary.

3.3.2 South Korea

Proxy advisory firms have emerged as important intermediaries in South Korea’s corporate governance landscape, yet discussions regarding their regulation remain underdeveloped compared to the United States and the European Union. The Korean proxy advisory market is dominated by three domestic firms: the Korea Institute of Corporate Governance and Sustainability (KCGS), the Korea ESG Research Institute (KRESG), and Sustinvest.[31] These firms primarily provide voting recommendations to domestic institutional investors. In contrast to foreign investors, who rely heavily on ISS and Glass Lewis, Korean investors tend to depend more on local proxy advisors.[32]

Korean proxy advisory firms exhibit structural vulnerabilities, including limited disclosure regarding their recommendation methodologies and conflict-of-interest management, as well as persistent concerns about the substantive quality of their advice. Song (2018) notes that these firms remain institutionally immature, and that the rationale and evaluative standards underlying their recommendations are difficult for outsiders to assess or verify.

There is currently no statutory framework in South Korea that directly regulates proxy advisory firms; their activities are instead subject to market-based mechanisms. The Korean Stewardship Code outlines principles for responsible voting by institutional investors, including guidance on the use of proxy advisors, to whom the Code also applies. Specifically, it emphasizes that investors should make voting decisions based on their own analysis, grounded in sufficient information gathering and engagement with investee companies, and that reliance on proxy advisor recommendations does not exempt investors from ultimate responsibility (Principle 5 Guideline).[33]

With regard to proxy advisors themselves, the Code’s Guide Book – quasi-official and authoritative – suggests that, as entities supporting institutional investors’ stewardship activities, they are expected to enhance their expertise, fairness, and accuracy.[34] It is considered desirable that proxy advisors establish and disclose stewardship policies and faithfully implement the Code’s seven principles. The Guide Book also emphasizes the importance of close coordination with institutional clients.[35] In the absence of legal mandates, these guidelines function as de facto behavioral norms aimed at improving transparency and accountability in the proxy advisory sector.

In recent years, the Korean government has shown interest in enhancing oversight of proxy advisory firms. In 2024, the Financial Services Commission considered developing regulatory guidelines for the sector and held discussions with industry stakeholders.[36] However, the initiative was ultimately suspended following a legal determination that proxy advisory firms do not fall within the scope of entities subject to regulation under the capital markets laws.[37] As a result, oversight of proxy advisors continues to rely on non-legally binding mechanisms, such as the Korea Stewardship Code.

3.3.3 India

In India, the Satyam scandal in 2009 prompted a series of corporate governance reforms, culminating in the enactment of the Companies Act, 2013.[38] The Satyam incident, which involved massive accounting fraud by the management, sent shockwaves through domestic and international investors, triggering increased attention to the effectiveness of corporate governance mechanisms.[39] In 2010, the Securities and Exchange Board of India (SEBI) issued a circular requiring mutual funds to disclose their voting behavior, thereby promoting transparency in the proxy voting practices of institutional investors.[40] Against this regulatory backdrop, India witnessed the emergence of the proxy advisory industry around 2010, with prominent firms such as Ingovern, Institutional Investor Advisory Services India Limited, and Stakeholders Empowerment Services beginning to gain visibility.[41] Since then, they have increasingly influenced the voting patterns of institutional investors and the corporate governance practices of listed companies.[42]

Currently, proxy advisors in India are regulated under the SEBI (Research Analysts) Regulations, 2014 and the Procedural Guidelines for Proxy Advisors.[43] These frameworks address matters such as registration, eligibility norms, disclosure of methodologies, and the management of conflicts of interest.[44] In addition, the Procedural Guidelines require proxy advisors to: share their reports with both clients and the company simultaneously; define the timeframe within which the company may provide comments; and circulate any such comments as addenda, along with any revised recommendations or explanatory notes. Proxy advisors must also alert their clients to any factual errors or material revisions within 24 hours of receiving such information.[45] However, in contrast to investment advisers, the qualification and capital adequacy requirements for proxy advisors remain relatively lenient, raising concerns about their professional expertise and the reliability of their recommendations.[46]

Proxy advisors’ influence has been evident in major corporate decisions. For instance, in 2020, Vedanta Limited’s delisting proposal failed, and in 2024, Nestlé India’s plan to increase royalty payments to its parent company was rejected – both following negative recommendations from proxy advisors.[47] By contrast, in the case of ITC’s corporate demerger proposal, proxy advisors offered divergent views, and the resolution was ultimately approved by a majority of shareholders.[48]

Concerns have also been raised regarding the transparency of analytical methods and the quality of engagement with companies. In the case of Zee Entertainment, certain proxy advisors recommended voting against the appointment of independent directors based on alleged conflicts of interest and pending criminal charges.[49] The company countered these recommendations as inaccurate and misleading, and the appointments were eventually approved.[50] Moreover, when errors are identified in proxy reports, corrections are typically issued through addenda rather than revised reports, which may allow outdated or erroneous information to persist as the primary reference for investors.[51]

With regard to conflict-of-interest management, proxy advisors are required to disclose relationships stemming from other services such as corporate governance and ESG advisory.[52] However, such disclosures are generally confined to proxy reports and are not always reflected on publicly accessible websites where voting recommendations are posted.[53]

As for grievance redressal, SEBI established a mechanism in 2020 allowing listed companies to lodge complaints against proxy advisors.[54] Nonetheless, there are no mandated timelines for processing such grievances, nor any obligation to publish the basis or outcome of SEBI’s decisions.[55]

Overall, India’s regulatory framework for proxy advisors appears more structured in formal terms than those of Japan or South Korea. The existence of SEBI-administered registration requirements, codified conduct rules, structured engagement processes, and a formal grievance mechanism indicates a comparatively robust legal infrastructure. However, as in the Japanese and Korean contexts, concerns remain about the effectiveness of implementation. Key challenges include the absence of stringent qualification standards, limited disclosure of analyst credentials, and questions surrounding the depth and independence of proxy research. Future regulatory developments will need to focus on enhancing supervisory oversight and improving the transparency and quality of proxy advisory services in practice.

4 Evolving Role of Artificial Intelligence in Proxy Advisory Services: From Data Processing to Recommendation Drafting

The preceding sections have outlined the key concerns surrounding proxy advisors – opacity, undue influence, and conflicts of interest – and examined how different jurisdictions have attempted to address these issues through legal and regulatory frameworks. Yet those frameworks were developed without anticipating the distinctive challenges posed by AI. As proxy advisors begin to integrate AI into their operations, it becomes necessary to examine how such technologies may interact with, reinforce, or alter existing institutional risks. This section explores the current and potential uses of AI in proxy advisory services, with a focus on how they may reshape the structure of voting advice and challenge traditional mechanisms of transparency and accountability.

4.1 Current and Emerging Use of AI in Proxy Advisory Services

Although no systematic data exists on the deployment of AI in the proxy advisory industry, ESMA identified certain patterns in its 2022 investigation.[56] This research, conducted between April and November of that year, drew on interviews and voluntary questionnaire responses from a range of financial market participants, including proxy advisors.[57] In addition, ESMA organized dedicated workshops that brought together industry experts, academics, regulators, and representatives of international organizations to collect use cases and assess perceived risks.[58]

Based on the information collected through these multiple channels, ESMA noted that some proxy advisory firms use AI to gather, synthesize, and process the information that underpins their research and voting recommendations.[59] According to ESMA, the development of such tools is particularly driven by increasing demand for ESG-related analysis from institutional investors and other stakeholders.[60] ESMA refers to techniques such as web scraping – the automated collection of information from publicly available sources – and natural language processing (NLP), which is used to extract meaning and structure from text data, as methods that can be used to generate ESG assessments.[61] These methods could, for instance, enable the extraction and classification of information on emission targets, labor practices, or board independence from CSR reports and integrated disclosures, allowing for the construction of ESG scores or risk profiles.
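To make this concrete, the sketch below illustrates, at a toy scale, the kind of extraction-and-classification step ESMA describes: sentences from a disclosure are tagged against ESG topic categories to yield structured data points. It is a minimal illustration under stated assumptions – the topic keywords and function names are hypothetical, and real proxy-advisor pipelines would rely on trained NLP models rather than simple keyword matching.

```python
import re
from collections import defaultdict

# Hypothetical topic keywords, for illustration only; a production pipeline
# would use trained classifiers rather than keyword matching.
CATEGORIES = {
    "emissions": ["emission", "carbon", "ghg"],
    "labor": ["employee", "labor", "workforce"],
    "board_independence": ["board", "director", "independen"],
}

def classify_sentences(disclosure_text: str) -> dict:
    """Tag each sentence of a disclosure with the ESG topics it mentions."""
    tagged = defaultdict(list)
    for sentence in re.split(r"(?<=[.!?])\s+", disclosure_text):
        lowered = sentence.lower()
        for topic, keywords in CATEGORIES.items():
            if any(k in lowered for k in keywords):
                tagged[topic].append(sentence.strip())
    return dict(tagged)

sample = (
    "The company targets a 40% cut in GHG emissions by 2030. "
    "Two of nine directors qualify as independent. "
    "Employee turnover fell to 8% last year."
)
print(classify_sentences(sample))
```

Structured outputs of this kind could then feed scoring models or risk profiles, which is why even the preparatory use of such tools shapes the evidentiary basis of later recommendations.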

Indeed, ISS and Glass Lewis offer ESG evaluation schemes known as the “ESG Corporate Rating” and the “ESG Profile,” respectively, which are designed to assess a company’s performance in ESG areas and support investor decision-making, including proxy voting and engagement strategies.[62] These evaluations cover corporate ESG policies, outcomes, and disclosure quality, providing a structured framework for assessing company practices through both quantitative and qualitative indicators.[63] Both firms base their assessments primarily on publicly available sources such as proxy statements, annual reports, sustainability disclosures, and corporate websites.[64] Where appropriate, they may also refer to external databases and international ESG guidelines.[65] While the specific methods of data collection and analysis vary between providers, both ultimately aim to structure this information into evaluative data points aligned with their respective scoring criteria.

Returning to ESMA’s findings, the authority noted that the use of AI appears to remain confined to the analytical and preparatory stages and does not currently extend to the formulation of final voting recommendations.[66] According to ESMA, the proxy advisors interviewed stated that AI does not “directly or autonomously” contribute to the generation of voting advice.[67] However, because these statements were self-reported, the actual role of AI in internal decision-making processes cannot be externally verified. Under Article 3j of the amended Shareholder Rights Directive (SRD II) and the Best Practice Principles (BPP), proxy advisors operating within the EU are subject to certain transparency requirements. Nevertheless, leading firms such as ISS and Glass Lewis provide little or no public disclosure on their use of AI systems, particularly regarding their technical configurations and governance structures. This lack of transparency hampers the ability of investors and regulators to evaluate the potential risks or influence that AI may exert over proxy voting advice.

In contrast to this limited disclosure, recent research on large language models (LLMs) has demonstrated that generative AI systems are technically capable of processing and synthesizing diverse types of financial data to support advisory functions and decision-making processes (Nie et al. 2024). While the use cases identified in such studies primarily concern investment strategies and portfolio management, the underlying capabilities – namely, the ability to interpret financial documents, identify relevant variables, and generate text-based recommendations – are transferable to proxy advisory workflows. Applied to the proxy context, these models could be trained to produce draft recommendations or preliminary assessments of voting positions based on inputs such as company governance structures, historical voting patterns, and the content of shareholder proposals. As such systems become embedded in proxy advisory workflows, human discretion may gradually be confined to reviewing and approving machine-generated outputs. To address this shift, it will be essential to enhance transparency requirements for AI usage by proxy advisors and to reinforce supervisory mechanisms capable of monitoring such integration.
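
A minimal sketch of such a workflow, under stated assumptions, is given below. The function `llm_complete` is a hypothetical stand-in for whatever LLM interface a firm might use (it is not a real library call), and the input fields are plausible guesses rather than any advisor’s actual template. The point of the sketch is structural: once drafting is automated in this way, human discretion enters only at the final review of the returned string.

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; a real system would
    dispatch the prompt to a hosted or local model. Here it returns a
    canned draft so the sketch runs end to end."""
    return ("DRAFT: Vote FOR. The proposal aligns with the company's "
            "governance profile and its recent voting history.")

def draft_recommendation(governance: dict, voting_history: list, proposal: str) -> str:
    """Assemble structured inputs into a prompt and request a draft
    recommendation; human review would follow this step."""
    prompt = (
        "You assist a proxy advisory analyst.\n"
        f"Governance structure: {governance}\n"
        f"Historical voting patterns: {voting_history}\n"
        f"Shareholder proposal: {proposal}\n"
        "Draft a preliminary voting recommendation with a short rationale."
    )
    return llm_complete(prompt)

draft = draft_recommendation(
    governance={"board_size": 9, "independent_directors": 6, "dual_class": False},
    voting_history=["FOR say-on-pay (2023)", "AGAINST staggered board (2024)"],
    proposal="Declassify the board of directors.",
)
# If review is perfunctory, this string is effectively the recommendation.
print(draft)
```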

4.2 Declining Transparency in Proxy Advisory Processes due to AI Integration

Regardless of how AI is currently used, its implications for institutional design cannot be overlooked. As discussed in Section 4.1, ESMA found that proxy advisors have thus far limited AI use to supporting tasks such as data extraction and processing. However, even at this stage, it is exceedingly difficult for external observers to determine what data is collected, how it is interpreted, and how it is structured. While human judgment has never been entirely transparent, the integration of AI introduces additional technical layers – such as data parsing, model inference, and parameter tuning – that render the processing pipeline significantly more opaque. As a result, structural information asymmetry deepens, making external scrutiny increasingly difficult. This lack of traceability creates a systemic risk that misinterpretations or ambiguous judgments will become embedded in the foundational layers of proxy advice.

As AI systems continue to evolve, their design and implementation could further reduce the transparency of the end-to-end information-processing and decision-making architecture within proxy advisors. Therefore, even if AI’s current role is characterized as merely supportive, this should not be used as grounds to dismiss its institutional significance, particularly given its potential to reshape how information is processed, structured, and ultimately translated into advisory judgments.

This concern is concretely illustrated by the relationship between ESG scores and voting recommendations provided by Glass Lewis and ISS, where ESG scoring frameworks initially positioned as ancillary have become functionally intertwined with advisory outputs. Both firms officially maintain a separation between ESG ratings and voting advice. Nonetheless, Glass Lewis has acknowledged that certain evaluation factors may influence its voting guidance (Glass Lewis 2025, p. 11). In the case of ISS, while ESG scores are not directly applied to its benchmark voting policies (ISS 2025, p. 4), the quantitative and qualitative indicators comprising those scores are structured in a way that enables their integration into advisory services and client decisions. As a result, in both cases, the scoring framework appears to be functionally interoperable with proxy voting operations. Given this overlap, it is essential to ensure transparency in how ESG scores are constructed. While both ISS and Glass Lewis disclose the overall scoring categories and weighting schemes, they offer limited explanation of how individual indicators are aggregated to produce final scores. Consequently, from an external perspective, the reproducibility of the scoring process remains insufficiently secured.
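
The reproducibility concern can be stated concretely. In the stylized sketch below, with invented indicator values and two equally plausible weighting schemes, identical inputs yield materially different final scores; without full disclosure of the weights and the aggregation rule, an external observer cannot reproduce either score or adjudicate between them.

```python
# Identical indicator inputs for one company (values are illustrative).
indicators = {"environment": 0.80, "social": 0.55, "governance": 0.70}

# Two plausible weighting schemes; neither is externally verifiable
# if the provider discloses only broad scoring categories.
weights_a = {"environment": 0.50, "social": 0.25, "governance": 0.25}
weights_b = {"environment": 0.20, "social": 0.40, "governance": 0.40}

def aggregate(ind: dict, w: dict) -> float:
    """Weighted-sum aggregation, one of many possible rules."""
    return sum(ind[k] * w[k] for k in ind)

print(round(aggregate(indicators, weights_a), 2))  # ~0.71
print(round(aggregate(indicators, weights_b), 2))  # ~0.66
```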

This concern becomes even more serious if AI is eventually embedded not only in scoring but also in the generation of voting recommendations. If the logic of scoring and the criteria for recommendation are internalized within algorithmic models, the institutional feasibility of verifying the consistency and validity of outputs could effectively collapse. Even if human oversight is formally preserved, the actual decision-making could be shaped by architectural constraints and training data, undermining both transparency and accountability at the structural level.

As the preceding discussion illustrates, the use of AI by proxy advisors can significantly erode transparency. This broader concern can be generalized into three analytical dimensions.

First, AI diminishes institutional visibility into the processes that underpin proxy advice. Although proxy advisors have long operated with limited transparency, AI introduces multi-layered, technically complex workflows that make such opacity more entrenched. The classification and scoring conducted by AI systems are based on inputs and processing steps that are not easily interpretable from public disclosures.[68] While general methodological information is made public, the detailed criteria and interpretive logic underlying individual voting recommendations are not fully disclosed, limiting the ability of companies and investors to trace or verify the basis for specific outputs.

Second, AI’s pattern recognition is based on historical data and tends to struggle with firm-specific contexts or non-standard situations. While ISS and Glass Lewis both indicate that their evaluation methods allow for customization based on factors such as industry, geography, and governance structures,[69] the extent to which such adjustments influence final recommendations remains opaque. Where AI is involved, contextual variables are typically processed through predefined input structures or abstracted into generalized representations. As a result, the nuance of complex institutional realities may be lost or simplified – even in advanced models designed to capture contextual relationships – especially when those models are ultimately optimized to produce standardized outputs.[70] Human analysts can flexibly adapt interpretive frameworks when faced with unexpected circumstances; in contrast, AI remains constrained by its training data and model architecture, making real-time re-evaluation structurally difficult.[71] Even if formal flexibility is maintained, the institutional entrenchment of AI-based systems could narrow the practical space for discretionary, context-sensitive judgment.

Third, when the results of AI-driven preprocessing become the foundation for drafting voting advice, the autonomy of investor judgment becomes formalized and potentially hollowed out. If AI-generated summaries and classifications are received as “objective” or “neutral,” investors may mechanically follow them, reinforcing a pattern known as robo-voting. Even if humans are formally responsible for final decisions, the entrenchment of AI outputs may effectively sideline critical thinking and diminish opportunities for dialogue with firms.

This concern is compounded by the limited human capacity available to critically reassess or contextualize voting recommendations at scale. As of 2025, ISS reportedly employs approximately 3,200 staff across 25 locations in 15 countries, covering 74,500 shareholder meetings across more than 100 markets.[72] While these figures include non-research personnel, the number of analysts directly responsible for voting advice remains undisclosed. ISS serves approximately 4,200 institutional clients and applies more than 380 custom voting policies.[73] Glass Lewis, by comparison, analyzes over 30,000 shareholder meetings annually across more than 100 markets, serving more than 1,300 institutional investors globally.[74] Like ISS, it does not disclose the number of research analysts involved in formulating its voting recommendations. In contrast, Moody’s – a global credit rating agency – employed approximately 14,000 personnel as of 2025,[75] and reported 1,670 credit analysts and 256 credit analyst supervisors as of 2021.[76]

While differences in organizational function and product complexity make direct comparison difficult, the scale of Moody’s analytical staffing provides a useful point of reference. Considering the volume of meetings covered and the number of clients served, the relative size of ISS and Glass Lewis raises legitimate questions about whether they possess sufficient human capacity to meaningfully reassess or contextually interpret AI-assisted outputs. This capacity gap, if present, increases the risk that algorithmic assessments will serve as the de facto foundation for proxy advice.

Even in the absence of full autonomy, AI systems are already deeply embedded in the advisory workflow through preprocessing functions such as extraction, classification, summarization, and evaluation. These functions have gradually expanded over time, serving as structural prerequisites for judgment formation and increasingly shaping the contours of human decision-making. If such preprocessing were to be systematically and continuously integrated into recommendation drafting, the AI-driven processing structure could become the routine starting point for advice formation. This would reduce human intervention to mere confirmation, allowing AI outputs to function as independent, institutionally embedded determinants of proxy advice. In that scenario, the ability of companies and investors to externally verify or reassess the basis of voting recommendations would become even more restricted, weakening the institutional foundations of accountability.

4.3 Convergence and Divergence in Proxy Advice under AI Adoption: Implications for Undue Influence

As discussed in Section 2.2, proxy advisors exert substantial influence over institutional investors’ voting behavior. In particular, negative recommendations by ISS or Glass Lewis have been empirically shown to significantly reduce support rates. This influence is problematic not merely because it is concentrated, but because it does not necessarily align with corporate value creation or the interests of investors. This subsection examines how the structure of undue influence might change with the introduction of AI.

AI may alter this structure in two opposing ways. One possibility is the emergence of “convergence,” in which a single proxy advisor tends to produce similar recommendations for different companies. A structurally similar pattern has been indirectly suggested in ESG rating models. Between 2015 and 2021, the average ESG scores assigned by a major rating agency to Russell 1000 companies increased by approximately 18 %. However, a detailed attribution analysis by D.E. Shaw (2022) indicates that roughly one-third of this increase can be explained by structural changes in the rating system. These include shifts in index composition, reweighting of key evaluation items, and increased disclosure by firms, particularly of environmental data. After adjusting for these factors, a residual score increase of about 12 % remains, which may partly reflect changes in firm behavior but cannot be definitively attributed to them.

While it is unclear whether this rating model employed AI, it appears that rule-based scoring methods predominated during the period in question. The fact that the model change coincided with a broad-based score increase, despite heterogeneous corporate behavior, may suggest – though not necessarily prove – that the model was insufficiently sensitive to firm-specific variation. If so, it raises concerns that evaluation systems relying heavily on standardized formal inputs may systematically produce more uniform outputs.

If AI is introduced into the proxy advice generation process, multiple implementation approaches are conceivable. Yet regardless of approach, unless the model is explicitly designed to handle exceptional contexts and unstructured information, there is a risk that diverse company situations will be stripped of context and that similar inputs will produce overly uniform recommendations as a matter of institutional routine.[77]

Conversely, AI may also exacerbate “divergence” in proxy advice – that is, situations in which different proxy advisors issue markedly different recommendations on the same proposal. A well-documented manifestation of such divergence appears in ESG ratings. According to Prall (2021), correlations among major ESG ratings providers range from as high as 0.65 (between S&P Global and Sustainalytics) to as low as 0.07 (between ISS and CDP), indicating a lack of consistency in evaluating overall ESG performance. Dimson et al. (2020) further demonstrate that such discrepancies persist at the subcomponent level: for example, the correlations between MSCI and Sustainalytics are only 0.11 for environmental, 0.18 for social, and −0.02 for governance scores. These differences stem from divergent choices in the scope of what is measured, methods of measurement, and weighting schemes. Larcker et al. (2022), referencing these studies, argue that such structural inconsistencies impair the reliability of ESG ratings, complicate investment decisions and fund managers’ communications about ESG quality to investors, and ultimately weaken incentives for companies to improve their ESG practices.
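
The divergence statistics cited above are, in essence, pairwise correlations across raters. The short sketch below shows the computation on invented scores for six hypothetical firms; the numbers are illustrative only and are not the data underlying the studies cited.

```python
from statistics import correlation  # Python 3.10+

# Invented ESG scores from two hypothetical raters for six firms.
rater_1 = [72, 65, 58, 81, 49, 60]
rater_2 = [55, 70, 62, 50, 66, 58]

# A correlation near zero (or negative) means the raters order the same
# firms very differently, which is the divergence documented above.
print(round(correlation(rater_1, rater_2), 2))
```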

This type of divergence may also arise when AI is applied to proxy advice generation more broadly, beyond ESG scoring. AI-generated proxy advice may process a wide range of information beyond ESG, but if different proxy advisors adopt divergent sets of evaluation criteria and processing methods, their recommendations on the same proposal may differ significantly as a matter of institutional output. Importantly, the mere existence of diversity in evaluation frameworks is not inherently problematic; on the contrary, such pluralism may enhance the robustness of perspectives in the market. The problem arises when the source of such divergence is opaque to external observers, and when outputs differ without sufficient explanation of their underlying logic. The key issue is not whether divergence exists, but whether it is explainable, and to what extent evaluators can disclose and justify the basis for their judgments. If AI accelerates automation while weakening such accountability, institutional transparency may be undermined, creating the risk of confusion and mistrust among investors, firms, and regulators.

4.4 Institutional Entrenchment of Conflicts of Interest through AI

As discussed in Section 2.3, proxy advisors face a dual conflict of interest stemming from the simultaneous provision of consulting and voting recommendation services: (1) an incentive to issue favorable recommendations to client companies, and (2) an incentive to intentionally provoke controversy in order to increase the influence and market value of their advice. The introduction of AI may exacerbate these issues by making them more opaque and institutionally entrenched.

First, with regard to issue (1) – the incentive to issue favorable recommendations to client companies – AI has the potential to produce particularly serious institutional consequences.[78] There are two primary pathways through which AI becomes embedded in this structure. The first is when a proxy advisor intentionally provides favorable recommendations to a specific client company and deliberately supplies the AI model with training data containing both the consulting content and the recommendation outcome. In this case, the AI effectively inherits a policy of “favoring the client under certain conditions” and functions to reproduce and legitimize that stance.

The second pathway arises even when the advisory division has not intentionally made preferential judgments. If information related to a consulting relationship with the client company remains in internal documents and is included in the AI’s training data, the model may statistically learn a favorable bias toward that company. Even if a formal firewall exists between the advisory and consulting divisions, institutional pathways for data transmission persist – such as (1) consulting materials being shared internally as “good governance cases” or “benchmarks” and reused in training data for recommendations, or (2) personnel rotations between divisions, where individuals previously involved with the client in a consulting capacity later participate in training design or evaluation for the advisory AI. Through such channels, information can cross divisional boundaries and become embedded in the system.

Recent studies have statistically demonstrated that AI models, even when they appear to produce logically correct outputs, often rely not on semantic understanding but on token-level co-occurrence.[79] For example, experiments by Jiang et al. (2024) show that simply modifying the input expression – without altering the underlying logical structure of a task – can significantly shift the output of large language models (LLMs), suggesting that decisions are driven by “distributions of symbols” rather than “meaning.”[80] Tokens, in this context, refer to the smallest units of text processed by the model, including words, sub-word fragments, punctuation marks, symbols, and proper nouns.

Drawing on this insight, if during fine-tuning (the process of adding new data to a pre-trained model for additional adjustment), certain company documents or proposals are repeatedly introduced as “desirable precedents,” the model may assign statistical weight to patterns where a specific company name, related terms, and formalistic descriptions co-occur with favorable recommendations, ultimately institutionalizing these as legitimate evaluative criteria. While the model’s surface-level output may appear neutral, beneath it lies a distribution of tokens – including company names and associated language – that mechanically co-occurs with favorable judgments and becomes embedded in the AI system as an implicit standard for future recommendations. In this way, AI outputs may continuously reproduce recommendation patterns constructed through consulting relationships with specific clients – unbeknownst to users – while remaining opaque and resistant to correction, and still being treated as legitimate decisions.
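
The mechanism can be made tangible with a toy corpus. The sketch below, using entirely fictitious companies and labels, computes the kind of raw token-label association that, on the findings of Jiang et al. (2024), a model can exploit in place of substantive reasoning; it is not a claim about any actual training pipeline.

```python
from collections import Counter

# Fictitious fine-tuning examples: (proposal text, recommendation label).
examples = [
    ("Acme Corp proposes routine auditor ratification.", "FOR"),
    ("Acme Corp seeks approval of its equity incentive plan.", "FOR"),
    ("Acme Corp requests an increase in authorized shares.", "FOR"),
    ("Beta Inc proposes routine auditor ratification.", "AGAINST"),
    ("Beta Inc seeks approval of its equity incentive plan.", "FOR"),
]

def label_rate(token: str, label: str) -> float:
    """Estimate P(label | token appears): a purely statistical association
    that reflects nothing about the merits of each proposal."""
    with_token = [lab for text, lab in examples if token in text]
    return Counter(with_token)[label] / len(with_token)

print(label_rate("Acme", "FOR"))  # 1.0: 'Acme' co-occurs only with FOR
print(label_rate("Beta", "FOR"))  # 0.5
```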

Next, regarding issue (2) – the incentive for proxy advisors to intentionally generate controversy to increase the perceived influence and market value of their advice – the introduction of AI also risks institutionally reproducing this structure.[81] For proxy advisors, when a voting recommendation influences the outcome of a contentious shareholder proposal, that instance is internally valued as symbolic evidence of the advisory service’s effectiveness. What is evaluated is not the substantive economic rationality of the proposal or its contribution to shareholder value, but rather the magnitude of the advisor’s influence on the voting result. As a result, even proposals of relatively minor substantive importance that trigger shareholder concern or debate and lead to a changed outcome through the advisor’s recommendation are likely to be documented and regarded internally as achievements of economic value to the advisor. When such records and documents are incorporated into AI training data, the model learns common patterns from proposals that provoked controversy and significantly affected outcomes – such as the topic of the proposal, the company’s response, and the structure of the recommendation rationale – and may generate future recommendations that reproduce similar effects.

The recommendation patterns constructed in this way are likely to be presented as outputs consistent with past decisions, making them externally appear to be neutral and legitimate AI judgments that are readily accepted within institutional settings. However, what the model is actually learning are the formats and argumentative structures of recommendations that provoked controversy and influenced voting outcomes – patterns that do not necessarily align with shareholder value. If the advisor recognizes that its own recommendations have swayed voting outcomes and generated economic value for the firm, and then seeks to intentionally replicate such patterns, this leads to a clearly structured process of institutional reproduction aimed at maintaining or expanding influence.

Even absent such intent, if past recommendations that were once influential and deemed rational from the standpoint of shareholder value are included in the training data as “high-utility training examples,” the model may statistically assign disproportionate weight to features that commonly appeared in such cases – even if those features are unrelated to substantive shareholder value. As a result, the model may develop a tendency to generate judgments that are influential but misaligned with shareholder interests. These outputs, while formally consistent with past decisions and institutionally validated as coherent, may in fact perpetuate a recommendation structure that is increasingly optimized to serve the advisor’s own influence.

4.5 Transformation of Accountability Structures and Institutional Gaps

The integration of AI into the process of generating proxy advice not only amplifies existing concerns – such as lack of transparency, undue influence, and conflicts of interest – but also gives rise to a structural problem that did not arise under pre-AI conditions: institutional ambiguity in the allocation of accountability.

Even prior to the use of AI, proxy voting advice had already exhibited a certain degree of opacity in its reasoning processes. For example, the decision-making procedures and standards employed by proxy advisors have not been readily subject to external scrutiny, and accountability has often been only nominally maintained. As noted by Larcker and Tayan (2024), the institutional foundations for ensuring accountability in human-generated recommendations have long been fragile.

Despite such fragility at the institutional level, traditional proxy advice operated under the expectation – however unfounded – that some basic form of internal accountability was possible. Recommendations were generally produced by staff applying internal policies to case-specific facts, and although individual authorship was not externally visible, it was presumed – perhaps optimistically – that the organization could internally reconstruct how and by whom decisions were made. This assumption of traceability, though rarely verified, has served as a minimal basis for claims of institutional legitimacy.

When AI models are used to generate proxy advice, it is theoretically possible to trace the data and logic behind the output and to identify who was involved at each stage. However, without appropriate institutional design, such traceability mechanisms may not be implemented in practice. Even when some form of human oversight is present, institutional accountability remains difficult to establish if it is unclear how substantive the review or approval was, or who within the organization assumed those responsibilities. This risk is particularly salient in cases where proxy advice is generated through editorial processes that involve multiple analysts or specialized teams. In such settings, procedural accountability can become difficult to establish, especially if the division of responsibilities is not clearly defined, or if there is no clear record of how decisions were made, by whom, and in what order.
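
One way to see what such institutional design would require is to sketch the minimal record that would have to exist for each stage of advice formation. The data structure below is a purely illustrative sketch (the field names are assumptions, not an existing standard); the accountability question is whether entries such as the “human_review” step are substantive or merely pro forma.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AdviceProvenance:
    """One auditable step in forming a voting recommendation.
    Field names are illustrative, not an established standard."""
    proposal_id: str
    stage: str           # e.g. "data_extraction", "ai_draft", "human_review"
    actor: str           # model version or named analyst/team
    inputs: list         # documents or prior outputs consumed at this step
    output_summary: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

trail = [
    AdviceProvenance("PRX-2025-017", "ai_draft", "model-v3.2",
                     ["proxy_statement.pdf"], "Draft: FOR, routine matter"),
    AdviceProvenance("PRX-2025-017", "human_review", "governance-team-A",
                     ["ai_draft"], "Approved without modification"),
]
# If the review entry is missing, or always reads "approved without
# modification", substantive oversight cannot be established afterwards.
for step in trail:
    print(step.timestamp, step.stage, "->", step.actor)
```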

This structural uncertainty, if left unaddressed, may also undermine the effectiveness of institutional safeguards that allow companies to respond to or challenge proxy advisor recommendations. Even if the advice generated by AI appears internally consistent and operationally efficient, the absence of a clearly defined framework of accountability behind the decision may deprive issuers of any meaningful opportunity to contest its basis, question its reasoning, or demand corrective explanation.

5 Institutional Requirements and Effective Enforcement Mechanisms

Despite the concerns raised in Section 4, AI, if appropriately used, presents institutional advantages that make it a potentially valuable tool in proxy advisory services, particularly in terms of processing efficiency and consistency in evaluative judgments. From the standpoint of efficiency, AI is capable of processing large volumes of unstructured data in a short amount of time, enabling the rapid comparison of complex proposals or the summarization of legal documents – tasks that would otherwise require substantial human labor.[82] From the standpoint of consistency, the pre-formulation of evaluation logics can help reduce variability arising from individual judgment, allowing for more impartial assessments and mitigating the risk of oversight or arbitrariness in specific recommendations.[83]
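
The notion of a pre-formulated evaluation logic can be illustrated with a deliberately simple rule. In the sketch below, the 50 % independence threshold is an invented example rather than any firm’s policy; the point is that the same criterion is applied identically to every case, eliminating analyst-to-analyst drift.

```python
def evaluate_board_slate(independent: int, total: int,
                         threshold: float = 0.5) -> str:
    """Apply one pre-formulated rule uniformly to every company.
    (The 50 % threshold is illustrative, not any firm's actual policy.)"""
    return "FOR" if independent / total >= threshold else "AGAINST"

# Comparable inputs always receive the same judgment: the within-process
# consistency discussed here, as distinct from cross-firm convergence.
print(evaluate_board_slate(6, 9))  # FOR (ratio ~0.67)
print(evaluate_board_slate(3, 9))  # AGAINST (ratio ~0.33)
```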

While Section 4 has raised concerns about convergence – that is, the tendency of a single advisor to produce overly uniform recommendations across different firms – this differs from the notion of consistency discussed here. The consistency emphasized in this section refers not to the similarity of outputs across diverse cases, but to the reduction of arbitrary discrepancies within the evaluation process itself. In other words, consistency in this sense supports procedural fairness by applying stable and transparent criteria to comparable cases, without precluding the capacity to distinguish among context-specific differences.

Accordingly, the question is not whether AI should be rejected as such, but rather how its characteristics – namely its efficiency in data processing and its capacity to standardize judgment – can be institutionally harnessed.[84] To this end, four requirements should be emphasized: (1) ensuring transparency in the processes of data collection, analysis, and output generation; (2) establishing institutional mechanisms that ensure accountability and allow plausibility checks for AI-generated outputs, even when trade secrets limit the detailed disclosure of model structure or logic; (3) designing and operating AI models in ways that do not reproduce structural conflicts of interest embedded in past recommendations or client relationships; and (4) clearly institutionalizing responsibility for human or organizational actors involved in recommendation formation. When these conditions are met, AI may be justifiably integrated as a core component of proxy advisory functions.[85]

A suggestive example of institutional requirements for AI deployment can be found in the ESG Rating Regulation recently adopted in the European Union.[86] This regulation applies to ESG rating providers that issue ESG ratings on companies or financial products, either publicly or through contractual distribution to financial market participants within the Union.[87] Since ISS and Glass Lewis respectively offer services such as the “ESG Corporate Rating” and the “ESG Profile,” which assess corporate ESG performance using quantitative and qualitative indicators for institutional investors, both are potentially subject to this regulatory framework.

Articles 23 and 24 of the EU ESG Rating Regulation require ESG rating providers to disclose information on the methodologies, models, and key rating assumptions they use, including whether and how artificial intelligence is employed in their data collection or rating process.[88] Public disclosures must include references to the use of AI and any related risks or limitations, while disclosures to users must, where applicable, include an explanation of the AI methodologies used. Draft regulatory technical standards issued by ESMA provide further detail with respect to public disclosures under Article 23, requiring ESG rating providers to specify the risks and limitations associated with each type of artificial intelligence technology they employ and their use in data collection and the rating process.[89] Complementing these provisions, Article 25 sets out conflict-of-interest requirements to ensure the independence of ESG rating providers.
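
To illustrate what such disclosures might contain in practice, the sketch below renders the required content as a machine-readable record. The Regulation prescribes substance, not format; the structure, field names, and provider here are invented for illustration only.

```python
import json

# Illustrative only: the field names and the provider are hypothetical,
# and the Regulation does not mandate any particular data format.
ai_use_disclosure = {
    "provider": "ExampleRatings Ltd",
    "ai_used": True,
    "ai_technologies": [
        {
            "type": "natural language processing",
            "used_in": ["data collection", "rating process"],
            "risks_and_limitations": (
                "Extraction errors on non-standard disclosures; "
                "reduced sensitivity to firm-specific context."
            ),
        }
    ],
    "methodology_reference": "https://example.org/methodology",  # placeholder
}
print(json.dumps(ai_use_disclosure, indent=2))
```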

These requirements reflect a clear institutional effort to ensure the explainability of AI systems in the evaluation of governance-related information. The regulatory framework thus establishes a principle: when institutions rely on AI-driven information processing to form judgments, they must ensure transparency, accountability, and independence. Such principles are not limited to ESG ratings, but apply more broadly to AI-assisted decision-support functions, including proxy advisory services. In this light, the ESG Rating Regulation should be understood as an illustrative example of how institutional requirements for AI governance may be operationalized. The same set of obligations should, in principle, be extended to any advisory activity in which AI is embedded in the judgment-generation process.

That said, in order for such institutional requirements to function effectively in practice, it is necessary to introduce complementary measures that are tailored to the specific characteristics of AI as a technology. In particular, when the process of judgment formation is automated through algorithms, the internal structure of that process tends to resist external observation, making it difficult to ensure the transparency and validity of the logic leading to a given output through institutional means. Even if disclosures are made regarding model structures or input data, it remains technically and institutionally difficult for third parties to assess, after the fact, whether a particular recommendation was based on specific design preferences or embedded biases. This is because such logic, in the context of machine learning, often takes the form of statistically derived functions rather than explicit human-readable rules. The black-box nature inherent in AI systems suggests that institutional verification should involve complementary mechanisms between the organizations that develop and deploy such systems, and external oversight or disclosure.

Given these structural constraints, the establishment of whistleblower systems becomes an essential institutional measure for detecting and correcting misconduct or bias in AI-driven judgment formation at an early stage. This is especially true considering that arbitrary decisions or organizational pressure can intervene in the initial phases of model development, such as the selection of training data or the setting of weighting parameters. Insofar as institutional enforcement faces inherent limitations when relying solely on external oversight, internally sourced information can serve as a critical complementary detection channel.

An instructive case in this context is the AI Whistleblower Protection Act, introduced in the U.S. Senate in 2024.[90] The bill explicitly provides protection from retaliation for employees or contractors who report violations of AI-related laws or substantial threats to public safety, health, or national security arising from AI systems, whether to regulatory authorities, Congress, or appropriate internal personnel. Notably, such protection extends not only to reports made to external bodies, but also to those directed at supervisors or internal compliance officers (Sec. 3(a)(3)). The statute also guarantees strong remedial measures for whistleblowers, including reinstatement, double back pay, and compensation for legal costs (Sec. 3(b)(3)). In this respect, the bill presents a forward-looking institutional framework that seeks to internalize oversight mechanisms in response to risks specific to AI.

In designing whistleblower systems, it is necessary to ensure anonymous reporting channels, the presence of independent investigative bodies, and effective anti-retaliation safeguards.[91] In addition, monetary incentives for reports to regulatory authorities should be considered as a potential component.[92] Whistleblowing for public purposes often entails serious career risks for the individual, and institutional compensation for those who voluntarily assume such risks is justifiable.[93] Moreover, since such disclosures are frequently not economically rational, the provision of monetary rewards may serve as a reasonable mechanism to promote socially beneficial reporting.[94] Still, several major criticisms have been raised against providing monetary rewards for external whistleblowing to regulatory authorities, including the potential to encourage false reports and to discourage internal reporting within firms (Financial Conduct Authority and the Bank of England Prudential Regulation Authority 2014). However, such concerns do not always hold and can be addressed through appropriate reward scheme design (Iwasaki 2018, 2025; Nyreröd and Spagnolo 2021).[95]

In this way, whistleblower systems should no longer be viewed as optional supplements but instead be recognized as a core institutional infrastructure for ensuring the effectiveness of enforcement in the age of AI. This imperative applies not only to ESG evaluations but also to the broader category of AI-based advisory services, where similar regulatory challenges are present.

6 Conclusions

This paper began with a review of the literature concerning the challenges posed by proxy advisors’ voting recommendations. It then surveyed the regulatory approaches to proxy advisory firms in the United States, the European Union, and Asia. Building on this foundation, the paper examined the current use and future prospects of AI in proxy advisory services, analyzing the institutional risks that may arise from such deployment. While none of the jurisdictions studied has yet enacted regulations specifically targeting the use of AI in proxy advice formulation, the analysis suggests that such regulatory needs are already emerging. Finally, the paper proposed potential institutional safeguards.

It is important to note that responses to the impact of AI on advisory processes are likely to differ depending on each jurisdiction’s legal and market structures, making it difficult to identify a one-size-fits-all solution. On one hand, considering that major providers such as ISS and Glass Lewis operate across multiple jurisdictions, a case can be made for ensuring a certain degree of international coordination and regulatory coherence. On the other hand, the regulation of proxy advisors has historically evolved in line with national institutional contexts and market practices, and significant variation exists in terms of the scope of regulatory involvement and the techniques employed. These differences must be taken into account.

Accordingly, the institutional design accompanying the introduction of AI must, for the time being, be flexibly constructed in accordance with national contexts.[96] That said, given the growing centrality of AI in the advisory formation process, it is clear that some regulatory measures are needed to ensure accountability in data processing and judgment formation, and to clarify the locus of decision-making and any embedded conflicts of interest. Rather than framing AI governance as a binary question of whether to promote or restrain its use, it is increasingly important for national regulators to squarely address the institutional implications of AI and explore regulatory responses that are tailored to their respective environments.


Corresponding author: Masaki Iwasaki, Seoul National University School of Law, Seoul, The Republic of Korea, E-mail:

Acknowledgements

The author is grateful to seminar participants at Seoul National University, the Law and Technology Workshop, the Corporate Law Workshop, and two anonymous reviewers.

  1. Research funding: This article was funded by the 2022 Research Fund of the Seoul National University Law Research Institute, donated by the Seoul National University Law Foundation.

  2. Conflicts of interest: The author has no conflicts of interest to disclose.

References

Adadi, A., and M. Berrada. 2018. “Peeking Inside the Black Box: A Survey on Explainable Artificial Intelligence (XAI).” IEEE Access 6: 52138–60. https://doi.org/10.1109/ACCESS.2018.2870052.

Aguirre, A., G. Dempsey, H. Surden, and P. B. Reiner. 2020. “AI Loyalty: A New Paradigm for Aligning Stakeholder Interests.” IEEE Transactions on Technology and Society 1 (3): 128–37. https://doi.org/10.1109/TTS.2020.3013490.

Alexander, C. R., M. A. Chen, D. J. Seppi, and C. S. Spatt. 2010. “Interim News and the Role of Proxy Voting Advice.” Review of Financial Studies 23 (12): 4419–54. https://doi.org/10.1093/rfs/hhq111.

Asher, N., S. Bhar, A. Chaturvedi, J. Hunter, and S. Paul. 2023. “Limits for Learning with Language Models.” In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (SEM 2023), 236–48. Toronto, Canada: Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.starsem-1.22.

Barzacanos, A. 2024. “SEC Chair Gensler Warns on AI Conflicts of Interest.” Grip. (September 10) https://www.grip.globalrelay.com/sec-chair-gensler-warns-on-ai-conflicts-of-interest/.

Benthall, S., and D. Shekman. 2023. “Designing Fiduciary Artificial Intelligence.” In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23), Article No. 10, 1–15. New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3617694.3623230.

BIICL. 2023. Use of Artificial Intelligence in Legal Practice. https://www.biicl.org/documents/170_use_of_artificial_intelligence_in_legal_practice_final.pdf.

Brav, A., M. Cain, and J. Zytnick. 2022. “Retail Shareholder Participation in the Proxy Process: Monitoring, Engagement, and Voting.” Journal of Financial Economics 144 (2): 492–522. https://doi.org/10.1016/j.jfineco.2021.07.013.

Cabezon, F. 2025. “Executive Compensation: The Trend toward One-Size-Fits-All.” Journal of Accounting and Economics 79 (1): 101708. https://doi.org/10.1016/j.jacceco.2024.101708.

Carbonara, E., F. Parisi, and G. v. Wangenheim. 2008. “Lawmakers as Norm Entrepreneurs.” Review of Law & Economics 4 (3): 779–99. https://doi.org/10.2202/1555-5879.1320.

Carbonara, E., F. Parisi, and G. v. Wangenheim. 2012. “Unjust Laws and Illegal Norms.” International Review of Law and Economics 32 (3): 285–99. https://doi.org/10.1016/j.irle.2012.03.001.

Černevičienė, J., and A. Kabašinskas. 2024. “Explainable Artificial Intelligence (XAI) in Finance: A Systematic Literature Review.” Artificial Intelligence Review 57 (8): 216. https://doi.org/10.1007/s10462-024-10854-8.

Copland, J., D. F. Larcker, and B. Tayan. 2018. “The Big Thumb on the Scale: An Overview of the Proxy Advisory Industry.” Stanford Closer Look Series. https://www.gsb.stanford.edu/faculty-research/publications/big-thumb-scale-overview-proxy-advisory-industry.

Daines, R. M., I. D. Gow, and D. F. Larcker. 2010. “Rating the Ratings: How Good Are Commercial Governance Ratings?” Journal of Financial Economics 98 (3): 439–61. https://doi.org/10.1016/j.jfineco.2010.06.005.

Dey, A., A. Starkweather, and J. T. White. 2024. “Proxy Advisory Firms and Corporate Shareholder Engagement.” Review of Financial Studies 37 (12): 3877–931. https://doi.org/10.1093/rfs/hhae045.

Dimson, E., P. Marsh, and M. Staunton. 2020. “Divergent ESG Ratings.” Journal of Portfolio Management 47 (1): 75–87. https://doi.org/10.3905/jpm.2020.1.175.

Dutt, V., and A. Bharucha. 2023. “Regulation of Proxy Advisors in India.” Lexology. (January 12) https://www.lexology.com/library/detail.aspx?g=2ca71860-41de-4052-8766-0cc93900d21d.

Edmans, A., T. Gosling, and D. Jenter. 2023. “CEO Compensation: Evidence from the Field.” Journal of Financial Economics 150 (3): 103718. https://doi.org/10.1016/j.jfineco.2023.103718.

ESMA. 2013. Final Report: Feedback Statement on the Consultation Regarding the Role of the Proxy Advisory Industry. ESMA/2013/84. https://www.esma.europa.eu/sites/default/files/library/2015/11/2013-84.pdf.

ESMA. 2023. Artificial Intelligence in EU Securities Markets. ESMA50-164-6247. https://www.esma.europa.eu/sites/default/files/library/ESMA50-164-6247-AI_in_securities_markets.pdf.

ESMA. 2024. Final Report: Technical Advice on Revisions to Commission Delegated Regulation (EU) 447/2012 and Annex I of CRA Regulation. ESMA84-2037069784-2196. https://www.esma.europa.eu/document/final-report-technical-advice-revisions-commission-delegated-regulation-eu-4472012-and.

ESMA. 2025. Technical Standards under the Regulation on the Transparency and Integrity of Environmental, Social and Governance (ESG) Rating Activities. ESMA84-2037069784-2276. https://www.esma.europa.eu/sites/default/files/2025-05/ESMA84-2037069784-2276_Consultation_Paper_on_Technical_Standards_under_ESG_Rating_Regulation.pdf.

ESMA and EBA. 2023. Implementation of SRD2 Provisions on Proxy Advisors and the Investment Chain. ESMA32-380-267. https://www.esma.europa.eu/sites/default/files/2023-07/ESMA32-380-267_Report_on_SRD2.pdf.

Financial Conduct Authority and the Bank of England Prudential Regulation Authority. 2014. Financial Incentives for Whistleblowers. https://www.fca.org.uk/publication/financial-incentives-for-whistleblowers.pdf.

Floridi, L., and J. Cowls. 2019. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1 (1). https://doi.org/10.1162/99608f92.8cd550d1.

Garoupa, N. 2024. “Why Sentencing Codification Could Be More Complex Than Anticipated.” Asian Journal of Law and Economics 15 (2): 221–9. https://doi.org/10.1515/ajle-2023-0173.

Gillison, D. 2024. “Battle over Proxy Rules, Appeals Court Sides with US SEC.” Reuters. (September 12) https://www.reuters.com/legal/battle-over-proxy-rules-appeals-court-rules-us-secs-favor-2024-09-10/.

Glass Lewis. 2025. ESG Profile Methodology. https://resources.glasslewis.com/hubfs/Issuer-Relations/2025%20ESG%20Profile%20Methodology.pdf.

Grabar, N., and S. Wang. 2021. “The SEC Backs off on Proxy Advisory Firms.” Harvard Law School Forum on Corporate Governance. (December 19) https://corpgov.law.harvard.edu/2021/12/19/the-sec-backs-off-on-proxy-advisory-firms/.

Hayne, C., and M. Vance. 2019. “Information Intermediary or De Facto Standard Setter? Field Evidence on the Indirect and Direct Influence of Proxy Advisors.” Journal of Accounting Research 57 (4): 969–1011. https://doi.org/10.1111/1475-679X.12261.

Hu, E., N. Malenko, and J. Zytnick. 2025. “Custom Proxy Voting Advice.” European Corporate Governance Institute – Finance Working Paper No. 975/2024; Olin Business School Center for Finance & Accounting Research Paper No. 2024/04. https://doi.org/10.3386/w32559.

Iliev, P., and M. Lowry. 2015. “Are Mutual Funds Active Voters?” Review of Financial Studies 28 (2): 446–85. https://doi.org/10.1093/rfs/hhu062.

ISS. 2023. Code of Ethics. https://www.issgovernance.com/file/duediligence/code-of-ethics-nov-2023.pdf.

ISS. 2025. ESG Corporate Rating: Methodology and Research Process. Version 1.1, May. https://www.issgovernance.com/file/products/iss-esg-corporate-rating-methodology.pdf.

Iwasaki, M. 2018. “Effects of External Whistleblower Rewards on Internal Reporting.” Harvard John M. Olin Fellow’s Discussion Paper Series No. 76. https://doi.org/10.2139/ssrn.3188465.

Iwasaki, M. 2020a. “Relative Impacts of Monetary and Non-monetary Factors on Whistleblowing Intention: The Case of Securities Fraud.” University of Pennsylvania Journal of Business Law 22 (3): 591–626. https://scholarship.law.upenn.edu/jbl/vol22/iss3/3.

Iwasaki, M. 2020b. “A Model of Corporate Self-Policing and Self-Reporting.” International Review of Law and Economics 63: 105910. https://doi.org/10.1016/j.irle.2020.105910.

Iwasaki, M. 2022. “Segmentation of Social Norms and Emergence of Social Conflicts through COVID-19 Laws.” Asian Journal of Law and Economics 13 (1): 1–36. https://doi.org/10.1515/ajle-2022-0010.

Iwasaki, M. 2023a. “Whistleblowers as Defenders of Human Rights: The Whistleblower Protection Act in Japan.” Business and Human Rights Journal 8 (1): 103–9. https://doi.org/10.1017/bhj.2022.41.

Iwasaki, M. 2023b. “Social Preferences and Well-Being: Theory and Evidence.” Humanities and Social Sciences Communications 10: 342. https://doi.org/10.1057/s41599-023-01782-z.

Iwasaki, M. 2024a. “Reward Whistleblowers Who Expose Environmental Crimes.” Nature Human Behaviour 8: 404–5. https://doi.org/10.1038/s41562-024-01825-8.

Iwasaki, M. 2024b. “Digital Cloning of the Dead: Exploring the Optimal Default Rule.” Asian Journal of Law and Economics 15 (1): 1–29. https://doi.org/10.1515/ajle-2023-0125.

Iwasaki, M. 2024c. “The Power of Empirical Evidence: Assessing Changes in Public Opinion on Constitutional Emergency Provisions.” Public Choice. https://doi.org/10.1007/s11127-024-01252-3.

Iwasaki, M. 2025. “Environmental Governance and Whistleblower Rewards: Balancing Prosocial Motivations with Monetary Incentives.” Law & Social Inquiry 50 (2): 468–503. https://doi.org/10.1017/lsi.2025.13.

Jiang, B., Y. Xie, Z. Hao, X. Wang, T. Mallick, W. J. Su, et al. 2024. “A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners.” In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 4722–56. Miami, FL: Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.emnlp-main.272.

Jochem, T., G. Ormazabal, and A. Rajamani. 2021. “Why Have CEO Pay Levels Become Less Diverse?” European Corporate Governance Institute – Finance Working Paper No. 707/2020. https://doi.org/10.2139/ssrn.3716765.

Kroll, J. A. 2021. “Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 758–71. New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445937.

Larcker, D. F., and B. Tayan. 2024. “Seven Questions about Proxy Advisors.” Stanford Closer Look Series. https://ssrn.com/abstract=4808820.

Larcker, D. F., A. L. McCall, and G. Ormazabal. 2013a. “Proxy Advisory Firms and Stock Option Repricing.” Journal of Accounting and Economics 56 (2–3): 149–69. https://doi.org/10.1016/j.jacceco.2013.05.003.

Larcker, D. F., A. L. McCall, and B. Tayan. 2013b. “And Then a Miracle Happens! How Do Proxy Advisory Firms Develop Their Voting Recommendations?” Stanford Closer Look Series. https://ssrn.com/abstract=2224329.

Larcker, D. F., A. L. McCall, and G. Ormazabal. 2015. “Outsourcing Shareholder Voting to Proxy Advisory Firms.” Journal of Law and Economics 58 (1): 173–204. https://doi.org/10.1086/682910.

Larcker, D. F., L. Pomorski, B. Tayan, and E. Watts. 2022. “ESG Ratings: A Compass without Direction.” Stanford Closer Look Series. https://ssrn.com/abstract=4179647.

Li, T. 2018. “Outsourcing Corporate Governance: Conflicts of Interest within the Proxy Advisory Industry.” Management Science 64 (6): 2951–71. https://doi.org/10.1287/mnsc.2016.2652.

Ma, S., and Y. Xiong. 2021. “Information Bias in the Proxy Advisory Market.” Review of Corporate Finance Studies 10 (1): 82–135. https://doi.org/10.1093/rcfs/cfaa005.

Malenko, N., and Y. Shen. 2016. “The Role of Proxy Advisory Firms: Evidence from a Regression-Discontinuity Design.” Review of Financial Studies 29 (12): 3394–427. https://doi.org/10.1093/rfs/hhw070.

Malenko, A., N. Malenko, and C. Spatt. 2025. “Creating Controversy in Proxy Voting Advice.” Journal of Finance 80 (4): 2303–54. https://doi.org/10.1111/jofi.13438.

Miceli, T. 2024. “On the Impossibility of a Purely Objective Economic Theory of Crime.” Asian Journal of Law and Economics 15 (2): 209–19. https://doi.org/10.1515/ajle-2023-0135.

Nie, Y., Y. Kong, X. Dong, J. M. Mulvey, H. V. Poor, Q. Wen, et al. 2024. “A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges.” arXiv preprint arXiv:2406.11903. https://doi.org/10.48550/arXiv.2406.11903.

Nyreröd, T., and G. Spagnolo. 2021. “Myths and Numbers on Whistleblower Rewards.” Regulation & Governance 15 (1): 82–97. https://doi.org/10.1111/rego.12267.

Prall, K. 2021. “ESG Ratings: Navigating through the Haze.” Enterprising Investor, CFA Institute. (August 10) https://blogs.cfainstitute.org/investor/2021/08/10/esg-ratings-navigating-through-the-haze/.

Rose, P. 2021. Proxy Advisors and Market Power: A Review of Institutional Investor Robovoting. Manhattan Institute. https://ssrn.com/abstract=3851233.

Sah, S., and S. Rao. 2025. “Proxy Advisors: Rising amid Shareholder Activism.” Lexology. (February 24) https://www.lexology.com/library/detail.aspx?g=d2b13aa6-7ee0-4231-a72c-d03721ce6e9b.

Shaw, D. E. 2022. Keep the Change: Analyzing the Increase in ESG Ratings for U.S. Equities. https://www.deshaw.com/assets/articles/DESCO_Market_Insights_ESG_Ratings_20220408.pdf.

Song, H. S. 2018. “Improving Proxy Advisory Services in Korea.” In Capital Market Focus. Korea Capital Market Institute. https://www.kcmi.re.kr/flexer/view?fid=22116&fgu=002001&fty=004003.

Tanaka, W., and M. Iwasaki. 2023. “Homogeneity and Heterogeneity in How Institutional Investors Perceive Corporate and Securities Regulations.” European Business Organization Law Review 24 (3): 507–54. https://doi.org/10.1007/s40804-022-00260-4.

Thomas, R., A. Palmiter, and J. Cotter. 2012. “Dodd-Frank’s Say on Pay: Will it Lead to a Greater Role for Shareholders in Corporate Governance?” Cornell Law Review 97 (5): 1213–66. https://scholarship.law.cornell.edu/clr/vol97/iss5/6/.

Thorson, E. 2016. “Belief Echoes: The Persistent Effects of Corrected Misinformation.” Political Communication 33 (3): 460–80. https://doi.org/10.1080/10584609.2015.1102187.

Tuch, A. F. 2019. “Proxy Advisor Influence in a Comparative Light.” Boston University Law Review 99 (3): 1459–507. https://www.bu.edu/bulawreview/files/2019/08/TUCH-final-edits-4.pdf.

Ulen, T. 2024. “The Places We’ll Go.” Asian Journal of Law and Economics 15 (2): 281–302. https://doi.org/10.1515/ajle-2024-0043.

US National Institute of Standards and Technology. 2022. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. NIST Special Publication 1270.

Wieringa, M. 2020. “What to Account for when Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20), 1–18. New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3351095.3372833.

Received: 2025-07-03
Accepted: 2025-07-25
Published Online: 2025-08-14

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
