
Information in Disclosing Emerging Technologies: Evidence from AI Disclosure

When a firm talks about AI, is it substance, hype, or risk signalling?

Yang Cao, Miao Liu, Jiaping Qiu, and Ran Zhao study the information content of voluntary AI disclosure in their paper "Information in Disclosing Emerging Technologies: Evidence from AI Disclosure".

Using keyword screening and a ChatGPT-based classification of corporate filings from 2010 to 2023, they ask whether AI disclosure reflects genuine AI engagement, predicts future performance, and informs how capital markets price AI-related risk.
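The two-stage identification, a keyword screen followed by an LLM read of the flagged text, can be sketched roughly as below. The keyword list, prompt, labels, and model name are illustrative assumptions, not the authors' actual dictionary or classification scheme.

```python
import re
from openai import OpenAI

# Illustrative AI keyword list; the paper's actual dictionary is richer.
AI_KEYWORDS = [
    "artificial intelligence", "machine learning", "deep learning",
    "neural network", "natural language processing", "computer vision",
]
KEYWORD_RE = re.compile("|".join(re.escape(k) for k in AI_KEYWORDS), re.IGNORECASE)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def keyword_prefilter(paragraphs: list[str]) -> list[str]:
    """Keep only paragraphs that mention at least one AI-related keyword."""
    return [p for p in paragraphs if KEYWORD_RE.search(p)]


def classify_paragraph(paragraph: str) -> str:
    """Ask an LLM whether the paragraph is a substantive AI disclosure.

    Returns one of: 'substantive', 'boilerplate', 'risk_factor', 'not_ai'.
    The prompt and label set are illustrative, not the authors' own scheme.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("You label excerpts from corporate filings. Reply with exactly one of: "
                         "substantive, boilerplate, risk_factor, not_ai.")},
            {"role": "user", "content": paragraph},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()


# Usage: flag a filing as AI-disclosing if any prefiltered paragraph
# is classified as AI-related rather than incidental keyword noise.
# candidates = keyword_prefilter(filing_paragraphs)
# labels = [classify_paragraph(p) for p in candidates]
# discloses_ai = any(label in {"substantive", "risk_factor"} for label in labels)
```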

Their main conclusions include:

  • Voluntary AI disclosure rises steeply, from 2.36% of US public firms in 2010 to 20.02% in 2023, with disclosures concentrated in the business description, risk factor, and MD&A sections.
  • A 1% rise in AI-skilled employee share corresponds to a 0.6% higher probability of AI disclosure, consistent with disclosure reflecting substantive engagement rather than cosmetic signalling.
  • Disclosing firms show a 13.9% increase in headcount and a 16% increase in sales in the following year, whereas the AI-skilled employee share on its own does not predict the same employment growth (a sketch of this type of firm-year regression follows the list).
  • AI-disclosing firms have 6.1% higher next-year capital expenditures and 15.3% higher R&D spending relative to the sample mean.
  • AI disclosers experience a 23.04% decrease in COGS-to-sales, a 14.28% decrease in COGS per employee, and a 13.36% drop in operating expenses per employee in the following year.
  • Disclosing AI as a risk factor predicts higher firm tail risk, measured with stock-return and option-implied volatility, especially when the risk is framed in regulatory, competitive, or ethical terms.
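To make the growth estimates above concrete, here is a minimal sketch of the kind of firm-year panel regression behind them. The data file, column names, and controls are hypothetical, and the specification is simplified relative to the paper's.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: one row per firm and fiscal year.
# Columns assumed: firm_id, year, ai_disclosure (0/1), ai_employee_share,
# log_sales_growth_t1 (next-year log sales growth), size, leverage.
df = pd.read_csv("firm_year_panel.csv")  # placeholder data source

# Next-year sales growth regressed on the disclosure indicator, with
# basic controls, year fixed effects via dummies, and standard errors
# clustered by firm.
model = smf.ols(
    "log_sales_growth_t1 ~ ai_disclosure + ai_employee_share + size + leverage"
    " + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

print(model.summary())
# The coefficient on ai_disclosure is the kind of estimate behind the
# growth figures above; the paper's actual specification and controls differ.
```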

This research suggests that combining AI disclosure signals with AI risk assessments can improve forward-looking underwriting and engagement priorities for investors.

Ethical, regulatory, and cybersecurity language links AI adoption to governance and social concerns that can become priced tail risk.

For issuers, the paper’s evidence implies that specificity about use cases and risk controls is more decision-relevant than generic AI statements, and that explicit risk disclosure matters most when AI exposure is material.

The paper explicitly flags some limitations: keyword-based identification can miss implicit references to AI and misread the same term across settings, and the voluntary, strategic dimension of disclosure is not explicitly captured in the study.