When a firm talks about AI, is it substance, hype, or risk signalling?
Yang Cao, Miao Liu, Jiaping Qiu, and Ran Zhao study the information content of voluntary AI disclosure in their paper "Information in Disclosing Emerging Technologies: Evidence from AI Disclosure".
Using keyword screening and a ChatGPT-based classification, they ask whether AI disclosure from 2010 to 2023 reflects genuine AI engagement, predicts future performance, and informs how capital markets price AI-related risk.
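As a rough illustration of the keyword-screening step, the sketch below flags a filing excerpt as AI-related when it contains dictionary terms. The keyword list and function names are hypothetical; the paper's actual dictionary and classification pipeline are not reproduced here.

```python
import re

# Hypothetical keyword dictionary; the paper's actual term list is not shown.
AI_KEYWORDS = [
    "artificial intelligence", "machine learning", "deep learning",
    "neural network", "natural language processing", "generative ai",
]

# One case-insensitive pattern with word boundaries, so that substrings
# like "ai" inside "air" do not produce false matches.
_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in AI_KEYWORDS) + r")\b",
    re.IGNORECASE,
)

def screen_disclosure(text: str) -> dict:
    """Count AI keyword hits in a filing excerpt and flag it as AI-related."""
    hits = _PATTERN.findall(text)
    return {"n_hits": len(hits), "is_ai_disclosure": len(hits) > 0}

example = ("We are investing in machine learning and artificial "
           "intelligence to improve underwriting.")
print(screen_disclosure(example))  # {'n_hits': 2, 'is_ai_disclosure': True}
```

A pure keyword screen like this is cheap to run over thousands of filings, which is why the authors pair it with an LLM-based classification to catch context the dictionary misses.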
Their main conclusions and implications include:
- For investors, combining AI disclosure with AI-risk assessments can improve forward-looking underwriting and help prioritize engagement.
- Ethical, regulatory, and cybersecurity language links AI adoption to governance and social concerns that can become priced tail risk.
- For issuers, the evidence implies that specificity about use cases and risk controls is more decision-relevant than generic AI statements; risk disclosure is key when AI is material.
The paper flags some limitations: keyword-based identification can miss implicit context and can misread the same term across settings, and the voluntary, strategic dimension of disclosure is not explicitly captured in the study.