Executive alignment on responsible AI use and AI governance remains a work in progress.
Only 17% of the executives surveyed said their companies have documented a position on responsible AI. Another 50% said their leadership teams hold a position on responsible AI but have not yet fully documented it.
The numbers may reflect the still-evolving regulatory landscape or the sheer complexity of AI governance. Organizations that look to deploy generative AI at scale will be held accountable for ethical use standards, explainability, bias detection methods and other responsible use considerations. For many, this is uncharted territory.
Organizational AI steering committees may ultimately inherit responsibility for arbitrating responsible use. Our survey findings show clear movement toward appointing steering committees for AI projects (40%) and data governance councils (42%) to inform generative AI readiness.
Respondents whose companies have documented a position on responsible AI were significantly more likely to have an AI steering committee and a data governance council. The latter's importance cannot be overstated, given that AI—generative or otherwise—is only as good as the data that powers it.
Given the critical, complex and nuanced nature of the work, we also anticipate that a new class of AI governance professionals will emerge to help their companies orchestrate large language model operations (LLMOps), data privacy, risk remediation, governance and other concerns of generative AI at scale. These professionals will help their organizations pioneer the art of the possible with the technology.