Episode notes
Listen as our featured experts explore the challenges and opportunities of responsible AI. The discussion covers the role of data and how to create frameworks for a cohesive strategy for generative AI across stakeholder groups.
Our experts suggest how organizations can approach implementation of responsible AI practices and ethical principles and policies, including broader stakeholder representation.
Featured experts
- Rachel Higham, CIO, WPP
- Elizabeth Adams, CEO, EMA Advisory Services
What you will hear
“Fundamentally, you can't achieve responsible AI without a responsible approach to data. I think several concerns arise. Data, because it's historical, often reflects historical biases and social inequalities. Collecting and analyzing personal data can infringe on privacy rights if it's not handled well. We must handle data transparently. We must obtain informed consent and we must protect sensitive information. Think about transparency and explainability: AI models can be incredibly complex, making it challenging to understand and be transparent about their decision-making processes.”
– Rachel
“… Organizations where employees can clearly speak about the responsible AI definitions, the vision for the organization, and where they fit in do very well… In those organizations where employees feel that responsible AI is an integral experience, absolutely, there are training courses, there's an AI Center of Excellence, a hub where they can contribute, share and learn about the things they are participating in, and exchange information. But what I'm happy about is to at least see the conversations happening at the executive level, where they understand that a vision is absolutely essential to drive your organization forward in shaping its future in responsible AI.”
– Elizabeth
“Now we're starting to think about how we might design automated, structured, and transparent ways of spotting bias, which will be far better than what we perceive or pretend that humans could have achieved in the past. So I think we can reframe it as a positive outcome, because we'll be proactively looking for bias, and we'll be building tools, processes, practices, skills, and roles to do that specifically, whereas before, we never have.”
– Rachel