In our latest episode of Leading with Data, we had the privilege of speaking with Ravit Dotan, a renowned expert in AI ethics. Ravit Dotan’s diverse background, including a PhD in philosophy from UC Berkeley and her leadership in AI ethics at Bria.ai, uniquely positions her to offer profound insights into responsible AI practices. Throughout our conversation, Ravit emphasized the importance of integrating responsible AI considerations from the inception of product development. She shared practical strategies for startups, discussed the significance of continuous ethics reviews, and highlighted the critical role of public engagement in refining AI approaches. Her insights provide a roadmap for businesses aiming to navigate the complex landscape of AI responsibility.
You can listen to this episode of Leading with Data on popular platforms like Spotify, Google Podcasts, and Apple. Pick your favorite to enjoy the insightful content!
Let’s look into the details of our conversation with Ravit Dotan!
As the CEO of TechBetter, I’ve thought deeply about the potential dystopian outcomes of AI. The most troubling scenario for me is the proliferation of disinformation. Imagine a world where we can no longer rely on anything we find online, where even scientific papers are riddled with misinformation generated by AI. This could erode our trust in science and reliable information sources, leaving us in a state of perpetual uncertainty and skepticism.
My journey into responsible AI began during my PhD in philosophy at UC Berkeley, where I specialized in epistemology and philosophy of science. I was intrigued by the inherent values shaping science and noticed parallels in machine learning, which was often touted as value-free and objective. With my background in tech and a desire for positive social impact, I decided to apply the lessons from philosophy to the burgeoning field of AI, aiming to detect and productively use the embedded social and political values.
Responsible AI, to me, is not about the AI itself but the people behind it – those who create, use, buy, invest in, and insure it. It’s about developing and deploying AI with a keen awareness of its social implications, minimizing risks, and maximizing benefits. In a tech company, responsible AI is the outcome of responsible development processes that consider the broader social context.
Startups should think about responsible AI from the very beginning. Delaying this consideration only complicates matters later on. Addressing responsible AI early on allows you to integrate these considerations into your business model, which can be crucial for gaining internal buy-in and ensuring engineers have the resources to tackle responsibility-related tasks.
Startups can begin by identifying common risks using frameworks like NIST’s AI Risk Management Framework (AI RMF). They should consider how these risks could harm their target audience and their company, and prioritize accordingly. Group exercises for discussing and ranking these risks can raise awareness and lead to a more responsible approach, as in the sketch below. It’s also vital to tie responsible AI to business impact to ensure ongoing commitment to these practices.
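To make that exercise concrete, here is a minimal sketch of a likelihood-times-impact risk register in Python. The risk names and scores are hypothetical placeholders a team would fill in during its own discussion; the AI RMF itself does not prescribe any particular scoring scheme.

```python
# A minimal sketch of a risk-prioritization exercise. The risks and their
# likelihood/impact scores below are hypothetical examples a team would
# replace with the output of its own group discussion.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("Biased outputs against a user group", likelihood=4, impact=5),
    Risk("Hallucinated facts presented as answers", likelihood=5, impact=3),
    Risk("Leakage of personal data via prompts", likelihood=2, impact=5),
]

# Rank risks so the team addresses the highest-priority ones first.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{risk.priority:>2}  {risk.name}")
```

Even a rough ranking like this gives engineers a defensible starting point for deciding which mitigations to fund first.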
I don’t see responsible AI and growth as a trade-off. Addressing responsible AI can actually propel a company forward by allaying consumer and investor concerns. A concrete plan for responsible AI can strengthen market fit and demonstrate to stakeholders that the company is proactive in mitigating risks.
Companies vary in their approach. Some, like OpenAI, release products and iterate quickly upon identifying shortcomings. Others, like Google, may hold back releases until they are more certain about the model’s behavior. The best practice is to conduct an ethics review at every stage of feature development, weighing the risks and benefits before deciding whether to proceed.
A notable example is Amazon’s scrapped AI recruitment tool. Although gender was not an input feature, the system turned out to be biased against women: it had learned to penalize proxies for gender, such as the word “women’s” on a résumé. Amazon chose to abandon the project, a decision that likely saved it from lawsuits and reputational damage. The episode underscores the importance of testing for bias and considering the broader implications of AI systems.
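The Amazon case suggests a simple check any team can run: join model predictions back to a held-out protected attribute and compare outcomes across groups. Below is a minimal sketch under that assumption; the predictions and group labels are fabricated for illustration.

```python
# A minimal sketch of a bias test that can catch an Amazon-style failure:
# even when gender is not a model feature, predictions can be joined back
# to a held-out protected attribute and selection rates compared.
# The (predicted_hire, group) pairs below are hypothetical.
from collections import defaultdict

predictions = [
    (1, "woman"), (0, "woman"), (0, "woman"), (0, "woman"),
    (1, "man"),   (1, "man"),   (1, "man"),   (0, "man"),
]

selected = defaultdict(int)
total = defaultdict(int)
for hired, group in predictions:
    selected[group] += hired
    total[group] += 1

rates = {g: selected[g] / total[g] for g in total}
print("Selection rates:", rates)

# A common rule of thumb: flag the model if one group's selection rate is
# under 80% of another's (the "four-fifths rule" from US hiring guidelines).
if min(rates.values()) / max(rates.values()) < 0.8:
    print("Warning: disparate impact detected; investigate before release.")
```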
Companies must be adaptable. If the primary metric used to measure bias becomes outdated because the business model or use case has changed, they need to switch to a more relevant one. It’s an ongoing journey of improvement: start with one representative metric, measure and improve against it, then iterate to address broader issues.
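One way to make “start with one metric, then iterate” concrete is to keep the metric pluggable. The sketch below uses two standard fairness definitions, demographic parity difference and equal opportunity difference, on hypothetical arrays; switching metrics then means changing one function call rather than rebuilding the pipeline.

```python
# A minimal sketch of a pluggable bias metric. Both definitions are
# standard; the predictions, labels, and groups below are hypothetical.
import numpy as np

def demographic_parity_diff(y_pred, groups):
    """Max gap in positive-prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_diff(y_pred, y_true, groups):
    """Max gap in true-positive rate across groups."""
    tprs = [y_pred[(groups == g) & (y_true == 1)].mean()
            for g in np.unique(groups)]
    return max(tprs) - min(tprs)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Start with one representative metric...
print("Demographic parity diff:", demographic_parity_diff(y_pred, groups))
# ...and switch to a more relevant one when the use case changes.
print("Equal opportunity diff:", equal_opportunity_diff(y_pred, y_true, groups))
```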
For responsible AI, I don’t think the open-source versus proprietary distinction is the crucial one; what matters is which AI platform a company builds on. Different platforms can carry different levels of embedded bias, so it’s essential to test them and factor responsibility into the choice of foundation for your technology.
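A lightweight way to compare candidate platforms is to send each one prompts that differ only in demographic cues and review the outputs side by side. The sketch below is one hedged way to do this; the `generate` callable, the template, and the probes are all hypothetical stand-ins for whichever model APIs you are evaluating.

```python
# A minimal sketch of auditing candidate foundation models before adopting
# one. `generate` stands in for a real model client; the template and
# probe values are hypothetical.
from typing import Callable

TEMPLATE = "Write a one-line performance review for {name}, a {role}."
PROBES = [
    {"name": "John", "role": "software engineer"},
    {"name": "Maria", "role": "software engineer"},
]

def audit(generate: Callable[[str], str]) -> dict:
    """Run prompts that differ only in demographic cues and collect the
    outputs side by side for human review."""
    return {p["name"]: generate(TEMPLATE.format(**p)) for p in PROBES}

# Swap the stub lambda for a real client call per platform being compared,
# then review the paired outputs for skew.
print(audit(lambda prompt: f"[model output for: {prompt}]"))
```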
When a metric does have to change, embrace the change. Just as in other fields, a shift in metrics is sometimes unavoidable. It’s important to start somewhere, even if it’s not perfect, and to treat it as an incremental improvement process. Engaging the public and outside experts through hackathons or red-teaming events can provide valuable insights and help refine your approach to responsible AI.
Our enlightening discussion with Ravit Dotan underscored the vital need for responsible AI practices in today’s rapidly evolving technological landscape. By incorporating ethical considerations from the start, engaging in group exercises to understand AI risks, and adapting to changing metrics, companies can better manage the social implications of their technologies.
Ravit’s perspectives, drawn from her extensive experience and philosophical expertise, stress the importance of continuous ethics reviews and public engagement. As AI continues to shape our future, the insights from leaders like Ravit Dotan are invaluable in guiding companies to develop technologies that are not only innovative but also socially responsible and ethically sound.
For more engaging sessions on AI, data science, and GenAI, stay tuned with us on Leading with Data.