Removing Inherent Bias in AI: Addressing this vital issue at Davos

During my week in Davos at the 54th World Economic Forum, I had the privilege of engaging in wide-ranging and in-depth conversations about AI's present and future state. I also had the opportunity to meet numerous young, smart entrepreneurs with promising start-ups in the AI industry.

The first day of this year’s forum coincided with Martin Luther King Day, and it began with bright and sunny weather. I returned to the historic Schatzhalp Hotel, perched atop a mountain, to participate in a Diversity and Inclusion panel sponsored by Inkwell, titled “AI and Transformation: The Power and Potential of New Narratives.”

The discussion, held at “Inkwell Beach,” with the majestic Swiss Alps as a backdrop, was led by Christy Tanner, senior advisor to Philips Electronics and Reykjavik Global. She humorously remarked, “If you came to Davos and didn’t want to hear about AI, then you should leave now.” This set the tone for what would be a profound discussion on AI.

A key takeaway from the session was the importance of removing bias from the AI systems that will power the future world economy, so that the AI revolution leaves no one behind.

The panelists, savvy in both technology and marketing, recognized the biases inherently built into most large language models and generative AI products currently available for general use. While these models open new opportunities, they have been trained on biased content, creating the challenge of improving them and eliminating bias from AI products and AI-generated content. I emphasized that as AI proliferates across different use cases and regions, more data automatically feeds into these models, compounding the bias already embedded in complex AI solutions.

Improving these models and removing bias is essential to ensure their usability and longevity. It makes one consider which individuals are being left behind and how to improve things. Promising approaches include investing in equitable access to technology, expanding data literacy training, and developing ethical AI algorithms that are unbiased and inclusive, with transparent design and accountability mechanisms. The challenge is daunting but necessary to achieve AI democratization and create a sustainable approach to deploying AI.

Larry, another panelist, stressed that the people working in the AI space are key to making this happen. He pointed out the need to widen the aperture and bring diverse perspectives into AI development and decision-making.

Michael Treff, another panelist, discussed the three key aspects of AI: the data input, the decisions made by AI, and how those decisions are expressed. He emphasized that bias can exist at any point in this chain and urged business leaders to think holistically about it.

Our discussions centered on making AI more inclusive, improving governance, and ensuring that AI benefits diverse audiences. In leading Blaize, my vision for the future is one in which we democratize AI, making it accessible to all while creating sustainable, energy-efficient solutions. This approach will help move society toward a better future that leaves no one behind.

We concluded that removing bias from AI is not just a technological challenge but also a social and ethical one.

Dinakar Munagala is the co-founder and CEO of Blaize, based in El Dorado Hills, CA.