What Does Responsible AI Mean? My Remarks at the World Economic Forum

On my third day at the World Economic Forum in Davos, I had the honor of being invited by The Financial Times and Casper Labs to explore the pressing and complex issue of Responsible AI, a topic that has become one of our industry's most challenging and important.

During a fireside chat at The Hub titled "AI: Behind the Tech," I was interviewed onstage by Larry Adams, the dynamic former head of HBO Max, who is now the CEO of X Stereotype, a minority-owned business at the forefront of AI marketing. Our session offered a unique look at the ethical considerations of AI's development, particularly how to prevent bias in AI search and innovation.

Leading-edge companies like Blaize are constantly addressing this issue. We are committed to building a future where human performance is elevated and amplified by generative AI, and where that is done ethically. This means establishing guardrails for AI systems and defining shared standards for their ethical operation. The dialogue during the session focused less on futuristic scenarios and more on the immediate need for AI company leaders to devise and embrace those shared standards. Generative AI holds enormous potential to reshape society, and whether this transformation is positive or negative depends on the decisions AI leaders make in the near term.

Marta Belcher, Chairwoman of the Filecoin Foundation, emphasized the importance of viewing AI as part of a larger puzzle: organizations should think critically about convergence opportunities that democratize access to data rather than adopting AI for its own sake. A Financial Times/Casper Labs Hub panel explored the private sector's role in realizing responsible AI standards, featuring representatives from Blaize, Microsoft, Google Cloud, the Filecoin Foundation, and the ETH AI Center. The discussion highlighted the importance of public-private collaboration and the need for transparent governance standards.

Currently, there are too many competing frameworks, such as the proposed AI Bill of Rights in the United States and the EU's AI Act. Panelists agreed that legislative clarity is essential for the private sector to align with appropriate AI standards that balance innovation with data privacy and security. The Filecoin Foundation's Clara Tsao emphasized that the answer lies in public-private collaboration: prominent players in the AI industry, including Google, Microsoft, and IBM, should eventually adhere to a clear set of standards, though those standards have yet to be defined.

During our lively session, Larry described how his New York-based company, X Stereotype, examines large language models for unconscious bias and racism in content. We recognize that unconscious bias is present in all of us, and that technology, amplified by AI, can help close the gap and promote equality. By using large language models to examine content for bias and tie it to purchase intent, X Stereotype empowers content creators and marketers to optimize their creative work.

Larry and I agreed that responsible AI means putting people at the center of AI strategies and using technology to scale solutions for content creators. AI can create brand equity and drive business growth, offering tools and data to make better, faster decisions while addressing the need for diversity and inclusion in AI development. "Inclusion drives business, and we're turning that into a business metric to measure," Larry said.

Blaize, as an AI computing company, is dedicated to democratizing AI by bringing it to the masses. Our core innovation is a novel processor that is significantly more efficient than existing GPU approaches. With a code-free software platform built on top of it, our products make AI accessible to practitioners in fields ranging from smart city planning to automotive engineering. We aim to bring AI out of the data center and onto the edge to maximize its value.

But it's principally about the people behind the technology and leveraging that technology to deliver solutions at scale for content creators. "There is a large demand for understanding diverse audiences and creating content that maximizes engagement," Larry said. "But how do you get there? We're not able to hire fast enough to get there, and education is not getting us there fast enough, so this is a chance for us to create new signals and implement that into an AI ecosystem to put on people's laptops and PCs, tools in which we can actually make change."

We concluded that businesses like ours will pursue this change not only because it is the right thing to do, but because it is the right business decision. "The only way we can grow is to optimize and grow the audience," Larry said. "The only way you can attract new audiences is by putting inclusion at the center of your business strategy." AI can close the diversity gap and create brand equity, making inclusion a business driver and thus an easier decision. Having tools and data at your fingertips helps consumers and companies make better, faster decisions.

At Blaize, we see the biggest barrier to widespread AI adoption as the lack of affordable, smaller solutions, or what I term the "full-stack" problem. We aim to bring AI to the masses, out at the edge, with smaller, more affordable processors. When we look at the world, AI has probably touched one percent of it. Google and Meta have brought value deep in the data center, but the small to medium business market represents about 90 percent of the world's economy. And if you take data as a measure of AI's reach, petabytes of it sit not in the cloud but out on the edge. We have built smaller, more business-friendly processors and software to enable this evolution.

The world needs more data scientists, and our Blaize AI Studio® software platform helps bridge that gap. Our full-stack, edge-enabled AI solutions, with easy-to-use low-code software, are helping power the AI revolution, meeting the needs of smart cities and delivering smart AI solutions with mobility for retailers, small businesses, schools, and hospitals. With our low-power Graph Streaming Processor (GSP®) and easy-to-use AI Studio, we are helping bring real technology to the masses and create equitable, sustainable AI.

As AI models have improved through larger-scale deployments, putting people at the center of sound AI strategies is the way to avoid the problems social media ran into: it grew too big, largely ungoverned and unregulated, and spun out of control globally. We agreed that better insights, better governance, and a better handle on actual outputs through measurement will give us a better experience with large-scale deployments of this technology. Only then will the models become more inclusive. With unconscious bias removed from the feedback loop, AI products only get better.

There is a real benefit for the companies that get it right.

Dinakar Munagala is the co-founder and CEO of Blaize, based in El Dorado Hills, CA.