Artificial intelligence (AI) is a business opportunity on a scale not seen before. According to McKinsey, it has the potential to deliver additional global economic activity of around $13 trillion by 2030. For Katy Milner, a partner specialising in telecommunications at the global law firm Hogan Lovells, it’s an exciting time for her clients.
“We’re already seeing companies use AI in interesting ways,” she says. “For example, there’s a European carrier that’s been using AI to autonomously scan its network, identify where anomalies are occurring, and address them by allocating resources to fix the issues. Another great example is in cybersecurity, where a Belgian company used an AI-powered tool to scan for fraud. In a three-month period, the AI-powered tool detected fraud 185% more often than the non-AI tool. These are just some of the ways our connected industry is seizing the opportunity of AI.”
Tilo Bonow, founder and CEO of Piabo Communications, shares the excitement. His agency works with companies and venture capital funds around the world, helping them communicate better through marketing, public relations, social media content, and influencer relations. This is particularly important in the context of AI, so that people don’t feel threatened but instead see the technology as something that can truly enrich and improve their lives.
“Whether we are building AI, selling AI, or inventing new technology, it all comes down to trust,” he says. “Trust isn’t only a confident relationship with the unknown but also something that can help us take society along with us. Especially in terms of AI, there are fears. Will I lose my job? What will happen with my family? Will it be harmful? Where does the data come from? There are many questions, and I believe that good communication is fundamentally important, not just in the B2B world to sell a product but also within society.”
AI reflects society’s views
Of course, AI isn’t all good news. When Koliwe Majama took up her position as senior program officer at the Mozilla Foundation, she was curious about the implications of AI. Now, as she’s worked with senior fellows, most of whom are academics engaged in research, it’s become apparent that AI acts as a mirror to human behaviour.
“Acknowledging that AI is developed by humans is crucial, and as humans, we influence how AI functions by continuously feeding it information,” she says. “Initially, there was an expectation that as we input data into these technologies, some form of unbiased output would emerge, but this hasn’t been the case. Our continuous use, in terms of design and development, shows that AI is just as unfair as we are because the decisions or learnings it adopts are based on the trends, beliefs, and things we emphasise offline as a society.”
Majama likes to think of AI as “our opinions embedded in code”. Indeed, studies conducted with senior fellows at the Mozilla Foundation indicate that the most pressing issue right now is the perpetuation of existing injustices. For instance, she worked with a senior fellow named Apryl Williams, who demonstrated the bias in dating applications such as Bumble, Tinder, and Hinge. Earlier this year she launched a report (ironically on Valentine’s Day) titled Not My Type: Automating Sexual Racism in Online Dating.
“It showed a continuous replication of selections and modelling based on race,” Majama says. “This is because the machines have recognised certain trends in terms of race, ethnicities, and skin colours.” She also shared the story of how she met her husband, who is white, but they matched only after she dyed her hair blonde, suggesting that it was really her skin colour that delayed their match.
What concerns Majama is the fact that institutions are now using AI systems to facilitate public utilities, such as immigration and food aid. She relates how a cognitive scientist who sits on the UN’s AI panel demonstrated that some of these systems still discriminate, especially in dealing with refugees. This is in line with the ongoing problems that have been an issue for facial detection algorithms, which continue to work less well for people who aren’t white or male.
“We at the Mozilla Foundation are also continuously exploring the extent to which AI systems, particularly in public settings, will continue to discriminate,” Majama says. “We recently issued a grant to a South African who is exploring the social security system. It’s been hailed as a digital makeover of social security but it has disadvantaged over 75% of applicants as well… It’s just been a continuation of my realisation of how the rights that apply to us offline are replicated online.”
We must bring diverse perspectives to the table
How do we resolve this? For as long as she’s worked in the field of technology governance, Majama has heard talk of ‘multi-stakeholderism’ – the utopian idea of bringing everyone from government and industry to the table. But she’s found it hard in practice.
“Depending on the governance of a specific country, whether it be authoritarian or democratic, there is sometimes an adversarial relationship between governments and civil society,” she says. “And then when you look at policymakers, specifically, almost every single country has a portfolio committee in parliament that debates the respective legislative development. So it’s been difficult to get consensus.”
Her suggestion is to make these multi-stakeholder convenings more practical, especially when there is a call for interventions or submissions on a specific law. Rather than just a simple tick-box exercise, there should be evidence in follow-ups. That way, ordinary users won’t be left behind.
“Of course, there’s a lot going on in the legal and regulatory space related to AI,” Milner adds. “And taking a risk-based approach is useful because AI can do so many things; it’s hard to put it all in one bucket… There is an opportunity for public engagement in this process, and I would encourage companies to be watching these proceedings and thinking about what they can contribute. Consider what will work to allow innovation to prosper while still having those protections we want for the highest risk cases.”
How to launch an ethical AI product
Before you launch an AI product, take some advice from Katy Milner:
- Identify specific risks
Begin by pinpointing the specific risks associated with your AI application, focusing on potential human and civil rights issues. Ensure that your AI systems do not perpetuate existing societal injustices and align with global principles such as equity, non-discrimination, and privacy rights. - Implement robust security measures
Secure the AI system by protecting the data it processes with strong cybersecurity safeguards. This includes securing critical infrastructure from potential threats. - Establish transparency
Develop trust with users by being transparent about AI interactions. Clearly communicate whether content, images, or customer service is generated by AI. This openness will help in building a trustworthy relationship with the public. - Develop internal AI governance
Before initiating any AI-related projects, establish a set of internal principles. This framework should guide the ethical use of AI throughout your organisation and ensure consistent practices. - Assess data sources and bias
Evaluate the sources of data your AI will use to ensure they are reliable and free from inherent biases. Consider the potential for human bias to affect the AI’s decisions and work actively to minimise these risks. - Involve equity experts
Include equity experts in the AI development process to perform equity assessments. These specialists can provide valuable insights that might be overlooked by your team, particularly regarding the impact on underrepresented groups. - Conduct regular testing and documentation
Implement a rigorous testing phase to validate the AI system’s functionality and fairness. Document all processes and results to enhance transparency and accountability. - Perform regular audits and ‘red teaming’
After deploying the AI system, schedule regular audits to ensure continued compliance with ethical standards. Use ‘red-teaming’ strategies (such as simulating cyber-attacks from inside the firm) to challenge the system’s robustness and identify vulnerabilities that need to be addressed.
How to market your AI product
To communicate your message effectively, here’s what Tilo Bonow suggests:
- Define and communicate company values
Clarify what your company stands for and ensure these values are reflected in every aspect of your AI product. Be transparent about the technology, the data sources, and the algorithms you use. This fosters trust and relates directly to how your users perceive the integrity of your product. - Establish a two-way communication channel
Engage with your audience through open dialogues, not just top-down communication. This includes using platforms where feedback can be given and genuinely considered, which helps in building a community around your product. - Detail usage guidelines clearly
Explain how users should interact with your AI tools and what the terms of use are. Include both legal and ethical guidelines to ensure users understand their rights and responsibilities. - Integrate AI into your corporate culture
Ensure that your company’s culture embraces AI ethically. The idea that culture eats strategy for breakfast implies that without a supportive culture, even the best strategies can fail. Make ethical AI use a core part of your business philosophy. - Develop strong developer relations
Offer APIs (application programming interfaces, or software protocols) and tools for developers to engage with your AI products. Support them with resources and clear documentation to encourage innovation and proper use of your technology. - Communicate the impact on stakeholders
Address the concerns of all stakeholders, including how AI may affect jobs, data privacy, and corporate responsibility. Be honest about the implications of AI integration in your business operations. - Maintain consistent messaging across all platforms
Ensure that your company’s statements about AI are consistent across all public-facing platforms, from the CEO’s speeches to marketing materials and beyond. Inconsistency can undermine trust and confidence in your brand. - Simplify complex information
Break down complex AI concepts into simple, relatable stories that resonate emotionally with your audience. As demonstrated by Steve Jobs with the iPod (“a thousand songs in your pocket”), focusing on concrete benefits rather than technical specifications can make a more memorable impact.