As countries and companies rush to develop new artificial intelligence (AI) technologies and usher in an AI-integrated society, their leaders must also take the time to recognize the importance of rules and guidelines for ethical AI development. While such discussions about the relationship between AI, society and values have revolved mainly around the potential for “technological singularity”—a hypothetical point in time in the future at which technological advancements become uncontrollable, resulting in irreversible changes to human civilization—the most recent G20 Ministerial Meeting on Trade and Digital Economy shifted focus toward establishing broader principles for a “human-centric AI society.”
The G20 ministerial statement recognized that AI, like other emerging technologies, “may present societal challenges, including the transitions in the labor market, privacy, security, ethical issues, new digital divides and the need for AI capacity building.” As this year’s G20 host, Japanese Prime Minister Shinzo Abe emphasized the urgency of addressing potential risks today, as AI becomes more integrated in society. Early experience demonstrates the potential negative consequences of postponing such considerations and debates.
In 2015, Amazon sought to accelerate and improve hiring for software developer jobs and other technical posts by developing a recruiting engine: a machine learning system trained on patterns in resumes submitted to the company over a 10-year period. Because most hires during that period were men, the system taught itself that male candidates were preferable and automatically penalized female candidates. As a result, while Amazon intended the AI engine to make its hiring process more efficient, the system inadvertently revealed the company’s potentially discriminatory hiring practices.
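The dynamic behind the Amazon case can be illustrated with a toy model (entirely hypothetical data and features, not Amazon’s actual system): when past hiring decisions carried a flat penalty against women, a classifier trained to reproduce those decisions learns a negative weight on the gender-correlated feature, even though that feature is irrelevant to job performance.

```python
# Toy sketch of bias inheritance (hypothetical data, not Amazon's system):
# a model trained on historically biased hiring outcomes learns to
# penalize a feature that merely correlates with gender.
import math
import random

random.seed(0)

def make_history(n=2000):
    """Simulated past hiring data: 'hired' depended on skill, but
    female candidates faced a flat penalty at equal skill."""
    data = []
    for _ in range(n):
        skill = random.random()                 # 0..1, job-relevant
        female = 1 if random.random() < 0.5 else 0
        # Biased historical decision: subtract a penalty for female candidates.
        hired = 1 if (skill - 0.3 * female) > 0.5 else 0
        data.append(((skill, female), hired))
    return data

def train_logistic(data, lr=0.5, epochs=300):
    """Plain logistic regression trained by gradient descent (stdlib only)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y) in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

w, b = train_logistic(make_history())
print(f"weight on skill:         {w[0]:+.2f}")  # positive: skill helps
print(f"weight on 'female' flag: {w[1]:+.2f}")  # negative: inherited penalty
```

The model never sees an instruction to discriminate; the negative weight emerges purely from reproducing the skewed historical labels, which is why auditing training data and learned weights matters as much as auditing the code itself.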
A 2015 study by the University of Washington and the University of Maryland found that women were significantly underrepresented in Google image search results across a variety of occupations. Researchers examined the top 100 image search results for 45 different jobs and found that, compared with actual gender data from the Bureau of Labor Statistics for each job, the image results showed significantly lower female representation than exists in reality. The study also highlighted how gender bias in information environments can shape searchers’ worldviews: participants reported a slight change in their perception of how male-dominated a field was after viewing skewed image results for that occupation. While AI-powered search helps deliver the most relevant results to users from content sources across the Web, Google’s developers must also consider the risk that AI will amplify negative social trends or skew public perception.
These early cases highlight the importance of developing principles for “human-centric AI” that are grounded in common shared values, including the promotion of inclusive economic growth, sustainable development and well-being. By embracing proper ethical standards of fairness, transparency, and explainability—applying AI in a manner that the results and the decision-making process can be understood by humans—both governments and companies can better guide and evaluate the integration of AI into society while also protecting fundamental human rights and values.
Realizing the Full Potential of AI
For companies, failure to adhere to such principles of “human-centric AI” poses risks to corporate values, societal missions and brand reputations. In the cases of Amazon and Google, the intentions underlying each company’s AI integration were sound, but guiding principles were not considered. Their early adoption of AI revealed blind spots in their development processes and exposed them to accusations of gender bias and discrimination. Careful attention to three dimensions of the APCO Agility Indicator—active leadership, shared advocacy and enterprising culture—could have helped Amazon and Google mitigate these risks and realize the full potential of AI in pursuit of their goals. The following points are key.
- Taking active leadership requires seeking risk intelligence, leaning into tech advancements, applying predictive analysis, listening actively and operating with flexibility. An active leader knows how their decisions impact not only their businesses, but also their customers, employees and partners—a very human-centric and inclusive viewpoint that considers more than just the financial bottom line.
- Promoting shared advocacy requires investing in society, advocating for others and actively engaging stakeholders. This can help companies be more attuned to social issues and the market that they are operating in.
- Fostering an enterprising culture creates a forward-looking business environment. Curious leaders reward smart risk-taking and reinvention, encouraging employees to learn from mistakes, adapt and push boundaries to capitalize on opportunities.
The growing importance of AI is undeniable. In Japan, the development and adoption of AI is a core agenda item under the “Society 5.0” initiative—an effort to promote a human-centered society that highly integrates cyberspace (virtual space) and physical space (real space). The Japanese government also announced its national AI strategy for 2019, which establishes education reform to break down the talent divide between science and the humanities and aims to reinforce all individuals’ AI literacy, regardless of educational background. Amid this transition toward the ubiquitous application of cutting-edge technology, staying human-centric remains the baseline for companies seeking to unlock new business value and solutions.