We have come so far
The concept of artificial intelligence is not a recent revelation, but the actual existence of such technology and its field of research—founded in the summer of 1956—is one of the most momentous developments in recent sociotechnical history. As AI subfields and applications, such as machine learning and robotics, become pervasive and influence the daily lives of millions of people across the world, the relationships and expectations among governments, businesses and societies will become increasingly complex.
This has been true for every major industrial paradigm. Connecting the dots between these new technologies, their exponential development and their effects on society reveals the risks, benefits and opportunities that decision makers need to consider and navigate to gain the best of what is happening across the value chain.
The speed of technological innovation has outpaced the rate at which we can institutionalize and evenly allocate the benefits of these inventions. For now, as the creation and adoption of AI still fundamentally require human input, we must assess the risks to stakeholders and communicate the benefits to businesses and society. In the near future, however, we will see trustworthy AI systems that create other AI systems; these generative and adaptive systems will have agency of their own, albeit initially designed and modeled by humans.
We are creating the future now, and it is up to boards of directors and C-suites to determine what happens next in this story. However, as consumers and members of the workforce, we should all be aware of and understand the ethical implications as well as the risks and benefits of adopting AI. The impact is real: as many organizations continue to digitize and AI displaces certain types of jobs, initiatives across the labor market are evaluating workforce impact and investing in upskilling and reskilling.
The data is the story
Economies of scale and computational innovation have created a seemingly unstoppable cloud computing infrastructure, driving ever greater demand to capitalize on the explosion of data.
Analyzing personal data has allowed owners of AI systems to create predictive models of personal preferences and group behavior. For example, mass data collection and surveillance systems are already being applied in societal contexts. Certain uses of AI and augmented reality have caused ethical conflicts, inspiring employee activism at tech giants like Amazon and Google. These conflicts, combined with other competitive economic forces, will continue to recontextualize AI and the potential uses of data in unexpected applications, and not always in ways that are acceptable to society.
Training AI systems: critical decisions
People are at the center of AI: these systems analyze billions of data points through probabilistic reasoning, affecting all sorts of decisions and outcomes that impact individuals, families, cities, organizations, markets and even entire countries. Certain AI-generated decisions and determinations—including loan and school application results—can have very personal implications for people. Additionally, actions based on AI-generated predictions—natural language generation systems that transform statistical inference into conversation, systems that perform predictive maintenance on power grids or water treatment plants, or systems that prevent fraud by detecting anomalies in financial transactions—can have real-life consequences.
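The fraud-detection case above can be sketched in miniature. The example below flags a transaction whose amount deviates sharply from a customer's historical baseline, using a simple z-score test. This is an illustrative assumption, not how any particular production system works: real fraud models use many features and learned classifiers, and the data here is synthetic.

```python
# Minimal sketch of anomaly detection for financial transactions:
# flag an amount that sits many standard deviations from the
# customer's historical mean. Illustrative only; real systems use
# far richer features and models.
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Return True if `amount` is more than `threshold` standard
    deviations from the mean of the historical amounts."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# Synthetic history of a customer's typical transaction amounts
history = [42.0, 39.5, 45.0, 41.2, 40.8, 43.1, 38.9, 44.0]
print(is_anomalous(history, 43.5))   # a typical amount
print(is_anomalous(history, 980.0))  # far outside the baseline
```

The design choice matters: the baseline statistics are computed from past behavior only, so a single extreme transaction cannot mask itself by inflating the statistics it is tested against.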
Speed to market has its own risks and benefits as well. We can expect to see more unintended consequences of rushed products, and more harms inherited from black-box systems deployed at various scales, enabled by a lack of interpretability in the prediction and decision-making process. For example, Amazon scrapped an AI recruiting tool after it exhibited bias against women, and certain autonomous vehicles have failed to detect dark-skinned pedestrians.
The owners of these systems will continue to have immense power over the data they have collected and the actions they take after the data sets are processed by AI. Because businesses frequently focus on the immediate benefits, they often neglect foreseeable risks, such as algorithmic discrimination and model overfitting. Executives should have a clear view of the risks and benefits of any AI system, factoring in related data, algorithmic bias, training, modeling, design, and above all, interpretability—the extent to which a cause and an effect can be observed in a system.
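Overfitting, one of the foreseeable risks named above, can be demonstrated with a deliberately extreme toy: a "model" that memorizes its training examples scores perfectly on data it has seen and fails completely on held-out data. The data and model are entirely synthetic and exist only to show the train/test gap that executives should ask about.

```python
# Illustrative sketch of overfitting: a model that memorizes its
# training pairs exactly generalizes to nothing beyond them.
# All data here is synthetic.

def memorizing_model(train):
    lookup = dict(train)               # memorize (x, y) pairs verbatim
    return lambda x: lookup.get(x, 0)  # no rule learned, no generalization

train = [(1, 2), (2, 4), (3, 6)]       # underlying rule: y = 2x
test  = [(4, 8), (5, 10)]              # unseen inputs following the same rule

model = memorizing_model(train)
train_acc = sum(model(x) == y for x, y in train) / len(train)
test_acc  = sum(model(x) == y for x, y in test) / len(test)
print(train_acc, test_acc)  # perfect on training data, 0.0 on unseen data
```

A real overfit model fails less starkly, but the diagnostic is the same: always compare performance on data the system was trained on against data it has never seen.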
Executives who blindly trust their AI systems risk unwelcome surprises. Without proper “Explainable AI”—applying AI in a manner whose results can be understood by human experts—there is no recognizable value, and this is one of the major reasons why AI projects fail. Understanding how an AI system reaches its conclusions builds trust and helps inform ethical and moral choices.
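One simple form of explainability can be sketched for a linear scoring model, where each feature's contribution to a decision can be read off directly. The feature names and weights below are hypothetical, chosen only to illustrate the idea; nonlinear models require dedicated explainability tooling rather than this direct decomposition.

```python
# Illustrative sketch: for a linear scoring model, each feature's
# contribution is its weight times its value, so a human reviewer can
# see exactly what drove the decision. Feature names and weights are
# hypothetical.

def explain(weights, features):
    """Return the model's score and per-feature contributions,
    ranked by absolute impact (largest first)."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

weights   = {"income": 0.4, "debt_ratio": -0.9, "late_payments": -1.5}
applicant = {"income": 2.0, "debt_ratio": 1.5, "late_payments": 2.0}

score, ranked = explain(weights, applicant)
print(score)   # the overall score
print(ranked)  # which features pushed the decision, and by how much
```

For a loan applicant, such a breakdown turns “the model said no” into “late payments were the dominant negative factor,” which is the kind of answer a human expert can evaluate and contest.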
The ethical and moral implications of AI are important considerations for executives and should be a part of all operational systems. Building the public’s trust in AI requires putting in perspective the long-term view of potential risks, empowering the workforce to become trainers, explainers and sustainers of these AI systems, and aligning effective business goals with transparency and responsibility to the public.
The accumulated progress of the industry has given rise to a potential Fourth Wave of AI, a prelude to the realization of artificial general intelligence—systems that learn the same way humans do. By understanding stories, these general AI agents will have fully autonomous capabilities and will be able to adapt and learn in uncertain situations and environments.
Organizations that follow best-in-class AI ethical principles will be the leaders in their unique competitive sets, unlocking new value, creating business impact and driving differentiation beyond better cash flows and reduced labor costs. Those best-in-class systems will be created by interdisciplinary teams, combining yet-unimagined advances in AI with expertise from various domains of knowledge, including computer and information science, sociology, economics, anthropology and psychology. Such collaborative teams do not guarantee the most ethical AI system, but their collective intelligence certainly increases the odds of AI becoming a better and more human partner in our lives.