
Disruptions

An Economist’s View of the Digital World

Dr. Martin Prause, an economist and academic at WHU – Otto Beisheim School of Management, Germany, answers questions on how digital technologies will change the way we learn, interact, and work, and on their impact on society and the economy at large.

‘Sociability’ of Robots

Infosys: The future workplace will consist of humans and Artificial Intelligence (AI) agents/robots. Do you think ‘sociable’ robots, instilled with human behavioral characteristics influenced by cognitive biases, would allow for better human-robot interaction, or would that defeat the very purpose of building AI agents, namely, to be free of these human predispositions?

Dr Martin: The answer depends on the context one is interested in: whether one favours mimicking human thinking or creating a new way of reasoning, and whether the agent has a physical appearance or none at all.

The emergence of intelligent systems that mimic human behaviour and serve as human substitutes is as prominent as building fully rational intelligent agents. Take the famous example of Google’s Assistant making a call to a hairdresser using mumbling sounds to act more natural. Going one step further, Berkeley Dietvorst et al. (2016) highlight that people are likely to accept decisions from slightly imperfect algorithms if they feel they are in charge. Instead of providing a single optimal solution, intelligent systems should give a range of approximately optimal solutions to choose from, which would lead to better human-robot interaction.
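This idea can be sketched in a few lines: rather than returning the single optimum, the system surfaces every candidate within a tolerance of the best score, so that a human makes the final call. The function name, the tolerance, and the example data below are illustrative, not from any cited system.

```python
# Sketch: surface all near-optimal options instead of one "best" answer,
# leaving the final choice to the human decision-maker.

def near_optimal_options(candidates, score, tolerance=0.05):
    """Return candidates scoring within `tolerance` of the best score."""
    best = max(score(c) for c in candidates)
    return [c for c in candidates if score(c) >= best * (1 - tolerance)]

# Example: candidate schedules scored by estimated utility (made-up data).
schedules = {"A": 0.97, "B": 0.99, "C": 0.80, "D": 0.95}
options = near_optimal_options(schedules, schedules.get)
print(sorted(options))  # A, B, and D are all within 5% of the best score
```

The point of the design is not accuracy but acceptance: presenting a small set of comparable options keeps the user in charge, which is what Dietvorst et al. found drives willingness to rely on algorithms.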

I doubt that instilling cognitive biases will lead to “sociable robots”. A famous example of cognitive bias is groupthink, an error that occurs in teams when the desire for harmony outweighs rational decision-making. Why should algorithms deliberately replicate these flaws to be perceived as more human or natural? In my opinion, other techniques, as illustrated by Google and Dietvorst, are more promising for advancing the sociability of intelligent systems.

Maximizing the Value of Digital

Infosys: What are the questions that business leaders and government bodies should consider to maximize the value of digital initiatives for the economy and society at large?

Dr Martin: Ethical questions are the most important. If people do not accept advancements in technology, especially digital initiatives, the economic diffusion of those technologies will be hindered. How are decisions in AI-driven systems made, based on what information, how is personal data processed, stored, and secured, and who owns the data? These questions are especially important in light of the European General Data Protection Regulation.

Second, educational questions concerning the right use of digital media should be addressed, so that people gain agency over technology rather than remaining passive consumers. Then there are many technological questions regarding standards, accessibility, and costs. Computer Integrated Manufacturing, an early application of intelligent autonomous systems in the early 90s, failed due to a lack of understanding and skilled labor, resistance to change, and organizational incompatibility (McGaughey and Snyder 1994, p. 249).

Defining Digital Ethics

Infosys: How do companies who have successfully adopted digital technologies reconsider their corporate ethics and code of conduct to include the emerging discipline of digital ethics? Do you think it is time for an international body of ethical standards for the digital environment?

Dr Martin: Big technology companies, such as Amazon, Apple, Facebook, Google, IBM, and Microsoft are already addressing this problem with joint initiatives (www.partnershiponai.org) to share best practices on AI and consider the implications of AI on society. Similar ethical standards exist in other sciences, such as bio‐medicine or social sciences. In Germany, for example, an ethics commission, the “Deutsche Ethikrat”, explores the impact of Industry 4.0 on the workforce (Arbeitswelt 4.0) or the consequences of big data in a medical context.

I think an international body of ethical standards for the digital environment may fail for two reasons: specificity and culture. The “digital environment” is an integral part of developed countries, so such standards would have to be set internationally, across very different societies. I would expect resistance because of cultural differences: cultures view and approach digitalization very differently. For example, the data privacy laws in the U.S., China, and Europe are completely different, leading to different business models. Thus, even if all viewpoints were fairly incorporated, such standards would require a level of abstraction that renders them unenforceable.

Invest in Skills to Stay Relevant

Infosys: There are extremely divergent views about the impact of AI on humankind in general and on the workforce in particular. While organizations are focused on reskilling and upskilling, what else should industries do to ensure that their workforce remains relevant and gainfully employed?

Dr Martin: If we are talking about AI, we should be more specific: Weak AI or Strong AI? If we talk about Weak AI, meaning machines that automate special-purpose tasks, we are talking about an impact within the next ten years. If we are talking about Strong AI, meaning machines that possess creative capabilities to handle unfamiliar situations and accomplish tasks better and more cheaply than humans could, the time horizon stretches from 10 to 45 years.

Scholars surveyed at recent AI conferences estimated a 50% chance that strong AI will outperform humans in various aspects of work within the next 45 years (Grace et al., 2017). These forecasts look promising.

AI is currently dominated by recent advances in Machine Learning (ML). ML is an automated approach to pattern recognition and prediction, based on large training sets. Recent results in computer vision (self-driving cars), game playing (IBM – chess, DeepMind – Go, OpenAI – Dota 2), Natural Language Processing (personal assistants), and knowledge comprehension (Alibaba – Stanford reading test) are at the forefront of scientific breakthroughs in weak AI.
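As a toy illustration of ML as pattern recognition from a training set, the sketch below trains a nearest-centroid classifier, one of the simplest learning algorithms: it learns one prototype per class from labelled examples and predicts by proximity. The feature values and labels are made up for illustration; real systems use far richer models and data.

```python
# Minimal pattern-recognition sketch: learn per-class centroids from a
# labelled training set, then classify new points by nearest centroid.
from statistics import mean

def train(samples):
    """samples: list of (features, label) pairs. Returns class centroids."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    # Average each feature dimension within a class to get its prototype.
    return {label: tuple(map(mean, zip(*rows))) for label, rows in by_label.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to `features`."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Toy training set: two-dimensional feature vectors with class labels.
training_set = [((1.0, 1.2), "cat"), ((0.9, 1.0), "cat"),
                ((3.0, 3.1), "dog"), ((3.2, 2.9), "dog")]
model = train(training_set)
print(predict(model, (1.1, 1.1)))  # close to the "cat" prototype
```

The essential ML pattern is visible even at this scale: a statistical summary extracted from training data, then applied to predict labels for unseen inputs.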

The next big step will be to combine ML with machine reasoning (Parkes and Wellman, 2015). As of now, ML and AI have not paid off. We observe a “productivity paradox”: the new advances in AI have not achieved the expected productivity gains (Holtel, 2016). Whether this is due to false hopes, mismeasurement, restructuring lags, or concentrated distribution and rent dissipation, as argued by Brynjolfsson et al. (2017), is still being debated. Making predictions is therefore challenging. Investments in AI currently seem strategic in nature rather than aimed at short-term advantages.

Following Brynjolfsson and Mitchell (2017), multiple economic factors will be affected by AI in general and ML in particular. Specifically, ML will replace human labor in repetitive tasks where statistics reveal patterns in data (Ng, 2016). While ML will not easily replace jobs that consist of multiple interrelated or unstructured tasks, non-innovative jobs, for instance in retail, transportation, finance, and customer management, are at stake (Stiglitz, 2014). To keep the workforce relevant, organizations should invest in managerial and leadership skills to compete in the VUCA environment, of which AI is an uncertain part. Treating AI, for now, as a strategic investment and building competencies in this direction could be a viable move.

Creating New Learning Grounds

Infosys: Could you elaborate on how business simulation and experiential learning can provide value to enterprises by creating learning grounds that enhance strategic thinking, improve learning, and accelerate innovation in a digital environment?

Dr Martin: Data is increasingly created outside firm boundaries, from IoT devices to social networks, and is typically unstructured, incomplete, and fast-changing. These characteristics pose operational and managerial challenges to a firm. On the operational side, an enhanced approach is needed to deal with big data. On the managerial side, decision-making processes and management roles must be redefined. Decisions become data-driven, which might be difficult in firm cultures where structures of decision-making are well established. Domain experts change roles from providing answers to understanding the data and asking the right questions.

Business analytics approaches, which split into descriptive, predictive, and prescriptive stages, tackle those challenges. While descriptive and predictive analytics make extensive use of data mining and machine learning techniques to turn data into knowledge, the prescriptive stage relies more on simulations to ask the right questions concerning business value and competitive advantage. Simulations are a risk-free tool for engaging executives in scenario planning, strategy testing, and competition analysis. Robin Bell and Mark Loon (2015) highlight in their study that business simulations help executives think strategically and connect the dots across all business aspects. Simulations foster critical thinking by bridging the gap between theory and practice, and they enable a data-driven firm culture, which is the basis for reaping the benefits of digitalization. Business simulations also ease the use of scenario analysis to ask out-of-the-box questions and consider different business perspectives, helping avoid cognitive biases in strategic decision-making.
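A prescriptive-analytics simulation of the kind described above can be sketched as a small Monte Carlo experiment: estimate the expected profit of two hypothetical pricing strategies under demand uncertainty and compare them. The linear demand model, the prices, the unit cost, and the strategy names are all invented for illustration, not drawn from any cited study.

```python
# Hedged sketch of a Monte Carlo business simulation for scenario
# planning: compare pricing strategies when demand is uncertain.
import random

def simulate_profit(price, runs=10_000, seed=42):
    """Average profit over `runs` simulated market scenarios."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    total = 0.0
    for _ in range(runs):
        # Assumed demand model: falls linearly with price, plus noise.
        demand = max(0.0, rng.gauss(1000 - 40 * price, 100))
        total += (price - 5) * demand  # assumed unit cost of 5
    return total / runs

# Two hypothetical strategies, evaluated under the same uncertainty.
for strategy, price in [("premium", 18), ("volume", 14)]:
    print(f"{strategy}: expected profit ~ {simulate_profit(price):,.0f}")
```

Even this toy version shows the managerial value the text describes: the simulation does not hand back one answer, it lets executives vary assumptions (demand slope, cost, noise) and see how the strategy ranking responds before committing real resources.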