AI in management decision-making

The frontier between humans and computers in management is moving from operational to strategic decisions. A good synthesis is provided by Jarrahi (2018). This forms part of a wider discussion on the encroachment of AI on professions such as law, where the focus is partly on the ability of AI to help humans make sense of high and growing volumes of information (so-called "big data"). A particularly relevant article, based on empirical research, is that of Kolbjørnsrud et al. (2016), which focuses on the use of AI in redefining management. In management decision-making there is usually a trade-off between efficiency and fairness (equity); see, for example, the work of Perris and Labib (2004) on the prioritisation of patients on an organ transplant waiting list using fuzzy logic. Other applications of AI in management decision-making include the classification and incorporation of various stakeholders' views using the AI technique of fuzzy logic (Poplawska et al., 2015) and group decision-making using a machine learning method (Chakhar et al., 2016).
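
To make the flavour of such fuzzy-logic prioritisation concrete, the following is a minimal, hypothetical sketch in Python. It does not reproduce the Perris and Labib (2004) model: the membership functions, rules and patient inputs are all invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def priority(urgency, wait_years):
    """Toy fuzzy priority score in [0, 1] from clinical urgency (0-10)
    and time already spent on the waiting list (years)."""
    # Fuzzify the inputs (hypothetical membership functions).
    urgent_low = tri(urgency, -1, 0, 5)
    urgent_high = tri(urgency, 5, 10, 11)
    wait_short = tri(wait_years, -1, 0, 3)
    wait_long = tri(wait_years, 2, 5, 6)

    # Mamdani-style rules: rule strength = min of the antecedents,
    # paired with a singleton output priority level.
    rules = [
        (min(urgent_high, wait_long), 1.0),    # very high priority
        (min(urgent_high, wait_short), 0.75),  # high priority
        (min(urgent_low, wait_long), 0.5),     # medium priority
        (min(urgent_low, wait_short), 0.25),   # low priority
    ]

    # Defuzzify as a weighted average of the rule outputs.
    total = sum(strength for strength, _ in rules)
    return sum(s * out for s, out in rules) / total if total else 0.0

# Rank a toy waiting list: the score trades off efficiency (clinical
# urgency) against equity (time already waited).
patients = {"A": (9, 0.5), "B": (4, 4.5), "C": (7, 3.0)}
for name, (u, w) in sorted(patients.items(), key=lambda kv: -priority(*kv[1])):
    print(name, round(priority(u, w), 3))

The point of the sketch is that the rules encode the efficiency/equity trade-off explicitly, which is one reason fuzzy logic appeals in such settings.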

Claudé and Comb (2018) identify that today AI is seen primarily as a support to major business decisions rather than as a decision-maker, but attribute this to the fact that AI as currently constituted is relatively weak compared with the strong AI anticipated in the future. As computational capacity and speed increase, and as the data sets available to support decisions grow, the frontier of substitutability of AI for human decision-making shifts. Shrestha et al. (2019) suggest several possible decision-making structures, as follows:

· Full human to AI delegation, e.g. recommender systems, digital advertising, online fraud detection, dynamic pricing.

· Hybrid 1: AI to human sequential decision-making, e.g. idea evaluation, hiring.

· Hybrid 2: Human to AI sequential decision-making, e.g. sports analytics, health monitoring.

· Aggregated human-AI decision-making, e.g. top management teams, boards.

It is this last alternative that most closely reflects the topic of this paper.
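
A minimal sketch of what aggregated human-AI decision-making could look like in code follows, assuming a simple weighted-average aggregation; the scores, weights and options are invented, and Shrestha et al. (2019) do not prescribe any particular aggregation rule.

def aggregate_decision(human_scores, ai_score, ai_weight=0.3):
    """Combine several executives' ratings of a strategic option
    (each in [0, 1]) with an AI model's score, treated here as one
    weighted voice at the table."""
    human_avg = sum(human_scores) / len(human_scores)
    return (1 - ai_weight) * human_avg + ai_weight * ai_score

# A board of three executives plus an AI recommendation, for two options.
options = {
    "acquire": aggregate_decision([0.8, 0.6, 0.7], ai_score=0.4),
    "build": aggregate_decision([0.5, 0.6, 0.4], ai_score=0.9),
}
print(max(options, key=options.get), options)  # here the AI tips it to "build"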

Shrestha et al. (2019) also suggest that the appropriateness of these alternatives, and in particular whether full delegation to AI is likely to be appropriate, depends upon the following dimensions (a toy scoring sketch follows the list):

· Decision search space specificity – the more specific the required decisions, the more suitable AI is.

· Interpretability – how easy it is to understand the reasons for decisions/recommendations (this relates to whether the AI approach used is “black box” or can “explain” its decisions).

· Size of the alternative set – the larger the set, the more difficulty humans have in dealing with it, and the more suitable AI is.

· Decision-making speed required – the faster decisions must be made, the more suitable AI is.

· Replicability – the higher the commonality of data, decisions, etc., the more suitable AI is, given that AI depends partly on learning from other cases.
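
One toy way to operationalise these five dimensions is sketched below: each dimension is rated 0-1 for how strongly the decision context favours AI, and the ratings are averaged. The ratings and weighting scheme are invented; Shrestha et al. (2019) present the dimensions qualitatively, not as a scoring model.

DIMENSIONS = ("specificity", "interpretability", "alternative_set_size",
              "speed_required", "replicability")

def ai_suitability(context, weights=None):
    """Mean (optionally weighted) of the five dimension ratings, each
    expressing how strongly the context favours full delegation to AI."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total = sum(weights.values())
    return sum(context[d] * weights[d] for d in DIMENSIONS) / total

# Dynamic pricing scores high on every dimension; a strategic acquisition
# scores low on most, pointing away from full delegation to AI.
dynamic_pricing = dict(specificity=0.9, interpretability=0.7,
                       alternative_set_size=0.9, speed_required=1.0,
                       replicability=0.9)
acquisition = dict(specificity=0.2, interpretability=0.4,
                   alternative_set_size=0.3, speed_required=0.1,
                   replicability=0.1)
for name, ctx in [("dynamic pricing", dynamic_pricing), ("acquisition", acquisition)]:
    print(name, round(ai_suitability(ctx), 2))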

In strategic decisions, the time it can take to see whether a particular approach works, the lack of specificity, the possible diversity of interpretations, the lesser importance of speed, and the relative lack of replicability all mean that aggregated human-AI decision-making may be more appropriate, although Hybrid 1 may be used too.

In terms of evidence on the implementation of AI in strategy, little seems to be made public, though there is evidence of a battle between the two main relevant digital players, Amazon and Google, to extend their use of AI, drawing on the enormous data sets available to both companies (Condon, 2019; Kiron and Schrage, 2019).

 