How to implement the right AI technique for your digital transformation projects? PART 1

Felipe Sanchez · Jul 8

1. Introduction

Project Management (PM) produces data that are generated, captured and stored during the project planning, execution and closing processes.

These data provide many details about projects’ goals, actors, processes, outcomes, performances or failures, etc.

Lessons can be learned from this material.

In the best case, these data can be used to identify or verify best practices, to explain past projects’ failures or successes (diagnosis) or to predict their future performances (prognosis).

To model a causal relation between project management process maturity and projects’ operational performances, we can choose among several Artificial Intelligence (AI) and Machine Learning (ML) techniques combining knowledge representation, data analysis, probabilistic inference and learning (Krizhevsky, Sutskever, & Hinton, 2012).

AI and ML techniques fit with PM because large organizations have series of projects; data are thus constantly produced and updated, enabling causal hypothesis refutation or verification.

In the specific domain of PM, authors have already established statistical correlations between PM factors (process implementation, team management, etc.) and past projects’ performance (Ko & Cheng, 2007; Wong, Lai, & Lam, 2000).

More generally, we assume that AI and ML techniques are valuable solutions for PM; they facilitate the systematic exploitation of project data to gain a clearer view of the relevance or the strength of causal relations.

One of the main issues is to choose a good AI or ML technique, because this very active domain includes numerous and varied statistical methods that can achieve automatic decision-making, predictive modeling, data classification, and data clustering.

That explains why this article focuses on the choice of an AI and ML technique fitted to the specificities of PM.

2. Review of Artificial Intelligence techniques used in PM literature

Originally, AI techniques aim to “computerize” processes characterizing human cognition, knowledge, reasoning, etc.

The main challenges of AI are: identifying a type of process that can be computerized, then computerizing it and verifying its relevance or efficiency.

ML research is focused on a specific process, which is learning.

The main challenge is to give minimal knowledge and data to computers to train them.

Moreover, ML requires interactions between humans, who select data and verify machines’ results, and computers, with the idea of giving the latter greater autonomy in decision-making.

Since the 1990s, the synergy between large data sets, especially labeled data, and the growth of computing power using graphics processing units has allowed more powerful applications of these techniques to arise.

These technologies and reasoning logics made it possible to achieve several goals, for instance reducing word error rates in speech recognition, processing image recognition (Krizhevsky, Sutskever, & Hinton, 2012), beating a human champion at Go (Silver et al., 2016), and translating images into natural language (Karpathy & Fei-Fei, 2017).

In project management, some of the most used AI techniques are: bi-variate correlation and multiple regression tests (Mir & Pinnington, 2014), data mining (Ahiaga-Dagbui & Smith, 2014), artificial neural networks (Al-Tabtabai et al., 1997; Ko & Cheng, 2007; Wang & Gibson, 2010; Wang et al., 2012), reinforcement learning (Mao, Alizadeh, Menache, & Kandula, 2016; Tesauro, Jong, Das, & Bennani, 2006; Ye & Li, 2018), genetic algorithms and multi-criteria decision making (Baron, Rochet, & Esteve, 2006), Bayesian Networks (Qazi et al., 2016) and even hybrid methods combining Bayesian networks and evolutionary algorithms (Pitiot, Coudert, Geneste, & Baron, 2010).

This is why, in order to solve our research problem, we have explored three modeling techniques that may be familiar to PM researchers:

· Artificial Neural Networks (aka Deep Learning), because we now have proof of the accuracy of their results in several domains (Wang & Gibson, 2010),

· A type of ML called Reinforcement Learning (RL), because it has similarities with our conception of maturity,

· Bayesian Networks (BNs), because these dynamic tools combine experts’ knowledge and data, causal reasoning and correlation.

2.1. Deep Learning

Initially, we explore the use of artificial neural networks to predict performance based on project management maturity.

Neural networks are used to extract patterns that are too complex to be perceived by humans because of their remarkable ability to obtain trends from complicated data (Castillo & Melin, 1999).

They have a wide use in business applications (Wong et al., 2000), especially to evaluate risk management practice (Kampianakis & Oehmen, 2017).

In this section, we introduce them, and then we explain how this technique could be used in our research work.

Inspired by the human brain, the neurophysiologist Warren McCulloch and the logician Walter Pitts proposed a first neural network consisting of connected function nodes.

The network was trained by iteratively modifying the weights of the connections (McCulloch & Pitts, 1943).

Later, also inspired by neuroscience, Rosenblatt (Rosenblatt, 1958) developed the perceptron, a simple function for learning.

It mapped the output of each neuron to one or zero.

It takes as input a vector of criteria x and a weight vector w, and evaluates whether their scalar product exceeds a threshold u, that is f(x) = { 1 if w·x > u ; otherwise 0 }.
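This decision rule can be sketched in a few lines of Python (the weights, threshold and inputs below are arbitrary illustrative values, not drawn from any real project data):

```python
# Minimal perceptron sketch: f(x) = 1 if w.x > u, otherwise 0.
# Weights, threshold and inputs are invented for illustration.

def perceptron(x, w, u):
    """Return 1 if the scalar product w.x exceeds the threshold u, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > u else 0

# Example: two input criteria, equal weights, threshold 0.5
w = [0.6, 0.6]
u = 0.5
print(perceptron([1, 0], w, u))  # 0.6 > 0.5 -> 1
print(perceptron([0, 0], w, u))  # 0.0 > 0.5 -> 0
```

The single unit only draws one linear boundary, which is exactly the binary-classification limitation discussed next.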

In a single-layer neural network, this function was not very useful because binary classification is limited.

However, it was more useful in multi-layer networks, called multilayer perceptrons (MLP) (Rumelhart, Hinton, & Williams, 1986).

Developed in the 1980s, MLP includes backpropagation, i.e. algorithms assigning the weights for which the neural network has the lowest errors in its learning.

One of the most used backpropagation methods is Stochastic Gradient Descent (SGD), which minimizes the error rate by using the chain rule of partial derivatives.

SGD propagates all derivatives, or gradients, starting from the top output down to the bottom, then it directly computes the respective weight update of each link.
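As an illustration of SGD with backpropagation, a minimal one-hidden-layer network can be trained on toy data (the data, architecture and learning rate below are invented for the sketch; this is not the article's actual model):

```python
import numpy as np

# Toy one-hidden-layer network trained with SGD and backpropagation.
# Data, architecture and learning rate are invented for illustration.
rng = np.random.default_rng(0)

X = rng.random((20, 2))                      # 20 samples, 2 input criteria
y = X.sum(axis=1, keepdims=True) / 2.0       # toy target in [0, 1]

W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    for i in range(len(X)):                  # "stochastic": one sample at a time
        x, t = X[i:i + 1], y[i:i + 1]
        h = sigmoid(x @ W1 + b1)             # forward pass
        out = sigmoid(h @ W2 + b2)
        # Backward pass: the chain rule propagates gradients from the
        # output back toward the input weights.
        d_out = (out - t) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.ravel()
        W1 -= lr * x.T @ d_h;   b1 -= lr * d_h.ravel()

mse = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

Note how each gradient (`d_out`, then `d_h`) is computed from the one above it: that chain is the "top output to bottom" propagation described in the text.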

Since the implementation of MLP and SGD, there was no major progress in neural networks until 1997, when another approach, called Long Short-Term Memory (LSTM), was proposed by (Hochreiter & Schmidhuber, 1997).

LSTM builds on SGD training; it also introduces the concept of a recurrent network to learn long-range dependencies. LSTM thus learns faster than SGD alone and solved complex artificial long-time-lag tasks.

Neural networks are applied in many sciences and industrial sectors.

We then need to delimit the type of neural networks useful for PM.

Moreover, in our particular case, a neural network must have: (1) as input, the criteria characterizing the maturity of project management, and (2) as output, projects’ operational performance.

According to common practice, the use of several layers may be necessary for creating a causal model (see Figure 1).

However, even if neural networks have shown high accuracy, in PM, we cannot access the amount of data needed to build a sufficiently efficient network.

We do not have enough projects under these criteria to train the network.

ANNs’ need for data increases exponentially with the number of input criteria (Figure 2).

Despite their intrinsic limitations, ANNs are still used in some business applications (Wong et al., 2000), e.g. to evaluate risk management practice (Kampianakis & Oehmen, 2017).

ANNs are very interesting, or even fascinating.

We sum up, in Table 1, their strengths and weaknesses from PM’s point of view.

The second technique we would like to present is the Reinforcement Learning (RL).

2.2. Reinforcement Learning (RL)

Reinforcement learning algorithms were developed from Markov Decision Processes (MDPs), which are mathematical models for decision-making in stochastic situations where each event depends only on the state attained in the previous event (the Markov property).

In 1957, Bellman (Bellman, 1957) proposed a recursive formula optimizing the sum of all rewards along an MDP.

Solving this equation means finding the optimal policy.

However, Bellman’s equation could not be solved analytically because it involves the maximization of a function, which cannot be differentiated.
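For reference, Bellman's optimality equation for the value $V^{*}$ of a state $s$ (with actions $a$, a reward function $R$, transition probabilities $P$ and a discount factor $\gamma$) can be written as:

```latex
V^{*}(s) = \max_{a}\Big[\, R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \,\Big]
```

The $\max$ operator on the right-hand side is precisely the non-differentiable term that rules out an analytical, derivative-based solution.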

This problem left RL without any relevant advancement until 1995, when Watkins proposed the Q-learning algorithm (Watkins, 1995).

This algorithm helps to solve the exploration-exploitation dilemma: a computer agent spends an optimal amount of time exploring solutions and collecting rewards, enough not to be trapped in a local optimum, while maximizing the overall reward.

As an example, in Figure 3, the yellow square agent tries several paths to maximize the long-term accumulated rewards and to reach its goal: the green position with a +1 reward.

Under RL, this agent does not have direct instructions about which decisions to make or about the immediate consequences of its decisions. Nevertheless, each decision costs -0.04 points. The agent completes all steps (from its starting point to the green square); it then obtains its cumulative rewards at the end of its decision-making process.

Then it would simulate several paths until maximizing accumulated rewards (Sutton, 1988).
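The example above can be sketched with Watkins' Q-learning update on a simplified corridor world (the grid size, learning rate, discount and episode count are illustrative assumptions; only the -0.04 step cost and +1 goal reward come from the example):

```python
import random

# Illustrative Q-learning sketch: a tiny corridor world echoing the
# figure's setup, where each step costs -0.04 and the goal yields +1.
# Grid size, rates and episode count are arbitrary illustrative choices.
random.seed(0)

N = 6                 # states 0..5; state 5 is the goal
ACTIONS = (-1, +1)    # move left or right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the best-known action,
        # sometimes explore (the exploration-exploitation dilemma).
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else -0.04
        # Q-learning update: bootstrap on the best next action.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)
```

The agent receives no instructions, only the per-step cost and the final reward, yet the accumulated experience in Q is enough to recover the optimal path.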

Quite surprisingly, RL is based on a conception of learning that is close to that of process maturity.

Project Management Maturity Models (PMMMs) have been designed to classify and rank organizations according to the number and type of best practices implemented (or not).

Similarly, under RL, improvement is based on successful repetitions of something, which is similar to implementing best practices.

RL uses computer agents who learn how to make decisions directly from their interactions with a simulation environment.

RL aims to maximize a reward signal from experience; that is, to maximize a reward utility function (similar to project performance) by creating an optimal policy (similar to project management advice).

Under RL, a computer agent begins with no knowledge about how to deal with the external environment; as it gets more mature, it achieves its tasks in a more efficient manner, as in the perfection scale of maturity processes (Table 2).

Moreover, the PM database can be defined along criteria axes.

Figure 4 shows a simplification with two criteria. The idea is the following: as the agent (the system aiming at improving) moves in each direction, it gets rewards from the environment.

A project management assessment agent would explore states (that is, it satisfies criteria to move to the next level in each axis) and it will obtain reward points (gain in projects’ operational performance).

The RL model will produce a corresponding policy based on the steps of passage through the different levels while producing a better performance.

Using RL requires the creation of a simulation including several project management criteria and the definition of “reward” points as the agent fulfills these criteria.

Despite its value, in PM included, RL is not easy to implement.

Many parameters must be defined ex ante.

Unfortunately, in our research inquiry, we do not have enough data to create a robust RL scenario.

Table 3 displays the strengths and weaknesses of RL.

The combination of ANNs and RL is the basis of Deep Reinforcement Learning (DRL).

In that case, the computer agent in a state uses a deep neural network to learn a policy.

With this policy, the agent takes an action in the environment and gets rewards from the specific states it reaches.

Rewards feed the neural network and generate a better policy.
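The DRL loop just described (the network proposes a policy, the agent acts, rewards retrain the network) can be sketched schematically; for brevity, the "deep network" below is shrunk to a single linear layer and the environment is a trivial invented two-state toy:

```python
import numpy as np

# Schematic of the deep reinforcement learning loop, with the "deep
# network" reduced to one linear layer (a Q-table) and a trivial
# two-state toy environment. All values here are illustrative only.
rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
W = np.zeros((n_states, n_actions))      # the "network": Q(s, a) = W[s, a]
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    """Toy environment: action 1 in state 0 leads to state 1, reward +1."""
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

s = 0
for t in range(2000):
    # 1. The network proposes a policy (epsilon-greedy over its Q-values).
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(W[s]))
    # 2. The agent takes an action in the environment and observes a reward.
    s2, r = step(s, a)
    # 3. The reward feeds the network: temporal-difference update.
    target = r + gamma * np.max(W[s2])
    W[s, a] += alpha * (target - W[s, a])
    s = s2

print(int(np.argmax(W[0])))  # learned best action in state 0
```

In a real DRL system such as the Atari agent discussed below, the table `W` is replaced by a deep neural network trained on the same kind of temporal-difference targets.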

This was developed and applied in a paper named “Playing Atari with Deep Reinforcement Learning”, in which the authors taught a machine to play Atari games directly from pixels; after training, the machine produced excellent results (Mnih, Silver, & Riedmiller, 2013).

RL was also used, in 2016, to play Go, a very complex game, defeating the world champion (Silver et al., 2016).

After these works, RL attracted more attention from investors.

The investment has increased and the applications have been growing since then (Huddleston & Brown, 2018).

2.3. Bayesian Networks (BNs)

BNs are graph-based tools modeling experts’ knowledge and inferences; BNs thus combine data and knowledge, knowledge states and knowledge updating, correlation and causality.

BNs’ ability to explicitly manage uncertainty makes them suitable to a great amount of applications in a wide range of real-world problems including risk assessment (Fenton & Neil, 2013), bankruptcy prediction (Sun & Shenoy, 2007), product acceptability (Arbelaez Garces, Rakotondranaivo, & Bonjour, 2016), medical diagnosis (Constantinou, Fenton, Marsh, & Radlinski, 2016), construction design process diagnosis (Matthews & Philip, 2012), etc.

We elaborated a first presentation of BNs in this post.

Nevertheless, we can note that BNs fit problems in which (1) causes must be correlated with a consequence, e.g. project management maturity criteria, as input variables, and a specific projects’ operational performance, as output, (2) the amount of data is not huge and the data change over time (their level of uncertainty is high), and (3) one must combine data and experts’ knowledge. We can sum up the strengths and weaknesses of BNs in a table.

Once ANN, RL and BNs have been presented, we can then define the data-focused requirements of a satisfactory AI and ML technique to elaborate the causality between project management process maturity and projects’ operational performances.
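As a minimal sketch of the kind of reasoning a BN supports, consider a single maturity criterion influencing performance; all probability values below are invented for illustration and would, in a real model, be elicited from experts and data:

```python
# Minimal Bayesian-network sketch for the causal chain discussed here:
# Maturity -> Performance. All probabilities are invented illustrations.

P_mature = 0.3                          # prior P(maturity = high)
P_perf_given = {True: 0.8, False: 0.4}  # P(good performance | maturity)

# Inference by enumeration: update the belief in high maturity after
# observing a project with good operational performance (Bayes' rule).
p_good = P_mature * P_perf_given[True] + (1 - P_mature) * P_perf_given[False]
posterior = P_mature * P_perf_given[True] / p_good

print(f"P(good performance)           = {p_good:.3f}")
print(f"P(high maturity | good perf.) = {posterior:.3f}")
```

This combination of a prior (expert knowledge) with observed evidence (project data) is exactly the data-plus-knowledge updating that makes BNs attractive when data are scarce.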

In the next part, we will compare these techniques and select which is the best one when applying AI to project management.
