10/10/2017

Clear and Simple Algorithms for People Management


Admit it! Your actions have consequences. And the results of your interventions are not always positive. I argue that you will need to prioritize simple and transparent algorithms to manage people well.

In this post:

1) Why it's important to understand the actual causes of a problem
2) How to anticipate the consequences of the actions you plan to take in HR
3) Why you must clearly explain the reasons for an intervention
4) Black-box algorithms (like neural networks) versus glass-box algorithms (also called "white box"), obvious opposites and the heroes of this post

Responsible decision-making

What everyone knows: since organizations want to manage people, their most valuable asset, as well as possible, they demand that decisions aimed at optimization be supported by data.
But, like it or not, actions taken to manage people better are going to have consequences beyond what you want to improve.
Here are some examples:
1.   In a market where talent is increasingly scarce, many companies are trying to figure out how to build employee loyalty. And for them, the first step is figuring out how employees feel about the company. Why don't you take a look at their e-mail to get to know them better?
Hint: You could lose your employees' trust, something definitely worse than anything you would gain from that ethically dubious information. You would undermine motivation and could end up damaging performance and driving up turnover and absenteeism.
2.   Imagine that you're in charge of reducing turnover. A classification algorithm shows you who is most at risk of leaving the company. Among those people, you pick the ones with the best performance. Do you offer them a raise or some other retention incentive without giving them, the "lucky ones," or their colleagues any explanation?
Hint: It's better if you understand the actual causes and are able to explain them and anticipate consequences if you want to intervene with compensation. If not, you could seriously damage the work environment.
3.   You want to improve sales performance through negotiation training courses. Logically, you want to offer that training, which is quite expensive, to the salespeople who are most likely to take advantage of it and improve their sales. You can segment the team with a clustering algorithm to anticipate who will get the most out of the training (see the sketch after this example). Do you give the training only to those the algorithm predicts will respond best, without any explanation?
Hint: Be careful about the effect this could have on those who weren't chosen for the training!
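For concreteness, here is a minimal sketch of what that segmentation step might look like with scikit-learn's KMeans. The feature names, the synthetic data, and the three-cluster choice are all assumptions for illustration, not a recipe:

```python
# A minimal sketch of the clustering idea in example 3, using scikit-learn's KMeans.
# The feature names and the synthetic numbers are hypothetical, only for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features per salesperson: tenure (years), last-quarter sales,
# and score on a previous training assessment.
X = np.column_stack([
    rng.uniform(0, 15, 200),      # tenure
    rng.normal(120, 30, 200),     # quarterly sales
    rng.uniform(0, 100, 200),     # prior assessment score
])

# Standardize so no single feature dominates the distance calculation.
X_scaled = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

# Inspect each segment's average profile to reason about who might benefit most
# from the negotiation course; the model only segments, it does not explain.
for label in range(3):
    segment = X[kmeans.labels_ == label]
    print(f"Segment {label}: n={len(segment)}, mean profile={segment.mean(axis=0).round(1)}")
```

Note that the clusters only describe groups; they don't explain why one group should respond better to the training, which is exactly the gap the hint above points at.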
If you're not able to anticipate the full impact of your optimization actions, it's very probable that you'll end up like a sorcerer's apprentice, meddling in things you don't understand.
"Wisdom consists of the anticipation of consequences." (Norman Cousins)

How can you predict the consequences?


Greatly simplifying the strategy, there are three paths that can help in anticipating what could happen in a People Analytics intervention:
1) Talk to them! There are people in the organization who can tell you what the consequences of an intervention might be for different groups of people. So, talk to them! They'll tell you which areas they see affected and how, and they'll give you their opinion about what will happen. Now you have a new hypothesis. Try to substantiate this "I believe I know" with validated knowledge.
2) The past helps predict the future. Look at what happened in the past to anticipate what will happen in the future. Though it may not be the same type of intervention, look for data that reveals the consequences of a similar action and learn from it. What effects did a rise or reduction in salary or other compensation have on productivity, turnover, or absenteeism? You have data, right? Analyze it. What you learn from the past will help predict the future.
3) Your own evaluation. It hasn't been done before. Nobody knows anything. It's OK; it happens sometimes. You have your own criteria; don't rule them out. An initial opinion can be valuable. All of Bayesian statistics, which is gaining ground daily over classic frequentism, rests on assigning a probability that something will happen before having any relevant evidence. That is to say, the probability is interpreted as a reasonable expectation, or as a quantification of a personal belief. Don't worry: the results will mold this personal quantification and bring the model closer to reality, as the sketch below illustrates.
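To make the Bayesian point concrete, here is a minimal sketch of how an initial personal belief gets reshaped by evidence, using a standard Beta-Binomial update. The prior parameters and the observed counts are invented for illustration:

```python
# A minimal Beta-Binomial sketch: start from a personal prior belief about the
# probability that an intervention improves retention, then update it with evidence.
# The prior parameters and the observed counts below are invented for illustration.

def beta_update(prior_successes: float, prior_failures: float,
                observed_successes: int, observed_failures: int):
    """Return the posterior Beta parameters and the posterior mean."""
    a = prior_successes + observed_successes
    b = prior_failures + observed_failures
    return a, b, a / (a + b)

# Personal belief before any data: "I think there's roughly a 70% chance it works."
# Encoded (weakly) as Beta(7, 3).
prior_a, prior_b = 7, 3
print("Prior mean:", prior_a / (prior_a + prior_b))      # 0.70

# Hypothetical pilot: the intervention worked for 12 of 30 employees.
a, b, mean = beta_update(prior_a, prior_b, 12, 18)
print("Posterior mean:", round(mean, 2))                  # ~0.48: the data pulls the belief toward reality
```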

Good and bad algorithms

We have to make decisions about improving the organization. Several algorithms can help the process.
The predictive analysis process identifies patterns that represent the relationships in the data, through algorithms like regressions, clustering, neural networks, decision trees, Bayesian networks, etc.
The patterns that are discovered can be used to predict employee performance, behavior, attitude, or predisposition to different job challenges. They can also be used to predict employee progress over time or identify the best profile among different job candidates for a certain need.
However, I argue that understanding the causes of the problem and the consequences of your actions are much more important than the accuracy of the model that the algorithm generates.
To make a decision, you have to go beyond the effectiveness of an action measured only by its accuracy level. You have to calculate the costs and benefits of a correct or an incorrect decision. Often, the predictive analysis in HR ignores the consequences of the recommended action, especially in terms of collateral damage.
You have to understand that there are gray areas. Without that calculation, you can't tell whether you're walking into a lopsided bet from a cost-benefit perspective: if the action fails, it could generate a greater loss than the benefit it would provide if it succeeded.
For example, you create a turnover model with an 80% accuracy rate. Is that enough? It depends. If the cost of being wrong is small, 80% may be good enough. If the cost is high, the same model could be too risky a bet.
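As a sketch of that cost-benefit arithmetic, assume some hypothetical numbers: what a retention offer costs, what it costs to replace someone who leaves, and how often the 80%-accurate model is right. None of these figures come from the post; only the structure of the calculation matters:

```python
# A sketch of the cost-benefit arithmetic behind "is 80% accuracy enough?".
# Every number below (costs, base rate, retention effect) is a made-up assumption.

def turnover_intervention_value(n_staff, leaver_rate, sensitivity, specificity,
                                offer_cost, replacement_cost, offer_retains):
    leavers = n_staff * leaver_rate
    stayers = n_staff - leavers
    flagged = sensitivity * leavers + (1 - specificity) * stayers   # everyone we act on
    retained = sensitivity * leavers * offer_retains                # true leavers we keep
    cost_do_nothing = leavers * replacement_cost
    cost_intervene = flagged * offer_cost + (leavers - retained) * replacement_cost
    return cost_do_nothing - cost_intervene                         # > 0 means the action pays off

common = dict(n_staff=1000, leaver_rate=0.15, sensitivity=0.8, specificity=0.8,
              replacement_cost=20_000, offer_retains=0.5)

print(turnover_intervention_value(offer_cost=3_000, **common))   # +330,000: worth doing
print(turnover_intervention_value(offer_cost=5_000, **common))   # -250,000: same 80% model, now a loss
```

The same 80% model produces a gain or a loss depending only on the cost of being wrong, which is the point of the example.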

Then what do you do? Three conditions

1.   You need models you can understand. Besides making the improvement, you have to understand why you're doing it, what the actual causes of the problem are, and be able to justify the intervention. When you have models you can interpret, you can push for effective changes in policies, directing corrective actions where they're needed.
2.   You should anticipate the consequences of your actions. This simulation is only possible if you understand where you're going to intervene and can anticipate (simulate) what this intervention could cause in areas like performance, absenteeism, or turnover.
3.   When improvement actions aren't consistent, and you act piecemeal (only with certain individuals, and not equally with the whole staff), you'll also need your changes to seem reasonable to the people in the organization. You'll have to explain to your colleagues why you've made certain changes and how you've identified the areas in need of the intervention.
As a general practice, you generate more than one model with different algorithms. Some of these techniques are easier to interpret than others. Some algorithms could be more exact. But if they're "black boxes," they won't meet the three conditions spelled out above: identify causes, anticipate consequences, and explain the actions.
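As an illustration of that practice, here is a sketch that fits two models on the same synthetic data: a logistic regression, whose coefficients you can read and discuss, and a small neural network, which may score similarly but gives you nothing to explain. The data and the feature names are invented:

```python
# A sketch of comparing a glass-box model with a black-box one on the same data.
# The dataset is synthetic and the feature names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

features = ["salary_ratio", "tenure", "commute_time", "engagement_score"]  # hypothetical names

glass_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:", round(glass_box.score(X_test, y_test), 2))
print("Neural network accuracy:    ", round(black_box.score(X_test, y_test), 2))

# Only the glass-box model gives you something to explain and to act on:
for name, coef in zip(features, glass_box.coef_[0]):
    print(f"  {name}: {coef:+.2f}")
```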

Defending (timidly) the maligned stepwise multiple regression

The biases and deficiencies of stepwise multiple regression are well documented in the statistical literature. Its main drawbacks include bias in the parameter estimates and inconsistencies among the model selection algorithms.
However, a review of articles published in 2004 in three leading ecology and behaviour journals suggested that the technique is still widely used. Of sixty-five articles that applied a multiple regression method, 57% used a stepwise procedure ("Why Do We Still Use Stepwise Modelling in Ecology and Behaviour?").
Stepwise regression is a variant of multiple linear regression. Essentially, it fits a multiple regression several times, each time removing the weakest predictor, so that the variables that remain are the ones that best explain the outcome. The main requirements are that the residuals follow a normal distribution and that the independent variables aren't correlated with one another (no multicollinearity). This algorithm meets the three conditions stated above and also simplifies the model: it chooses the model that provides the most information with the fewest variables.
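To make the procedure concrete, here is a minimal backward-elimination sketch with statsmodels, dropping the weakest predictor (by p-value) one at a time until everything that remains is significant. The threshold, the synthetic data, and the column names are assumptions for illustration:

```python
# A minimal backward-stepwise sketch with statsmodels: refit the regression repeatedly,
# each time dropping the predictor with the highest p-value, until all survivors
# clear the chosen threshold. Data, column names, and the 0.05 threshold are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(X: pd.DataFrame, y: pd.Series, threshold: float = 0.05):
    predictors = list(X.columns)
    while predictors:
        model = sm.OLS(y, sm.add_constant(X[predictors])).fit()
        pvalues = model.pvalues.drop("const")          # ignore the intercept
        worst = pvalues.idxmax()
        if pvalues[worst] <= threshold:                # everything left is significant
            return model, predictors
        predictors.remove(worst)                       # drop the weakest variable and refit
    return None, []

# Hypothetical example: predict a performance score from four candidate variables,
# only two of which actually matter in this synthetic data.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(200, 4)),
                 columns=["training_hours", "tenure", "commute_time", "team_size"])
y = 2.0 * X["training_hours"] - 1.5 * X["tenure"] + rng.normal(scale=1.0, size=200)

model, kept = backward_stepwise(X, y)
print("Variables kept:", kept)                         # expected: training_hours, tenure
print(model.params.round(2))
```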
Recently, at Workforce Analytics 2017 in London, Kevin Dickens, an expert analyst from Experian, said in his presentation ("Using Business Analytics Capabilities to Predict People Risks") that the turnover models they had prepared, with great success, used exactly this type of stepwise regression to produce very simple models.
The maligned stepwise regression helped Experian earn 10.5 million pounds per year in the United Kingdom.

Two other simple algorithms (outside the HR turf)
Think about the Apgar test, which has saved the lives of thousands of newborn babies simply by combining five variables that evaluate their heart rate, breathing, reflexes, muscle tone, and color. Scoring each of these variables with 0, 1, or 2 points creates an index with an extraordinary capacity to objectify a newborn's risk.
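The Apgar logic is transparent enough to fit in a few lines. A sketch follows; the example scores are made up, and the cut-offs are only the commonly cited ones:

```python
# The Apgar test as code: five observations, each scored 0, 1, or 2, summed into an index.
# A minimal sketch; the scores here are a made-up example, and the cut-offs are the
# commonly cited ones (7-10 reassuring, 4-6 moderately low, 0-3 critically low).

def apgar_score(heart_rate: int, breathing: int, reflexes: int,
                muscle_tone: int, color: int) -> int:
    components = [heart_rate, breathing, reflexes, muscle_tone, color]
    assert all(value in (0, 1, 2) for value in components), "each component is 0, 1, or 2"
    return sum(components)

score = apgar_score(heart_rate=2, breathing=1, reflexes=2, muscle_tone=2, color=1)
print(score)   # 8: a reassuring result in this hypothetical example
```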

Finally, because it could be a life lesson, allow me to bring back Robyn Dawes's algorithm. He's the author of the article "The Robust Beauty of Improper Linear Models in Decision Making," which states:
"A couple's stability can be reliably predicted using this simple formula: Frequency of lovemaking minus frequency of quarrels."

Takeaways

1.   In HR, you need models you can understand. Besides making the improvement, you have to understand why you're doing it, what the actual causes of the problem are, and be able to justify the intervention.
2.   You should anticipate the consequences of your actions.
3.   When improvement actions aren't consistent, and you act piecemeal (only with certain individuals, and not equally with the whole staff), you'll also need your changes to seem reasonable to the people in the organization.
4.   As a general practice, you generate more than one model with different algorithms. Some of these techniques are easier to interpret than others. Some algorithms could be more exact. But if they're "black boxes," they won't meet the three conditions spelled out above: identify causes, anticipate consequences, and explain the actions.