To change your workplace, use behavioral data in the right way
Part 2: Behavioral science in the workplace
Thanks to new tools, we are now able to garner more data on employee behavior than we can even process. Where scientists once had to stand beside a worker to understand their actions, technology now enables us to digitally and objectively quantify collaboration patterns, employee focus, time spent on processes, and many other behaviors. This passive people analytics data, which I describe in Part 1 of this series, is ideal for understanding the actual actions employees take and mapping the outcomes those actions lead to.
Once problems are diagnosed, the natural next step is to pilot interventions that move behavior toward an ideal outcome. This is where behavioral science offers outstanding value. But we need to be careful in how we apply it.
Doing data right
Large datasets are guaranteed to show trends even when none exist: given enough data, random fluctuations are bound to produce patterns that look significant. The standard approach of analyzing first and asking questions later has caused real problems, with decisions made on insights from one dataset that don't necessarily transfer to any other.
Corporate experimentation is no different. If a series of tests is run on a large group of employees, areas of interest will emerge. Yet this does not mean that the dataset is identifying any actual findings. The “insight” could instead be caused purely by random fluctuations in the recorded data.
Theory, therefore, is the prerequisite for any research that is likely to yield valuable results, and behavioral science offers the theory to accompany all this data.
By first developing questions that a dataset can answer, analysts position themselves to evaluate results against an initial theory rather than pure speculation. Ideally, one would never look at trends in the data without a theoretical backing. When that is not possible, theory can at least guide the formation of a hypothesis after a trend has been discovered.
Another caution concerns how the data is handled. An analyst required to present findings from a period of data collection has a nearly unlimited number of ways to slice the data until a positive outcome appears. One well-known example of this is p-hacking. The p-value represents the probability of observing a trend at least as strong as the one found, assuming no real effect exists.
When a test is backed by a theory, you can assess whether the results are explained by that theory or by some other factor (a confounding variable or random fluctuation). Applied without a theory, a test offers little value: analysts, intentionally or not, can run a stream of tests and pick out whichever ones happen to have a p-value low enough to suggest a reliable result.
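To see why an unguided stream of tests is dangerous, consider a quick simulation (a hypothetical sketch, not from the article's own data): run many comparisons between two groups drawn from the same distribution, so that no real effect exists, and count how many come back "significant" anyway.

```python
import math
import random
import statistics

random.seed(42)

def two_sided_p(a, b):
    """Approximate two-sided p-value for a difference in means.

    Uses Welch's t statistic with a normal-tail approximation,
    which is reasonable for the large samples used below.
    """
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    t = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(t / math.sqrt(2))))

# Run 200 "experiments" where both groups come from the SAME distribution,
# so every significant result is a false positive by construction.
n_tests = 200
significant = 0
for _ in range(n_tests):
    group_a = [random.gauss(0, 1) for _ in range(100)]
    group_b = [random.gauss(0, 1) for _ in range(100)]
    if two_sided_p(group_a, group_b) < 0.05:
        significant += 1

# Roughly 5% of tests clear the 0.05 bar purely by chance.
print(f"{significant} of {n_tests} tests look 'significant' with no real effect")
```

An analyst who reports only the tests that cleared the bar would present pure noise as insight, which is exactly what a pre-registered theory guards against.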
To avoid this kind of data manipulation, whether it stems from pressure to publish in academic journals, pressure to produce insights for corporate stakeholders, or something else, we should stop demanding that every experiment yield an actionable insight. Removing that demand increases the confidence we can place in the results a test does produce.
Privacy is paramount
Companies must also get ahead of ethical concerns and challenges. First, privacy is paramount to creating an environment where analyses are respected and appreciated by employees. As the saying goes, "with great power comes great responsibility," and the power is great when dealing with employee data. HR systems are filled with Personally Identifiable Information (PII) along with much other data that most people prefer to keep private, such as salaries and children's names.
There are many strategies for creating a secure environment, including aggregating data to the team level, which decreases the likelihood that any individual is identifiable. If individual-level data is required, another option is to build systems that instantly replace PII with randomly generated tags. Ultimately, it is up to the company's security and analytics teams to implement a strategy that protects employee data while still realizing the value of behavioral insights.
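The tag-replacement strategy can be sketched in a few lines. This is a hypothetical illustration (the field names, tag format, and `pseudonymize` helper are assumptions, not a real system): each employee gets a stable random tag, PII fields are dropped, and the mapping back to real identities lives in a separate, access-controlled store.

```python
import uuid

# Mapping from real ID to random tag. In practice this would live in a
# separate, access-controlled store, apart from the analytics data.
pseudonym_map = {}

def pseudonymize(record, pii_fields=("name", "email")):
    """Return a copy of a record with PII removed and a stable random tag."""
    key = record["employee_id"]
    if key not in pseudonym_map:
        pseudonym_map[key] = "emp-" + uuid.uuid4().hex[:8]
    clean = {k: v for k, v in record.items() if k not in pii_fields}
    clean["employee_id"] = pseudonym_map[key]  # same tag on every pass
    return clean

record = {"employee_id": "E1001", "name": "Jane Doe",
          "email": "jane@example.com", "meetings_per_week": 14}
print(pseudonymize(record))
```

Because the tag is stable, analysts can still follow an individual's behavior over time without ever seeing who that individual is; only the custodians of the mapping can re-identify anyone.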
Nudge for good
Once the data is secured, privacy concerns are addressed, and the analysis is conducted, the change being encouraged must be attempted with equal strategy and sensitivity. Doing so requires deep consideration of what is being tested and why. People analytics analyzes the labor people perform, and in context, that labor is generally performed to provide for oneself and one's family. A slight nudge might seem harmless, but if it impacts one group disproportionately, it can result in significant negative life outcomes for that group.
When interventions are created, they must not only be thoughtfully designed; measurements must also be put in place so that adverse outcomes, if present, are identified as soon as they appear. When an intervention is shown to cause adverse impact on a specific group, it must be terminated immediately, or a plan to compensate those negatively affected must be created. Intervention designers must remember the nature of the environment: jobs are livelihoods, not something to experiment with casually.
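One simple measurement to put in place is a per-group outcome check. The sketch below is a hypothetical illustration (the function, data, and threshold are assumptions): it borrows the "four-fifths" heuristic from employment-selection guidance and flags any group whose positive-outcome rate falls below 80% of the best-performing group's rate.

```python
def adverse_impact_flags(outcomes_by_group, threshold=0.8):
    """Flag groups whose positive-outcome rate trails the best group.

    outcomes_by_group maps a group label to a list of 0/1 outcomes.
    Returns a dict of group -> True if the group's rate is below
    `threshold` times the best group's rate (potential adverse impact).
    """
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Illustrative post-intervention outcomes for two groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 75% positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% positive
}
print(adverse_impact_flags(outcomes))
```

A flag here is a trigger for investigation, not proof of harm; with small groups, a formal significance test should accompany the ratio check before terminating an intervention.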
Additionally, when harnessing the power of behavioral insights to influence outcomes, one should always “nudge for good.” Attempting to change the behaviors of workers can yield positive outcomes for both employees and employers; for instance, employee engagement and happiness positively correlate to productivity.
When planning nudges, the designer should always ask whether they would be comfortable seeing their idea on the front page of their favorite newspaper. Does it have to be perfect? No. Does it have to be defensible? Yes, and it will be if the purpose of the nudge is to improve the lives of the workforce.
Transparency is the key to preventing backlash from these types of issues. Employees are generally willing to give their data to their employers but only when it is shown to benefit them. Inequities can be more easily identified when many, rather than a few, have reviewed a plan to create change.
And behavioral interventions can be questioned and altered when those exposed to them have a say. Project leaders must solicit feedback during the planning phase and be prepared to answer questions during the launch; doing so smooths the rollout and helps prevent unwelcome surprises.
Combining the capabilities of people analytics data and the explanatory power of behavioral science can transform how we collaborate and shape the future of work. As data collection methods and the theories of behavioral science continue to advance, we are just beginning to realize the opportunities.
As we harness these tools, we should remember that this is not a perfect science; it requires experimentation, along with careful thought and planning about how to pursue value without losing workers' trust. These first articles have explained the strategic side of this combination. The next two will dive deeper into the tactical side, exploring the tools and frameworks that can generate this value.