Identifying At-Risk Officers: Can It Be Done in Corrections?
By Jack Harne
The following article has been reprinted from NIJ Journal No. 278, posted February 2017.
A collaboration between researchers and a corrections agency shows both the promise and the challenges of conducting research in the real world.
In 1981, the U.S. Commission on Civil Rights recommended that all police departments create early warning systems — also known as early intervention systems — to identify officers who are at risk or who may pose a risk to others. Although the main motivation for the recommendation was to protect the public, these systems also protect officers' well-being by addressing the underlying causes of misconduct (e.g., stress related to family or financial concerns).
Some departments have gone a step further, adopting a performance management information system (PMIS) that addresses potential issues with performance and conduct. A PMIS performs three critical functions. First, it identifies any officers who may be at risk for poor performance or misconduct. Second, it provides the opportunity for counseling, training, or other interventions to assist the officer. Finally, it monitors the officer's behavior and performance to gauge the success of the interventions. The earlier an at-risk officer is identified, the better the chance of a successful outcome.
A PMIS uses mathematical algorithms to identify at-risk officers. These algorithms consider a number of possible indicators of performance or conduct issues, such as absenteeism, complaints from the public, excessive use-of-force incident reports, and number of arrests or citations written. Research suggests that the factors monitored and the thresholds for flagging problem officers vary among departments.
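At its simplest, the kind of algorithm described above counts indicator events per officer and flags anyone whose counts exceed department-set thresholds. The sketch below illustrates that idea; the indicator names, counts, and thresholds are hypothetical, not the criteria of any actual agency.

```python
# Per-officer indicator counts over a review period (hypothetical data).
officers = {
    "officer_a": {"absences": 2, "complaints": 0, "use_of_force_reports": 1},
    "officer_b": {"absences": 9, "complaints": 4, "use_of_force_reports": 6},
}

# Departments set their own thresholds; these values are illustrative only.
thresholds = {"absences": 6, "complaints": 3, "use_of_force_reports": 5}

def flag_officer(indicators, thresholds, min_hits=2):
    """Flag an officer when at least `min_hits` indicators exceed thresholds.

    Returns (flagged, list_of_indicators_over_threshold).
    """
    hits = [name for name, count in indicators.items()
            if count > thresholds.get(name, float("inf"))]
    return len(hits) >= min_hits, hits

for name, indicators in officers.items():
    flagged, hits = flag_officer(indicators, thresholds)
    print(name, "FLAGGED" if flagged else "ok", hits)
```

Real systems weight and combine indicators in more sophisticated ways, which is one reason the factors and thresholds vary so much among departments.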
There are few evaluations of the effectiveness of PMISs, and the findings have been mixed. Some studies have found reductions in at least some outcomes. Others have found that effects could not be attributed to the implementation of a PMIS.
Law enforcement agencies use early intervention systems widely; corrections agencies have yet to do so. The RAND Corporation partnered with a sheriff's office in Florida to examine the application of a PMIS to a corrections agency environment. The sheriff's office comprised both law enforcement and corrections roles, allowing the researchers to compare the two and apply what law enforcement already knows about PMISs to corrections.
The first phase of the NIJ-supported project was to identify potential indicators for misconduct. Researchers compared officers who had been disciplined — including officers who had been terminated, demoted, sent "last chance" letters, suspended for five or more days, or suspended for less than five days but for a criminal offense or who had resigned while facing potential criminal charges — with matched officers who had not been disciplined.
The second phase was to design a PMIS that the agency could deploy in day-to-day operations.
The real-world constraints of process and data, however, complicated the research effort.
The researchers drew archived information from four electronic sources (including internal affairs records, command counseling forms, and training records) and one paper source (insurance claims records). These data sources were designed for management, not research, so the researchers often had to analyze and clean the data before they could use it. For example, they had to resolve what appeared to be contradictions between data sets but often turned out to be differences in how the data was recorded. The researchers also had to condense or clean data sets that contained superfluous categories or irrelevant data (e.g., in insurance claims records) and code narrative data (e.g., descriptions in internal affairs records) to facilitate analysis. These steps required significant additional time and resources, which extended the project well beyond its planned timeline.
The researchers' decision to limit their analysis to data that the agency had already collected was also important. In theory, this would make the PMIS more practical and reduce the costs of implementation and use.
However, using only available data led to an important constraint: The researchers did not search for indicators outside of the existing data sets that might be even better predictors of corrections officers' future behavior, such as use of discriminatory language toward inmates or how often officers drew their weapons. If the researchers had identified these kinds of indicators and included them in the PMIS, the agency, in turn, would have had to collect the new data on an ongoing basis, increasing implementation costs and barriers to use. This trade-off between near-term practicality and broader exploration is not unique to this project; it challenges all research performed in an operational context, and here it shaped the prototype PMIS.
From this comparison, the researchers identified a set of potential misconduct indicators for corrections staff.
The next step was to test the PMIS and determine how accurately it flagged the officers who had been disciplined (while not flagging officers in the comparison group). The researchers found that the model detected 67 percent of the disciplined officers in the sample. The PMIS also produced a 15 percent false positive rate (i.e., it flagged 15 percent of the comparison group officers, none of whom had been disciplined).
Researchers from the Johns Hopkins University Applied Physics Laboratory (APL) are now evaluating the prototype PMIS. APL hosts the NIJ-supported National Criminal Justice Technology Research, Test and Evaluation Center.
"We're focusing on seeing how predictive the model actually is with more recent data before we recommend how they [the sheriff's office that RAND collaborated with] might actually implement it," said Rebecca Rhodes, one of the APL researchers assigned to the evaluation.
The research team will first replicate RAND's findings using 2007-2013 data from the sheriff's office and then attempt to validate those findings with more recent data. They will determine whether the model makes accurate predictions while limiting the false positive rate. Any such system should avoid flagging officers who do not need intervention, said Rhodes.
"That's always an issue when you have a predictive model, trying to maximize your true positives, which are people who truly would benefit from intervention, and minimize the number of false positives, which would be people who don't really need any intervention or training," she said.
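The trade-off Rhodes describes can be seen in a toy example: lowering the threshold at which an officer is flagged catches more officers who would benefit from intervention (true positives) but also flags more who would not (false positives). The risk scores below are hypothetical, invented only to illustrate the effect.

```python
# Hypothetical model risk scores for two groups of officers.
disciplined_scores = [0.9, 0.8, 0.6, 0.4, 0.3]  # officers later disciplined
comparison_scores = [0.7, 0.5, 0.2, 0.2, 0.1]   # matched comparison officers

def rates(threshold):
    """Return (true positive rate, false positive rate) at a flagging threshold."""
    tp = sum(s >= threshold for s in disciplined_scores) / len(disciplined_scores)
    fp = sum(s >= threshold for s in comparison_scores) / len(comparison_scores)
    return tp, fp

# Sweeping the threshold shows the trade-off: more true positives
# generally come at the cost of more false positives.
for t in (0.7, 0.5, 0.3):
    tp, fp = rates(t)
    print(f"threshold {t}: true positive rate {tp:.0%}, false positive rate {fp:.0%}")
```

Choosing where to sit on this curve is a policy decision as much as a technical one, which is why the APL evaluation focuses on accuracy alongside the false positive rate.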
The APL team expects to complete their assessment by early 2017.
Gloria Izumi and Bonnie Mathews, eds., Who Is Guarding the Guardians? A Report on Police Practices (Washington, DC: U.S. Commission on Civil Rights, 1981), 81.
Jack Harne is a physical scientist in NIJ's Office of Science and Technology.