New Ways of Doing Business, New Tools...

Restructuring VA employee metrics

There has been much discussion in the press, and it is a lively topic whenever veterans get together, about how the VA defines success through employee metrics in the rating area. The VA's professed mission is: "The mission of the Disability Compensation program is to provide monthly payments to veterans in recognition of the effects of disabilities, diseases, or injuries incurred or aggravated during active military service, and to provide access to other VA benefits." If this mission is to be met, then all VA processes should be structured to support it, and areas of potential negative mission bias should be restructured to eliminate that bias.

 

One area of apparent bias that seems contradictory to the mission is employee (rater) metrics. Currently, employee success is gauged (as near as I can tell) strictly on the daily/weekly completion of cases. Whether a case is denied or approved does not matter to that metric; it is only concerned with removing CURRENT work from the rater's desk so the rater can move on to the next case. It is a QUANTITY metric and not necessarily a QUALITY metric. This metric might be minimally acceptable IF denying a case and approving a case required EQUAL expenditures of appropriate work effort by the rater. Negative bias against the veteran's case can enter the process, however, because the rater may have a tendency to issue a denial at the first sign of a potentially denying condition, early in the review, in order to make a "quick" decision and help "make their numbers", WITHOUT REVIEWING THE WHOLE CASE FAIRLY TO SEE IF A REASONABLE DOUBT CAN BE RESOLVED IN THE VETERAN'S FAVOR. Such "quick", and potentially inappropriate, decisions also increase the chance that the veteran will file a NOD/appeal of the denial, creating more future work that continually adds to the backlog. In my mind, this current employee metric contributes greatly to a never-ending spiral of greater and greater backlogs, no matter how many rater employees the VA adds to decide cases.

 

To combat this probably inappropriate metric, if the VA wants to continue using a similarly simple method to define rater employee success, it must restructure the metric to eliminate any potential bias. One way to do that would be to use a "point system" as the metric instead. Points would be assigned based upon the disposition of the case: a denial would earn fewer points than an approval. Different types of cases (to distinguish more complex cases) would carry different numbers of points, all related to the average difficulty of reaching a disposition, with each difficulty type having separate point values for a denial and an approval. The rater metric then becomes the number of points achieved at the end of the day or week, rather than just the number of cases removed from the rater's desk. Done correctly, this would go a long way toward removing any potential bias.
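As a rough illustration only, here is a minimal sketch of how such a point total might be computed. The point values, case types, and names below are purely hypothetical placeholders for discussion, not anything the VA has defined; the real values would come from the study described next.

```python
# Hypothetical sketch of the proposed point system. All point values,
# case types, and names are placeholders, not actual VA figures.
POINTS = {
    # (case complexity, disposition): points credited to the rater
    ("simple", "denial"): 2,
    ("simple", "approval"): 3,
    ("complex", "denial"): 4,
    ("complex", "approval"): 6,
}

def weekly_points(cases):
    """Sum the points a rater earns for a week's completed cases.

    `cases` is a list of (complexity, disposition) tuples, e.g.
    [("simple", "approval"), ("complex", "denial")].
    """
    return sum(POINTS[(complexity, disposition)]
               for complexity, disposition in cases)

# Example: two simple approvals and one complex denial = 3 + 3 + 4 = 10 points
print(weekly_points([("simple", "approval"),
                     ("simple", "approval"),
                     ("complex", "denial")]))
```

The key design point is that an approval is never "cheaper" than a denial of the same difficulty, so a rater gains nothing by cutting a review short to issue a quick denial.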

 

Determining the appropriate points for any particular type of case, as well as the point level that constitutes a daily or weekly goal for a particular rater (which could differ based upon rater experience level), would best be accomplished by having an outside agency research the issue (with appropriate rater employee input, of course). Once that outside agency provides its recommendations, I suggest a "pilot program" at one or two VAROs for a 6-12 month period to "prove the concept" prior to rolling it out to all VARO locations.

 

In addition to restructuring this QUANTITY metric, the VA should also consider supplementing it with an additional QUALITY metric that measures the appropriateness of the ratings adjudicated by the rater. Whereas the QUANTITY metric would be sufficient for establishing daily/weekly/other ongoing goals that assist workload completion and help lower the backlog, the QUALITY metric would be more important for determining whether any end-of-year "perks", such as bonuses, are due. Probably the easiest Quality metric to establish in the rater area would be to track rater actions that detracted from the quality of their decisions. Some things that could be used to do that (not an all-inclusive list) are:

1) A veteran filing a NOD/appeal is a Quality minus (denotes customer dissatisfaction)

2) A remand issued by the BVA for some rater action or inaction is a Quality minus (denotes the rater failed to apply the appropriate regulation/law to the case)

3) etc.

The above items are just examples; the VA would need to decide whether they are appropriate and would assist a Quality metric. A rough sketch of how such deductions might be tallied follows.
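As an illustration of the idea only, here is a minimal sketch of how those Quality "minus points" might be tallied against a rater over the year. The deduction sizes, event names, and starting score are hypothetical placeholders for discussion, not values the VA has defined.

```python
# Hypothetical Quality deductions; the event list and deduction sizes
# are placeholders the VA (or its outside agency) would set.
DEDUCTIONS = {
    "nod_appeal_filed": 1,  # veteran files a NOD/appeal (customer dissatisfaction)
    "bva_remand": 2,        # BVA remands for rater action/inaction
}

def quality_score(starting_score, events):
    """Subtract a minus point total for each quality event logged against a rater.

    `events` is a list of event names, e.g. ["nod_appeal_filed", "bva_remand"].
    """
    return starting_score - sum(DEDUCTIONS[event] for event in events)

# Example: a rater starts the year at 100 and has one appeal and one remand.
print(quality_score(100, ["nod_appeal_filed", "bva_remand"]))  # 97
```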

 

The VA, possibly using an outside agency for recommendations, would determine the lowest acceptable Quality score an individual rater must have in order to obtain the first bonus level (quality below that level means no bonus), AND there could be higher bonus levels that reflect higher Quality. Different Quality thresholds could be established based upon rater experience (i.e. time in job, position title, etc.). This determination would probably be made only once a year, BUT assigning the Quality "minus points" would be an ongoing process. Supervisor/manager bonuses could then be determined by the overall Quality shown by their respective teams AND by a QUANTITY/QUALITY metric related to their own actions in reducing the workload (that additional manager metric is not fully discussed here). This ties all bonuses at the VARO level and below to some level of Quality rather than just a level of Quantity (which can tend to introduce bias and loopholes through inappropriate "process adjustments").
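To make the bonus-level idea concrete, here is one more minimal sketch showing how an end-of-year Quality score could map to a bonus tier. The thresholds and tier names are invented examples only; the actual levels (and any adjustment for rater experience) would be set by the VA or its outside agency.

```python
# Hypothetical bonus tiers, listed highest threshold first.
BONUS_TIERS = [
    (95, "top bonus"),
    (90, "standard bonus"),
    (85, "first bonus level"),
]

def bonus_level(quality_score):
    """Return the highest bonus tier the score qualifies for, or None."""
    for threshold, tier in BONUS_TIERS:
        if quality_score >= threshold:
            return tier
    return None  # below the lowest acceptable Quality: no bonus

print(bonus_level(97))  # "top bonus"
print(bonus_level(84))  # None
```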

 

Both QUANTITY and QUALITY metrics are needed in order to really gauge rater employee success.

 

Implementing these two metrics as outlined above should lead to cost savings for the VA:

 

QUANTITY metric, as outlined: Encourages raters to spend the appropriate time on a case, which lessens the chance of veteran dissatisfaction that leads to NODs/appeals and consequently avoids creating a larger backlog. This is a cost-avoidance measure.

 

QUALITY metric, as outlined: Facilitates accuracy, which also lessens the chance of veteran dissatisfaction that can add to the backlog AND, if done correctly, provides savings by ensuring that only appropriate bonuses are paid. This is both a cost-avoidance AND a cost-savings measure.
