Fix the Gap: Bringing Benefits to Members with Unmet Needs
Written by Corinne Stroum
Archaeologists and art historians have opined that ancient Greek depictions of lions resulted from artists who had never seen a lion (Vermeule, 1972). Without having seen a lion, or been told how one looks, the artist would fall back on a familiar alternative: dogs and cats (Rambo, 1918). We often encounter an analogous situation when developing machine learning solutions for healthcare. Physicians and health plans have existing systems that perform well at separating high-risk patients from the rest of the population (cats versus dogs), but they want a method to identify the members their current system misses (the lions). These members who escape current identification are the ones we say “fall through the cracks.” At Advata, we built a solution to this problem: we help health plans identify high-needs members that existing systems may miss.
Scenario: High-Needs Members and Nursing Facility Level of Care
One group of high-needs members who can “fall through the cracks” is members eligible for home care. Nursing Facility Level of Care (NFLOC) is a formal level-of-care designation commonly used to determine whether a person is eligible for Medicaid-funded nursing care in the home setting. It also determines whether someone is eligible to receive other long-term services and supports from Medicaid at home.
The qualification process for home care is not the result of a simple diagnostic exam or lab result. It depends on a subjective analysis that rates the level of assistance a member needs with activities of daily living (ADL), medication management, and condition self-management (see Washington Administrative Code 388-106-0055). The goal is to assess the level of supervision necessary due to cognitive impairment or limitations to independence. This makes eligibility difficult to identify using structured medical records or claims data.
One strategy a health plan may employ to identify members for NFLOC is to begin with an annual health risk assessment (HRA). The health plan may then schedule a more detailed, in-home assessment with members whose initial responses trigger rules based on counts of ADL deficits and unmet needs. These assessments are lengthy and resource-intensive.
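To make the trigger-rule idea concrete, here is a minimal sketch of such a rule. The function name, thresholds, and the requirement that both counts exceed their cutoffs are illustrative assumptions, not the plan's actual criteria.

```python
# Hypothetical HRA trigger rule: flag a member for a detailed in-home
# assessment when counts from the initial HRA exceed simple cutoffs.
# The thresholds here are invented for illustration.

def needs_detailed_assessment(adl_deficits: int, unmet_needs: int,
                              adl_threshold: int = 2,
                              needs_threshold: int = 1) -> bool:
    """Return True when initial HRA responses trigger a follow-up assessment."""
    return adl_deficits >= adl_threshold and unmet_needs >= needs_threshold
```

A real plan's rules would be richer, but the pattern of hard thresholds on self-reported counts is what leaves room for members to be missed when they under-report.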
We worked with a health plan that hypothesized more members might be eligible for, and in need of, these benefits than it currently saw in its population. The team suspected the HRA-driven process left opportunities to identify additional members. For instance, a member may not be eligible during the annual assessment but might become eligible months later due to a change in health condition. That member might face unmet needs until their next annual assessment. Other members with true needs may under-report their ADL deficits or conditions requiring supervision. Predicting eligible members before HRAs would allow care managers to prioritize cases and begin the assessment process as soon as needs warrant additional care.
Our Pilot Story: Machine Learning to Identify NFLOC-Eligible Members
To help this health plan identify NFLOC-eligible members, we developed a machine learning classifier on Medicare claims data. The classifier predicted the likelihood that each member would become NFLOC-eligible in the next 12 months. We could predict the likely outcome of the health risk assessment, but we also saw an opportunity to find NFLOC-eligible members the current process may miss (back to the Greek lion anecdote). We had no clear definition of the members the current process would fail: those with unmet needs who were falling through the cracks. Our model was trained on past examples of NFLOC-eligible members identified only by existing processes. How could we find NFLOC-eligible members who weren't labeled in the data by the previous process (the lions)? To do this, we directed attention to members our classifier scored as likely NFLOC-eligible but with slightly lower probability. Our hypothesis was that many of these members resembled known NFLOC-eligible members yet were distinct from those scored with high probability, the members existing processes would catch.
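The band-selection idea can be sketched as follows. The cutoff values and member scores are invented for illustration; our production pipeline is more involved.

```python
# Split scored members into a high-probability band (members the existing
# process would likely catch) and a slightly-lower band (candidate "lions").
# The 0.8 and 0.5 cutoffs are illustrative, not our production values.

def candidate_bands(scores, high_cut=0.8, low_cut=0.5):
    """scores: mapping of member id -> predicted probability of NFLOC
    eligibility in the next 12 months."""
    high = {m for m, p in scores.items() if p >= high_cut}
    lower = {m for m, p in scores.items() if low_cut <= p < high_cut}
    return high, lower

high, lower = candidate_bands({"A": 0.91, "B": 0.62, "C": 0.30, "D": 0.85})
```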
By directing care managers to the members most likely to become NFLOC-eligible, we let them prioritize follow-ups on the strongest candidates among members with unmet needs, optimizing a resource-intensive process.
We tuned our model to balance the resource investment (time spent in member outreach) against a numeric representation of the member's benefit. In this case, the benefit is clear: the member can remain healthy and in their home.
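One way to frame that balancing act is as an expected-value calculation over a score threshold. The costs, benefits, and scores below are invented for illustration and are not the pilot's actual figures.

```python
# Expected net value of contacting every member at or above a score
# threshold: each contact costs a fixed amount and pays off in proportion
# to the member's predicted probability of eligibility. All numbers here
# are illustrative assumptions.

def expected_net_value(scores, threshold, cost_per_outreach, benefit_per_eligible):
    contacted = [p for p in scores if p >= threshold]
    return sum(p * benefit_per_eligible - cost_per_outreach for p in contacted)

scores = [0.9, 0.7, 0.4, 0.2, 0.1]
# A stricter threshold contacts fewer members but wastes fewer calls
# on unlikely candidates.
value_lenient = expected_net_value(scores, 0.1, cost_per_outreach=2.0,
                                   benefit_per_eligible=5.0)
value_strict = expected_net_value(scores, 0.5, cost_per_outreach=2.0,
                                  benefit_per_eligible=5.0)
```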
We knew our model would produce a high number of false positives: members a care manager might call who would ultimately prove ineligible for services. We determined we could break even in this pilot if we saw a 10% eligibility rate among the members we tagged for engagement. Within the first month, we demonstrated a 16% eligibility rate, a net benefit to the member population compared to the resource investment necessary for outreach. The team of care managers expanded to take on the additional load. For the next year, these members will receive services for which they qualified but had fallen through the cracks. The benefit is likely to continue in perpetuity, since members with NFLOC eligibility generally continue enrollment each year. It represents a significant quality-of-life improvement for these members.
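The break-even arithmetic follows from the assumed relative cost and benefit. In this sketch we assume the benefit of finding one eligible member is worth roughly ten outreach calls, which yields a 10% break-even rate like the one described above; the specific figures are hypothetical.

```python
# Break-even eligibility rate: the ratio of outreach cost to the benefit
# of finding one eligible member. With benefit assumed at 10x the cost of
# a call, a 10% hit rate breaks even; an observed 16% rate clears it.

def break_even_rate(cost_per_outreach, benefit_per_eligible):
    return cost_per_outreach / benefit_per_eligible

def net_benefit(n_contacted, eligibility_rate,
                cost_per_outreach, benefit_per_eligible):
    """Net value of a campaign at an observed eligibility rate."""
    return n_contacted * (eligibility_rate * benefit_per_eligible
                          - cost_per_outreach)

rate = break_even_rate(cost_per_outreach=1.0, benefit_per_eligible=10.0)
surplus = net_benefit(100, 0.16, cost_per_outreach=1.0,
                      benefit_per_eligible=10.0)
```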
We will begin monitoring the model's outputs to find NFLOC-eligible members that our model identified but existing processes did not. These are the lions we knew existed but hadn't seen. By collecting these mismatches, we propose to create a new training set of just those unique members our model alone found, and thus develop a model trained exclusively to detect these missed members.
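In set terms, that follow-up training set is the members found by the model alone and later confirmed eligible. A minimal sketch, with hypothetical member IDs:

```python
# Members the model flagged that existing processes missed, restricted to
# those later confirmed eligible: candidate positive labels for a model
# specialized to missed members. All sets below are hypothetical.

def mismatch_training_set(model_flagged, process_identified, confirmed_eligible):
    return (model_flagged - process_identified) & confirmed_eligible

lions = mismatch_training_set(model_flagged={"A", "B", "C"},
                              process_identified={"A"},
                              confirmed_eligible={"B", "C", "D"})
```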