Driven by its estimate that 400 million people in lower- and middle-income countries (LMICs) lack access to essential health services—in part due to a projected shortage of 13 million health workers by 2035—the WHO recently published guidelines on self-care.
These guidelines define self-care as “the ability of individuals, families and communities to promote health, prevent disease, maintain health, and to cope with illness and disability with or without the support of a healthcare provider.” Self-care is now seen as one important path towards achieving universal health coverage.
In the pursuit of universal health coverage (UHC), many direct-to-client solutions are in development, such as mobile apps and chatbots.
These chatbots are often in English, available on channels such as Facebook and WhatsApp, and feature a simple menu of topics covering a range of health-related information—from myths about COVID-19 to guidance on using a contraceptive method.
The goal of these chatbots is to provide information that users can apply to their own specific contexts. They also serve as tools users can return to, whether to report side effects they have experienced or to follow up on how long they used a new method. In short, these solutions are built to drive positive health behaviors.
Chatbots in particular have never been easier to make. A number of self-service platforms today allow chatbots to be rapidly built and launched. But just because a chatbot or mobile app can easily be built does not mean it can easily drive positive health behaviors.
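To make the menu-based format concrete, here is a minimal sketch of the kind of FAQ bot these platforms produce. The topics, their wording, and the handle_message entry point are all illustrative assumptions, not the API of any particular platform; a real deployment would sit behind a channel webhook on WhatsApp, Facebook, or Telegram.

```python
# A minimal sketch of a menu-driven FAQ chatbot of the kind described above.
# Topics and copy are placeholders, not real health guidance.

TOPICS = {
    "1": ("Myths about COVID-19",
          "Placeholder: debunk a common myth here."),
    "2": ("Using a contraceptive method",
          "Placeholder: explain how to start and use a method here."),
    "3": ("Reporting a side effect",
          "Placeholder: ask what the user experienced and when."),
}

MENU = "Reply with a number to learn more:\n" + "\n".join(
    f"{key}. {title}" for key, (title, _) in TOPICS.items()
)

def handle_message(text: str) -> str:
    """Return the bot's reply to a single incoming message."""
    choice = text.strip()
    if choice in TOPICS:
        title, body = TOPICS[choice]
        return f"{title}\n\n{body}\n\nReply MENU to see other topics."
    # Anything unrecognized (a greeting, "MENU", a typo) shows the menu.
    return MENU

if __name__ == "__main__":
    print(handle_message("hi"))   # -> numbered menu
    print(handle_message("2"))    # -> contraception topic
```

Notice how little logic is involved. The hard part is not the code but everything around it—which is the subject of the rest of this post.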
In this blog post, we break down some of the assumptions we might make in developing chatbots for UHC.
Chatbots and the Right to Health
In designing chatbots, I’m often reminded of a time before WhatsApp, around 2011, when I stood in front of a classroom of teenage girls at a high school in a small village in Lesotho, tasked with teaching a sex education class.
Other than the school, the only buildings in the village were a church and a small clinic. A sole nurse staffed the clinic a few days a week, where she would give out HIV medications and monitor the labor of pregnant women who had walked miles from even smaller villages. For my class, I was given a small textbook that talked about the dangers and symptoms of HIV and how to protect oneself to prevent the transmission of HIV.
As an Indian woman who had never received a formal sex ed class—Indian schools had no such concept at the time, and very few do even now—I felt distinctly ill-equipped to talk to my students about safe sex. Not only did I not have the training, I also lacked their context. What outcome could I expect by reciting the dangers of HIV to my students, most of whom had lost their own parents, siblings, or whole families to the disease? What good would it do to talk about using condoms when there were no condoms available at the clinic?
Despite our vastly different contexts, all of us girls and women in that classroom had one thing in common: We all belonged to lower-middle-income countries whose constitutions did not explicitly guarantee the right to health as a fundamental human right. Around the time each gained independence from British rule, both countries became signatories to the WHO Constitution of 1946, which envisaged “the highest attainable standard of health as a fundamental right of every human being.”
Yet in their own constitutions, similar to those of many lower- and middle-income countries, the right to health was not included. In India, for instance, it is not the legal obligation of the state to provide healthcare for its citizens.
More than half a century later, driven by the estimated 400 million people in LMICs who lack access to essential health services, the WHO’s self-care guidelines suggest that individuals who grow up without experience or expectation of the most basic health services should be able to prevent disease and maintain their health without the support of a healthcare provider.
But can a chatbot really help accomplish this?
Our Assumptions in Designing Chatbots
Could a student in that same school in Lesotho today use Facebook on her phone not just to share pictures but also to learn about birth control? When she grows older, could she use it to decide to switch birth control methods if she wants to get pregnant? When she becomes pregnant, could its reminders convince her to take folic acid supplements and go to antenatal appointments? Or if she becomes pregnant and doesn’t want to be, could it safely guide her through a self-abortion process?
To answer “yes” to any of these questions requires a number of assumptions:
- that she will know that this type of solution exists
- that once she finds it, it will be in a language she can easily understand
- that once she tries it, she will find it easy to use, even though she may never have seen or used an FAQ list before
- that her environment—both power dynamics within her household and infrastructural challenges outside of it—enables her to access the services or products she is recommended, such as antenatal appointments or birth control
- that she will want to use the solution again: that she will act on a recommendation, revisit the chatbot to report any side effects she encounters, and follow up to understand what she should do next
Would an individual turn to a phone application, and then keep returning to it, to receive a service that they may have never gotten consistently from a human?
Dimagi’s Findings on User Retention and the Way Forward
Evidence for the effectiveness of chatbots in enabling continuation of care is limited, but preliminary indications suggest they have the potential to deliver such care.
Dimagi’s first chatbot, called Poshan Didi and funded by the Bill & Melinda Gates Foundation, provided nutrition counseling to 100 women in Madhya Pradesh, India. Deployed on Telegram, the bot saw almost 50% of users return to talk to Poshan Didi for at least five distinct sessions.
When we deployed Nurse Nisa, a chatbot built by Dimagi and Ipas that guides users on contraception and self-abortion topics via WhatsApp in India, Kenya, and the DRC, 12% of users returned the week after they first used it, without receiving any reminders to do so. Of those, almost 50% came back the third week, and 17% kept returning after a month.
Both of these examples saw Ipas and Dimagi team members work closely with community members, onboarding them onto the chatbots. At scale, such personalized onboarding might be challenging and costly but necessary.
An alternative to such hands-on onboarding might be to invest in online advertising to capture the attention of our target audience and to develop engaging tactics that hold that attention from the moment they click. An FAQ or menu-based chatbot such as Nurse Nisa—in which users see a list of numbered topics and type a number to learn more about one—may bring 12% of users back after the first week, but different formats, such as stories or games, might result in even higher user retention, especially if presented at the very start.
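As a side note on how return rates like that 12% figure can be measured: given a log of user sessions, one can count the users who come back in the week after their first session. The sketch below uses made-up data and simple week boundaries as an illustration, not a description of how Dimagi’s analytics actually work.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical session log: (user_id, session start time) pairs.
sessions = [
    ("user_a", datetime(2021, 3, 1)), ("user_a", datetime(2021, 3, 9)),
    ("user_b", datetime(2021, 3, 2)),
    ("user_c", datetime(2021, 3, 3)), ("user_c", datetime(2021, 3, 25)),
]

def weekly_return_rate(sessions, week: int = 1) -> float:
    """Share of users with a session `week` weeks after their first one."""
    by_user = defaultdict(list)
    for user, ts in sessions:
        by_user[user].append(ts)
    returned = 0
    for times in by_user.values():
        first = min(times)
        window_start = first + timedelta(weeks=week)
        window_end = first + timedelta(weeks=week + 1)
        if any(window_start <= t < window_end for t in times):
            returned += 1
    return returned / len(by_user)

# With the sample data above: only user_a returns in the second week.
print(f"{weekly_return_rate(sessions):.0%} returned the following week")
```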
This fifth assumption—that users will return to use this kind of chatbot—will always be tied to the assumptions that precede it: that users know of the solution, that they can access it in their language of choice, that they find it easy and intuitive to use, and that their environment has the conditions necessary for them to utilize the solution to their advantage.
The challenge for designers, implementers, and researchers of such chatbots will be to develop solutions that proactively take into account these barriers to access. Many more girls and women have access to phones today than they did ten years ago—but many still do not.
If rates of phone ownership continue to increase, and if chatbots and other direct-to-client solutions are indeed necessary to expand universal health coverage, then they must make for more engaging and contextual interactions than a teacher reading from a textbook that no one thinks to open once class is over.
In the last two years, Dimagi has launched six conversational agents to enable behavior change in five countries. These agents focus on a variety of areas, such as personalized nutrition counseling for mothers in Madhya Pradesh, economic empowerment for women in Rwanda, building resilience among health workers in Madhya Pradesh, and COVID-19 infection prevention and control for health workers in Jharkhand.
Our partners in this effort include Ipas, USAID, the Bill & Melinda Gates Foundation, and the Johnson & Johnson Foundation.
Send an email to info@dimagi.com, and let’s figure out how chatbots can drive positive behaviors for your target audience.