Thomas Provan, strategist, and Kaylene Kau, senior UX designer and strategist, at Recipe Design, discuss how some of the latest developments in AI can benefit healthcare.
As the exponentially increasing power of computer systems allows for countless innovations in diagnosis and treatment, Artificial Intelligence (AI) will continue to revolutionise healthcare. There is, however, a barrier that AI must overcome to be truly effective, and oddly, it is a very human one – trust. AI, a faceless automaton composed of lines of code and presented to us on screens, has the potential to communicate with people empathetically, in a way that builds trust and understanding. But first, people need to trust AI enough to begin using and interacting with it.
Technology is pushing the boundaries of what we trust when it comes to our health information, particularly with mental health and symptom reporting. In 2015, a group of researchers at the Institute for Creative Technologies at the University of Southern California created “Ellie”, a pseudo-therapist AI. Ellie uses facial recognition software, eye tracking, speech analysis and a host of other technologies to build a deeper understanding of the patient. Presented as a reactively animated on-screen persona, Ellie asks participants questions and changes her body language depending on the answers given: she smiles when the participant smiles and leans in when the participant leans away, all to build trust. Yet, for all her human qualities, the artificiality of the experience removes the direct fear of judgement from the user. In trials screening US military veterans for PTSD, participants were found to be more likely to disclose mental health symptoms to Ellie than on trauma assessment forms. Though never used in isolation, this kind of empathetic smart healthcare can be invaluable: an artificially empathetic diagnostic tool that patients trust enough to share their sensitive information with.
Human beings aren’t the best at conveying information about our health. Medical professionals, despite years of training in bedside manner, can find it difficult to decipher how patients communicate their conditions and symptoms, as fear of judgement can prevent patients from being honest about their health. People can be wildly inaccurate when reporting their lifestyle, but technology can circumvent this problem by monitoring biometric and health data, whilst offering a ‘private’ way of recording symptoms.
This hidden opportunity for smart healthcare can be made or broken by how we interface with it. To be trusted, AI must first be deemed trustworthy. This can be achieved through thoughtful design, user education and consideration of how patients derive meaning from their healthcare experiences. Smart healthcare systems must identify and respect the many expectations, fears and requirements each user has in order to respond effectively with holistic solutions.
The trustworthiness of AI will hinge on many factors. One key tension is the balance between a patient’s fear of human judgement and their desire for human comfort. AI can ease patient reticence through anonymity: patients may feel more comfortable disclosing personal medical information to a ‘non-human’. However, replicating the established codes of human emotional warmth is decidedly more difficult. We can hypothesise that healthcare practitioners (HCPs) and AI healthcare systems will each have roles to play in the future of healthcare, playing to their respective strengths. For instance, AI healthcare systems may prove more effective in the early, triage stages of diagnosis and treatment, before patients have greater need to interact with a human medical professional.
When designing smart healthcare systems, considered design choices can create an experience that feels non-judgemental whilst conveying the cues of trustworthiness and medical expertise. Patients will require a combination of these factors to willingly offer their personal information and feel comfortable taking advice. These design strategies are diverse and varied, ranging from Ellie-like interfaces that ask reactive questions to suggestions based on information gathered about the user.
The system behind Recipe Design’s conceptual smart metered-dose inhaler (MDI) device – ADD•FLO – uses information about the user to provide meaningful suggestions for preventing asthma exacerbations. Smart connectivity provides the user with suggestions based on their behaviour or environmental conditions, for example, warning the user of potential allergens and asthma triggers through aggregated data analysis. Providing this pattern recognition, combined with useful, ‘everyday’ asthma reminders, builds empathy by making the user feel understood, whilst providing a sense of control and the ability to positively impact their health.
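As a thought experiment, the pattern recognition described above could be sketched as a simple rule that combines environmental signals with a user's recent inhaler use. Everything here – the field names, the thresholds and the `risk_warning` function – is a hypothetical illustration, not ADD•FLO's actual logic.

```python
# Hypothetical sketch: flag potential asthma triggers by combining
# environmental data with a user's recent reliever-inhaler usage.
# Thresholds and inputs are illustrative assumptions only.

def risk_warning(pollen_count, air_quality_index, uses_last_24h):
    """Return a user-facing warning string when aggregated signals
    suggest elevated risk of an asthma exacerbation, else None."""
    reasons = []
    if pollen_count >= 7:          # high pollen on an assumed 0-10 scale
        reasons.append("high pollen levels today")
    if air_quality_index >= 100:   # 'unhealthy for sensitive groups' band
        reasons.append("poor air quality in your area")
    if uses_last_24h >= 3:         # unusually frequent use (assumed baseline)
        reasons.append("you've used your inhaler more than usual")
    if not reasons:
        return None
    return "Take care: " + ", and ".join(reasons) + "."

print(risk_warning(8, 120, 1))   # two reasons combined into one warning
print(risk_warning(2, 40, 0))    # no elevated risk detected
```

The point of the sketch is the 'everyday reminder' framing: the system translates raw aggregated data into a short, plain-language nudge rather than a chart of readings.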
Additionally, one of the most important ways of communicating trustworthy advice is to show the AI’s “thought process.” Much of the distrust of technology comes from misunderstanding or fear of the misuse of data. If a system is designed to present the data it has collected in a clear, understandable way, patients may be far more willing to interact with and trust the smart healthcare system they are using.
An example of this is the symptom-tracking app ADA, which doesn’t hide its calculations. Rather than issuing advice from the mystery of its algorithms, it accompanies each suggestion with the reasoning behind it. Even when asking users to input information, ADA explains why the information is necessary and what it might be used for. This transparency helps to facilitate more meaningful healthcare interactions and builds trust with users by removing the fear of veiled judgement.
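The transparency principle described above can be reduced to one design rule: never return a bare recommendation; always pair it with the inputs that produced it. The sketch below illustrates that rule – the `Suggestion` structure, the symptom names and the triage rule are hypothetical, not ADA's real data model.

```python
# Hypothetical sketch of a 'suggestion with visible reasoning' pattern,
# in the spirit of apps that explain why advice is being given.
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    """A recommendation that always carries its own reasoning,
    so the user can see why it was made."""
    advice: str
    because: list = field(default_factory=list)

    def explain(self):
        # Render the advice together with the inputs that triggered it.
        return self.advice + " (Why: " + "; ".join(self.because) + ")"

def suggest_gp_visit(reported_symptoms):
    # Illustrative triage rule: persistent cough plus fever -> suggest a
    # GP visit, echoing the triggering inputs back as the reasoning.
    reasons = [s for s in ("persistent cough", "fever") if s in reported_symptoms]
    if len(reasons) == 2:
        return Suggestion(
            advice="Consider booking a GP appointment.",
            because=[f"you reported {r}" for r in reasons],
        )
    return None  # no suggestion, so nothing to explain

result = suggest_gp_visit({"persistent cough", "fever"})
print(result.explain())
```

Because the reasoning travels with the suggestion as a single object, no screen in the interface can display the advice without also being able to display the 'why'.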
AI, when thoughtfully designed for trust and compassion, can help unlock opportunities for smart healthcare. It can allow users to improve the ways they monitor and track their health, whilst supplementing professional diagnosis and treatment plans. AI healthcare systems have the potential to become the bridge between healthcare at home and in hospital, generating rich, reliable data sets and removing the need for in-person assessments.
Key to ensuring these systems are successful, both with users and commercially, is to conceptualise, design and develop with trust, care and compassion for patients in mind.
Originally published by MedTech