The Irish Medical Times - Dr Robot: Artificial Intelligence 4 Heart Failure

 



Let’s think bigger - Imagine if you could tailor your own healthcare journey. 


If a trailblazer like Florence Nightingale were around today, she would get Google to turn on her trusty lamp, and she’d start to explore how AI could personalize care for a patient. 


What if we could harness the power of AI to manage heart failure patients? The European Heart Network is on the case. 


I’m no Florence, but in September I was asked if I would like to attend a workshop at the European Cardiac Society HQ in Brussels to discuss the future of AI in Heart Failure from a patient perspective (AI4HF). The event was arranged by the European Heart Network (EHN), and the day would be about, with, and for patients. 


Not accustomed to any developments in medicine starting with the patient, I kept expecting the email to be an early Halloween prank - halfway through reading it a ghoul would appear on my screen with a shrill scream that would be echoed by my own, followed swiftly by the sound of my defibrillator going off. This, thankfully, did not happen. I managed to get to the end of the email without so much as the opening bars of the ‘Monster Mash’ playing. 


I immediately launched into AI mode and checked the weather in Brussels: 31 degrees Celsius. I looked out at the passive-aggressive drizzling rain in Dublin. “Yes, I am free on those dates,” I typed. 


The night before the workshop, a dinner in the heart of Brussels was hosted by the European Heart Network’s CEO, Birgit Beger. The dinner attendees were strikingly similar to a Eurovision line-up - Dutch, Flemish, German, Romanian, Spanish, Portuguese, Canadian (?!) and a healthy dose of Irish. There was a febrile feeling that at any moment someone might burst into ABBA or Johnny Logan’s ‘Hold Me Now’. 


What really struck me was the age group of the heart failure patients: we were all young (by young, I mean everyone had most of their own teeth). Like many people with a shared trauma, we bonded over stories of broken hearts and were all set for the next day’s workshop.


From the minute we arrived at HQ, everyone was incredibly warm, welcoming and kind. They got straight down to business (after coffee and pastries, of course - this is continental Europe for God’s sake, there’s always time for an espresso). 


They talked to us about the aim of the AI4HF project - to co-design a trustworthy AI solution for supporting heart failure care and management. The organizers struck a great balance between keeping us on topic and allowing the patients to express themselves in a safe, non-judgmental space. Patients talked about how AI could be positive, for example through a symptom-tracker app or tools for monitoring mental health. 


We moved on to the topic of what frustrated patients; we could have put the coffee pot back on and talked about that for a week. Two of the main frustrations that surfaced were patients feeling dismissed by doctors (even now, there is a doctor reading this thinking ‘That’s not true’) and concerns around the lack of integrated care. 


The project team was keen to ask us our thoughts on ethical issues around AI in heart failure. I was straight in with ‘bias’. This is a pet peeve of mine; I have written an article about it in which I ask ‘How Woke is Artificial Intelligence?’ The AI4HF project team felt assured that the big data they were working from was clean data. Another ethical point raised by a patient in the room was the accuracy of data: if a patient has to type in data, the potential for error is huge. Even the thought of moving to AI was flagged as intimidating for lots of patients (and clinicians). 


There were very strong feelings from the patients that they should be in charge of their own information. Giving family members access is ambiguous, as they cannot contextualize the information the way the patient themselves can, but where a family member is a caregiver it may be beneficial that they understand where the patient is in their journey. Issuing health predictions to a patient, particularly predictions of bad outcomes, was felt by some to be off-putting.  


What about a risk-management tool? Now we were getting down to business. An app for tracking the symptoms of heart failure was discussed. We wondered if it could all be captured by a wearable, to avoid having to input any data - everything seamlessly pulled as you went about your day. That’s the ideal situation, but unfortunately the technology doesn’t seem to be there yet (one day!). An app seems like the most feasible option, and versions of this do exist in the private sector. We decided it should look at the acute risks facing patients, including weight gain, fluid retention, palpitations, breathlessness, chest pain, blood pressure changes and so on. 


Another idea floated was a shared decision-making tool. The group envisioned a format with visual recommendations using colours, shapes and so on, not just reams of text. Decisions around medication, lifestyle and intervention could be presented in this visual interface to promote immediate ease of understanding. 


The conversation, like the ailment we all shared, was complex. Many ethical, legal and social issues will require further discussions, for example, data sharing. 


A poignant question was asked about reluctance within the medical community to onboard with AI. The long-play theory is that it will become embedded in medical training and, as time goes on, the physicians who use AI will replace those who don’t. What happens in the meantime feels like the painful part we call ‘change’. I recently changed my milk from dairy to nut. It was an ordeal, not so much for me but for my family, who are refusing to onboard; I am currently interviewing for replacements. 


A salient point was made that patients probably use LLMs (large language models) more than doctors do. I flagged what I had found in the literature: LLMs can be used to compare medications and highlight contraindications. A patient could even create a treatment plan using their own prompts, then ask the doctor to compare this AI plan to their current plan and see if any tweaks could be made to their care. There is evidence that chatbots make things up, but working in conjunction with a medical doctor, they could help save lives. 


There was a strong desire to create a ‘trustworthy’ AI4HF tool. One of the patients echoed the famous quote, “Trust takes years to build and seconds to break”. People were at different points on the trust spectrum when it came to doctors. In our small group, most of us were OK with healthcare providers having access to our information via an app. Some patients were keen for medical information to be available from the app if suddenly needed, for example if a patient collapses. 


One of the patients reduced the room to laughter when she declared her absolute, unwavering confidence in her own cardiologist: “If he asked me to go out and fight a wolf, I would!” 


This sparked a conversation in which one of the patients shared that she had her cardiologist's personal number and could call him directly if needed. She said, very poetically, that he knew every crease in her embattled heart. This story was in contrast to other patients, who wondered at times if their stretched-to-capacity cardiology team even knew their name!


We left the experience on a high, with new friendships forged, and we all felt a little less alone in this heart failure journey. The project is ongoing, with patient involvement an essential part of its progress. 


There is also a very good chance we will join forces and enter the next Eurovision as its first-ever AI contestant (the first official one, at least).  


