At the end of last year, our whole engineering team took part in a hack day to explore the possibilities of introducing Alexa and the Amazon Echo into care, as an exercise to better understand the growing needs of the sector. (Alexa is the virtual assistant developed by Amazon, first used in the Amazon Echo; it's the voice that answers when you ask your Echo a question.) It was a day that separated us from our daily work and allowed us to adopt a different mindset. Beyond the team-building benefits, the hack day helped us think outside the box and get creative with our thought processes.
We split into teams, came up with different use cases, and tried to implement as much as possible within the eight hours we had. We learned very quickly that some use cases within the care sector cannot yet offer a good enough user experience to be viable. Large data entry with complex subject matter proved unreliable, and although you can break it into smaller entries within a dialog, that becomes a rather long-winded process compared with entering the same data into a standard form.
Despite finding that some of our use cases were not as compatible with Alexa as we had first imagined, we decided there were still some we wanted to explore further. A couple of months after the hack day, we sat down and discussed which use cases to focus on.
Once we had decided, I took one of the original hack day demos and modified it to fit the selected use case. We then demonstrated it to parts of our team, who liked it so much that we decided to introduce some of our findings to the wider care sector at the Residential and Homecare Show in June. It proved to be a fun added extra that provoked deeper thought about the future of assistive tech in care.
However, fun aside, and as reluctant as I am to say this, Alexa (or any voice user interface) is problematic in many care environments. Not only can the technology be difficult to use, it can also confuse or distress care receivers. Even while developing some of our own Alexa skills, we found it hard to get the desired interaction to complete successfully and consistently.
The issue, though, is not just with Alexa but with voice user interfaces in general. It's not only that the human aspect of these interactions is completely removed; the interactions themselves can be frustrating and complex, and if you talk to most people who've used them, they will echo (pun intended) similar sentiments about accuracy, reliability and security. The problem almost certainly lies in the underlying technology, for example Natural Language Processing and the Intent, Entity and Dialog model, which voice user interfaces and text interfaces like chatbots both rely on.
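To see why that model is fragile, here is a deliberately simplified sketch of intent and entity (slot) resolution. This is not Amazon's actual implementation — real assistants use trained NLU models rather than regular expressions, and the skill name and phrasings below are hypothetical — but the shape of the problem is the same: every utterance must map onto one intent and its slots, and anything off-script falls through.

```python
import re

# A hypothetical "record medication" skill with one intent and one slot.
# Real interaction models enumerate sample utterances in much the same way.
INTENTS = {
    "RecordMedicationIntent": [
        re.compile(r"^(?:i )?(?:have )?taken my (?P<medication>\w+)$"),
        re.compile(r"^record that i took (?:my )?(?P<medication>\w+)$"),
    ],
}

def resolve(utterance: str):
    """Return (intent, slots) for a recognised utterance, else (None, {})."""
    text = utterance.lower().strip()
    for intent, patterns in INTENTS.items():
        for pattern in patterns:
            match = pattern.match(text)
            if match:
                return intent, match.groupdict()
    return None, {}

# A phrasing the skill designer anticipated resolves cleanly...
print(resolve("I have taken my paracetamol"))
# ...but a perfectly natural paraphrase matches no intent at all.
print(resolve("yes I already swallowed the paracetamol earlier"))
```

The gap between the anticipated phrasings and the ways people actually speak is exactly where the frustration creeps in, and it is magnified with users who cannot easily adapt their speech to the machine.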
Our findings from evaluating Alexa and the skills we developed have been echoed (here's that pun again!) and further supported by an article published in the Telegraph: 'Alexa's robotic voice leaving dementia patients "deeply distressed", social care report finds'.
The article is based on findings from Doteveryone's report 'Better Care in the Age of Automation'. Doteveryone is a technology think tank dedicated to researching how technology is changing society. The main concern identified in the report was that Alexa devices left dementia sufferers 'deeply distressed by an unfamiliar robotic voice reminding them to take medication'.
This is not to say that Alexa is a lost cause; there are still various capacities in which it can be used within the sector. It just requires more thought, imagination and time to ensure that the concerns expressed in Doteveryone's report, and in our own findings, are addressed. In light of the evidence, it's safe to say that Alexa and similar alternatives have great potential but need more careful implementation within the context of care. Concerns around privacy, the lack of human touch, uncertainty about whether it actually saves time, and the risk of causing distress to patients are all valid.
Regardless of the findings around Alexa, our team still enjoys discovering new possibilities and breaking the mould when it comes to creating new avenues for tech in care. If you’re going to be at the Care Show next week, 9th – 10th October at NEC Birmingham, stop by stand H35 if you want to learn more, or if you’re simply curious to have a taste of the future! And don’t worry, there won’t be any more puns.
By Grant Benbow, everyLIFE Full Stack Developer