A woman wears a device which looks like a watch but measures her heart rate and glucose levels. Data it collects is automatically uploaded and used to make a report and graph that can be analysed by her GP. The device can also contact emergency responders if the information it collects indicates the woman needs help.
A man sets off for his regular morning run wearing his fitness tracker. He feels great, heart pounding, lungs filled with fresh air. However, when he returns home, he sees a window’s been forced open and his place has been ransacked. What are the chances, he thinks, that a burglar chose that 40-minute opportunity to strike? Well, depending on which privacy settings he has enabled on his tracker, the odds might be higher than he realises.
Even if not all this wearable technology is here yet, it’s just around the corner. In 2015, Google filed a patent application to develop contact lenses which can detect diabetic blood sugar levels and report on them without the use of a needle. Some industry experts predict this type of technology could net $100 billion annually within ten years. In New Zealand we’re seeing a steep increase in both the availability of and demand for “wearables”, a term which covers fitness trackers, smart-watches, and even lightweight fabrics that collect data.
However, clinicians, health agencies and patients should be aware that wearable technology often collects more information than people realise. Anyone with a Facebook account has probably glanced at its terms and conditions before signing up, but many of these devices collect and synthesise amounts of data that make Facebook look mild by comparison.
Rule 3 of the Health Information Privacy Code says that agencies marketing these devices must inform individuals their devices collect this type of information – but it’s easy to click past endless pages of terms and conditions. Many of the functions are pre-enabled on the devices themselves. This means, if consumers want to maintain control over the information a device collects – and how available that information becomes – they have to interpret the risks associated with each function and, in many cases, manually disable the functions they don’t like.
Privacy risks from wearables won’t necessarily be obvious. As well as risks, there are benefits to most of the functions enabled on wearables. The man with the fitness tracker loves the features which record his morning runs – he uses the information to inspire him to increase his activity and achieve his fitness goals. His tracker monitors his speed, heart-rate, and distance. The green dot on the real-time map of his route gives him immediate feedback during each run. What he doesn’t realise is that when he syncs his tracker with other devices (laptop, smartphone), the information is also being uploaded to a site where other runners can compare their speeds and routes with each other. In his case, one of those other users had paid attention to his regular morning runs, looked up his address, and used the opportunity to relieve him of his personal possessions.
Interpreting a risk – and addressing it – is not a simple case of scrolling through a list and opting out of features you don’t like the look of. Syncing a device may occur automatically when you connect to your Wi-Fi, and this is often necessary for software to be useful. Mitigating this risk is not generally a one-step process. Changing the settings on the device might prevent the information from being directly uploaded to a cloud service, but there are also options which could help prevent misuse while still allowing the user to compare his activity with people he trusts.
You can expect many of these devices to offer benefits that can be life-changing for patients, making identification of risks a matter of weighing up the pros and cons. For the woman monitoring her glucose levels, her watch-like device records potentially life-saving information in a way that doesn’t interfere with her lifestyle.
But what if the device was given to her as a signing incentive for a health insurance policy? In the US, some insurers are offering discounts to policy holders who use wearables given to them by the insurer. The information collected allows the companies to offer further discounts, based on behavioural factors indicating reduced health risks. It also lets the insurer analyse more information about specific claims and, doubtless, in some cases, to reject claims.
Again, the onus falls on individuals to understand what they’re signing up for, which can be more complicated than it might appear at first glance. There may be other agencies interested in the data, so it’s important to carefully read the terms and conditions of these devices, the apps they use, and the sites that support the information. Imagine if ACC could prove that an individual wasn’t doing what they said they were at the time they suffered an accident. There may even be clauses in agreements which state that disabling functions on the devices is a breach of contract.
What can health agencies do to prepare for the projected increase in the volume of information, and for future needs around access to and storage of that data? The time to consider this sea-change is now, not later.
Preparing a privacy impact assessment is an excellent way to anticipate changes you need to adopt, and as the technology evolves, you can consult this assessment and adapt your policies and procedures. Our office has produced guidance on issues such as cloud computing and network security.
Accept that this technology is a reality and start considering the effect on your practices as early as possible. Familiarise yourself with wearable technology and, if you use it yourself, take a vigilant look at the terms and conditions, and at what features are enabled on your device. Consider what you would do if a patient came to you with a bunch of data from their wearable monitor – could you interpret it? Can you trust it? And if the answer to either of those questions is no, can you afford to ignore it?