They Know When You Are Jogging, They Know When…
You may have never heard of a “heat map” until recently, but chances are you have now. News accounts are all over a finding — by a 20-year-old Australian student on summer break — that such a map by fitness tracker Strava showed locations of users all over the world. Including, in particular, U.S. soldiers and possibly covert operatives in combat areas.
But let’s not blame Strava. This is a way bigger problem.
According to The New York Times, Nathan Ruser wasn’t looking to expose locations of our military. He isn’t even a Strava user. He studies international security, and follows the war in Syria closely.
His father had made an offhand remark that Strava heat maps — which show the activity and locations of users of its fitness apps — show “where rich white people are.”
So, as he told The Washington Post, “I wondered, does it show U.S. soldiers?” (U.S. soldiers wouldn’t be my first thought of “rich white people,” but you have to give the kid credit anyway.) He immediately zoomed in on Syria. “It sort of lit up like a Christmas tree.”
Once he realized what he had found, he and others began to grasp its extent. It isn’t just Syria, it isn’t just the U.S. military (which has encouraged the use of devices like Fitbits), and it isn’t just American citizens. Strava claims some 27 million users, and they are all over the world.
The heat map doesn’t care if you are on a classified base, working at the Pentagon, or maybe just playing hooky from school or work. The maps aren’t produced in real-time, thank goodness, but the data that generates them is, and even if produced after the fact they could be used to establish locations or patterns that we might not want known.
The military says they are “in the process of implementing refined guidance on privacy settings for wireless technologies and applications,” and Strava says it is “reviewing features that were originally designed for athlete motivation and inspiration to ensure they cannot be compromised by people with bad intent.”
OK, good. But, as developer Steven Loughran told Wired: “The underlying problem is that the devices we wear, carry and drive are now continually reporting information about where and how they are used ‘somewhere.’”
Maybe you didn’t join Strava. Maybe you don’t even own a Fitbit. Chances are you do have a smartphone, tablet, or computer, so you are at risk. But what if you have a pacemaker or other Internet-of-Things (IoT) device implanted?
That is the dilemma posed by Neta Alexander in The Atlantic. She unexpectedly had a pacemaker implanted, and has only gradually come to realize some of the concerns it raises. She quotes Lior Jankelson, a physician at NYU’s cardiac center: “there are at least tens of thousands of Americans with cloud-connected devices that could be monitored from afar.”
That’s good, right? You want your cardiologist to know how your pacemaker is doing. You probably even are OK with the device manufacturer tracking it, in hopes it will help them monitor defects. This is the power of the cloud, and the hope of Big Data.
The trouble is, they’re not the only ones who might be able to monitor it. For example, hackers who now use ransomware to hold hospital records hostage might find IoT devices even more inviting.
Even more frustrating, Ms. Alexander found that gaining access to the data her device was generating was problematic:
I was told I would have to sign a release form and wait for its approval before the data could be sent to me (via postal mail, no less). The process might take several weeks, and I would have no way of knowing whether the data delivered would be partial or complete…gadgets inside one’s body are gradually shifting control of personal information from users to corporations.
She goes on to complain: “The potential threats posed by hackers are distressing, but so is the notion that my pulse has been monetized.”
As has often been observed, when it comes to services that collect our data, we’re not the customer; we’re the product.
We know this is a problem. To some extent it is our fault, by not really paying attention to the data policies of services we use, and by not demanding more protections for our data. We could each do more.
Still, Zeynep Tufekci argues that:
Data privacy is more like air quality or safe drinking water, a public good that cannot be effectively regulated by trusting in the wisdom of millions of individual choices. A more collective response is needed.
The European Union is trying to address the problem with the General Data Protection Regulation, due to take effect May 25. It imposes restrictions on what data can be collected, stored, and used by tech companies, which are scrambling to figure out how to comply.
The regulations won’t solve all the privacy problems, but they go beyond anything the U.S. is doing. Shame on us.
Look, things are going to get worse before they get better. We’re still at the early stages of the data economy. We’re still learning what can be done with Big Data: how to gather it, how to analyze it, and how to use it to specifically target us. The appetite for our data from companies will only increase, and the capabilities to collect, transmit, and use that data will only expand.
This is the future, and it may mean things we have a hard time accepting. Peter Singer, a security guru, put it bluntly in another Wired article: “Both militaries and the public need to come to grips with the fact that the era of secrets is arguably over.”
I start with this: data about me should be mine. If I allow someone else to collect it, I should get clear benefits from it, and should know how else it is to be used by that someone. I should have control over with whom else it is shared. And if it is monetized, I should share in that.
We may be past that. We may, indeed, be past the era of secrets, and into a world where all of our data flows constantly. Data is out of the privacy barn, and it isn’t coming back in. All we can do is try our best to corral it.
Follow Kim on Medium and on Twitter (@kimbbellard)!