Protecting Privacy in a Post-Privacy Society
While news of the NSA’s PRISM program and high-profile consumer data breaches has brought this issue into the public forum, privacy has been quietly dying by degrees for quite some time — enough so that announcements of a “post-privacy society” are now met with hails of an “Age of Transparency.”
I don’t want to talk about government surveillance and criminal markets, though.
I want to talk about privacy and user experience (UX).
Specifically, the tangled tryst that Libraries, Privacy, and UX are locked into.
It goes like this: Libraries have an ethical commitment to protect patron privacy. This is built from, among other things, the premise that free intellectual inquiry cannot occur under conditions of surveillance.
User Experience has a deep commitment to the needs and desires of the user, whoever they may be, in the pursuit of making their lives easier/more productive/more delightful/etc.
Seems good so far.
But wait — as web analytics and big-data tools progress, UX is finding that mining actual user behavior data (as well as personal data) can lead to insights that allow for better, more useful, more delightful designs.
In order to do that, though, some methods — well, they can get a little frisky with Privacy.
And Libraries are not having that.
Personally, I value privacy. I believe in limits to what corporations should know about us, not to mention governments.
But I’m no longer sure how much we live out this value.
It’s not so much in how people talk, as in how we act.
Every time we allow a new app access to our [X] account information (just your birthday/email/contact list/photos/DNA), for the convenience of single sign-on. Every time we ask our Echo/Google Home/other friendly-listening-robot to turn up the music/turn down the lights. Every time we use Google. Or Gmail. Or the Internet.
It doesn’t seem that people are unaware we are trading away pieces of our privacy — we may acknowledge feeling uneasy about it, and certainly no one wants any of these services to be hacked — but the convenience, the ease, and the personalization are often lures too powerful to resist.
I see this in my own behavior — I block permissions for certain apps, and I don’t own a voice assistant device, but I also find it difficult not to use that fingerprint scanner on my phone (who wants to type in a PIN?).
So, with so many people seemingly slouching towards transparency, where does this leave libraries as champions of privacy?
And to what degree does our commitment to privacy impede our ability to develop the kinds of services (easy, useful, personalized) that users increasingly expect?
And when our users expect services to act in a certain way (built on the backs of their personal data), how will they react to library services being…different?
Can we achieve improved services via context-driven design, or data that isn’t tied to particular identities? Can we look at aggregate-level behavioral data instead, to identify key needs and challenges — while still maintaining a non-surveillance environment? Can we build tools that explicitly ask permission for relevant data and clearly signal how that data will be applied (and let users mess around with that to see what changes — sort of like adding/dropping filters)? Can we impose user-controlled expiration dates on all this data?
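To make the aggregate-data idea concrete: here is a minimal, hypothetical Python sketch (the class and method names are my own invention, not an existing library) of usage logging that records which features are used without ever storing who used them, and that expires raw records after a retention window.

```python
from collections import Counter
from datetime import datetime, timedelta


class AggregateUsageLog:
    """Hypothetical sketch: feature-usage analytics with no patron identities
    and a built-in expiration date on the raw event records."""

    def __init__(self, retention_days=30):
        self.retention = timedelta(days=retention_days)
        self._events = []  # (timestamp, feature_name) pairs -- no user IDs at all

    def record(self, feature, when=None):
        """Note that *some* patron used a feature; identity is never captured."""
        self._events.append((when or datetime.utcnow(), feature))

    def expire(self, now=None):
        """Drop events older than the retention window (the 'expiration date')."""
        now = now or datetime.utcnow()
        self._events = [(t, f) for t, f in self._events
                        if now - t < self.retention]

    def top_features(self, n=3):
        """Aggregate-level view: which services see the most use overall."""
        return Counter(f for _, f in self._events).most_common(n)
```

The design choice is simply that identity never enters the data model, so there is nothing to subpoena, leak, or de-anonymize — at the cost of losing per-user personalization.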
Maybe a privacy-protection niche will emerge, as a backlash against all of this relentless data-gathering. Maybe there will be an explicit demand for services and devices that purposefully do not collect/monitor/sell your data. (Er, maybe not as much of a demand yet as Silent Circle had hoped). Maybe some turn of events will increase the perceived value — or essential nature — of privacy in society’s eyes.
There’s some evidence for this, in growing concerns about privacy among the public — along with privacy-protection measures being adopted by younger users. (The idea that youth are unconcerned with digital privacy has largely been debunked). But there’s still a “privacy paradox” — in short, people’s purported views about privacy not aligning with their actual behavior.
While research has progressed on this, the highly personal and contextualized nature of privacy makes it challenging to investigate. Many answers related to privacy attitudes and practices could be summed up as, “it depends,” as each individual conducts a risk/benefit analysis of the situation. There’s even research suggesting that in some cases, people are more comfortable sharing their data with remote entities if it helps them shield their activity from the people around them (think of reading Fifty Shades of Grey as an e-book).
However, a recent Pew study details a troubling sense of hopelessness and inevitability regarding privacy as we know it (or knew it). There’s a swimming-against-the-tide sense that projects like the Privacy Paradox are picking up on, hoping to empower users to take back their data.
I may be pessimistic, but I worry about the forces of our habits, our biases, and our assumptions in all this.
Privacy is a fluid concept, with new lines being drawn for acceptable and unacceptable access, according to context. Broadly, I’d like to see a stronger “opt-in” standard and more transparent data-use practices, which would give users the greater control of informed consent in their decision-making. Not to mention privacy policies that are actually readable and user-friendly. We need to push back, somehow.
Libraries have a role here — as champions, educators, and models — demanding better privacy practices from our vendors, educating our users about the dangers and about strategies to take back control, and modeling better practices ourselves (opt-in, transparency of data use, etc.).
In this changing milieu, we all need to find a new balance, even as librarians continue to advocate for the fundamental things that privacy supports — autonomy, free inquiry, and equity.