I'm Matthew Setter. I'm a security researcher, privacy advocate, software engineer, and tech writer, who loves teaching people all that I know.
Can You Still Trust Facebook With Your Online Privacy and Data?
In light of the recent Cambridge Analytica / Facebook scandal, Mark Zuckerberg testified before the US Congress. Did you tune in to hear what he said about what they knew? Were you keen to know more about how they might be planning to protect your privacy in the future?
I was certainly keen to hear what Mark Zuckerberg had to say before the U.S. Congress. With over 2 billion users, Facebook is the largest social media platform in the world; the actions that they take are important, and have wide-ranging consequences.
I have to be honest though, while I wasn't surprised by what I heard, I still came away feeling somewhat disappointed. And as I listened to the testimony, I became a little concerned.
Mark hardly gave a straight answer to any of the questions, and his few conclusive responses were lost, I felt, in many half answers, and broad, somewhat innocuous statements.
However, Let's Put It In Greater Context
To be fair to Mark — despite his power and influence within Facebook, the tech community, and the wider business world — the rules are completely different when you're facing the US Congress. If the rules are anything like they are in Australia, you don't have much power.
I'm not entirely familiar with the legal situation in the United States. However, some colleagues advised me, in a recent post on my Facebook page, that he does have to be quite careful about what he says, as it may be used against him far into the future.
The politicians, on the other hand, have quite a lot of liberty to make accusations and assertions, even ones that are specious at best. Compounding that problem, I noted several others. The first is that a number of them didn't seem all that well apprised of how Facebook or social media platforms work.
The next is that some didn't come across as all that tech savvy to begin with. And the third is that each only had about four to five minutes to ask questions. That's barely enough time to form a proper question, while also providing sufficient background context, along with additional meaning if required.
So the entire environment seems imperfect all round. Regardless, that was the state of play. Given all of these, and no doubt many more factors, it's entirely understandable that Mark Zuckerberg would be careful about what he divulged at the hearing.
Add to that the amount of heat that's on the company at the moment, and I'd suggest it's only logical to expect that this outing was more about saving the company's reputation and limiting the fallout than anything else.
When you put all of that together, what we heard seemed to be only a set of well-rehearsed talking points, ones designed to create the impression that the company:
- Is genuinely sorry for what happened and for the mistakes that they made
- Will do all that they can to do better in the future, ideally preventing it from happening again.
And let's be honest, most companies would likely operate similarly, if they were in the same position. So, similar to the banking royal commission in Australia, it doesn't make sense to either pre-empt anything or to attempt to rush change prematurely.
As I've never worked at Facebook, nor at the upper levels of the company, nor do I know anyone who does or has, there's no way I can know anything about the discussions and goings-on there regarding the platform, their attitude, and their data protection and privacy policies.
Despite all this, I've been feeling uneasy about what I heard. Yet feelings are notoriously unreliable things, so I decided to do a little digging, to see what else I could find out about Facebook and how it works. Hopefully, by understanding the company in a broader context, I could answer some of the lingering questions that I had.
In this post, I want to share some of that research with you and discuss several questions that I have. I'd love to hear your thoughts about them in the comments below.
Why Didn't They Notify 87 Million Users That Their Personal Data Was Breached?
According to Politico, Facebook first learned about the breach of up to 87 million users' data around 2015. In addition to that, Facebook made a conscious decision not to inform users about the breach.
Zuckerberg also said he didn't remember when Facebook made the decision not to inform users that their data had been breached and that he didn't remember being involved in the conversation where the decision was made.
What's more, when the scandal publicly broke, instead of providing a public update, they said nothing for a further week. Now let's consider another point. According to Politico, when Facebook learned of the breach, they approached Cambridge Analytica and asked them to remove the breached data.
What they did not do was audit Cambridge Analytica to ensure that the information was actually removed. Apparently, they accepted Cambridge Analytica's word that it was addressed.
Now let's consider Mark's response when asked about the situation. During his testimony to the US Congress, he said:
"We didn't take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I'm sorry. I started Facebook, I run it, and I'm responsible for what happens here. It will take some time to work through all of the changes we need to make, but I'm committed to getting it right."
Sounds good, doesn't it? But words aren't the same thing as actions. Stop for a moment and consider all of these points individually, and then collectively. Consider how you'd feel — and what you'd do — if your bank, credit reporting agency, or insurance company had such a large proportion of their accounts breached and said nothing.
How would you feel if they knew about it but made a conscious decision not to inform anyone, and then, when it all came to light, went quiet for another week? What would that do to your perception of them?
The information that banks, credit reporting agencies, and insurance companies, among others, hold is different from what Facebook holds. But it is still private data, entrusted by users to Facebook. So while the comparison may appear ill-suited, there's a measure of comparison that can be afforded.
I don't want to single out Facebook's handling of the breach, nor their delay in acknowledging it, as being unique. Among the high-profile cases of recent years, we only have to look at Yahoo!
While they publicly acknowledged in 2016 that up to 1 billion accounts had been compromised, that breach dated back as far as 2012 or 2013.
We later found out that not only had up to 1 billion accounts been breached but that all Yahoo! accounts had. On top of that, we could look at a variety of other companies, including Ashley Madison, AdultFriendFinder, Telegram, and BitFinex.
Then there's the case of the Commonwealth Bank of Australia (CBA), Australia's biggest bank, losing the records of almost 20 million accounts. According to ZDNet:
In a statement, CBA said the data included customer names, addresses, account numbers and 16 years of transaction information used to print customer account statements (dating from 2000 to early 2016). CBA said it informed Australia's Privacy Commissioner when it became aware of the breach in May 2016, but "a decision was made not to alert customers". ...While CBA continues to monitor accounts and reassures customers that the tapes were "most likely disposed of," the concerning fact remains -- we may never know.
From these two examples, you can see that Facebook's hardly unique here. I'd like to think that, with the relationship they've worked to build up over the years, they'd have handled the situation better, but they're not alone.
Why Fight Regulation Designed to Improve Users' Privacy?
According to The Guardian, Facebook recently shifted responsibility for more than 1.5 billion users (all Facebook users outside of the US and Canada) to their Californian offices.
That's significant, because the new EU privacy regulation framework, the GDPR, is set to come into effect on May 25.
So they've moved more than 1.5 billion users from a jurisdiction where privacy regulations are set to become much stronger, to one where they're weaker? If you were really committed to user privacy and "getting it right", why make that move?
Let's consider some potential reasons.
- Given the scale and reach of their platform (over 2 billion users around the world), are they not ready yet — despite having since 2014 to begin preparing for it?
- Are they not ready for fines of up to 4% of global turnover (or around $1.6bn), if they're found to be in violation of the GDPR?
- Is the cost of moving the users less than the cost of one or several violations?
- Is it easier and less costly to comply with one regulatory framework, rather than several simultaneously?
- Is it easier to consolidate all their users under one regulatory framework until the GDPR is in effect, better understood, and tested? Until then, consolidating and waiting to see is likely a sane choice.
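On the question of fines, the arithmetic behind the figure cited above is simple. Here's a quick back-of-the-envelope sketch; the turnover figure is my assumption, based on Facebook's reported 2017 revenue of roughly $40.6bn:

```python
# Back-of-the-envelope GDPR fine exposure.
# Turnover figure is an assumption: Facebook's reported 2017 revenue (~$40.6bn).
GLOBAL_ANNUAL_TURNOVER = 40.6e9  # USD

def max_gdpr_fine(turnover: float, rate: float = 0.04) -> float:
    """The GDPR allows fines of up to 4% of global annual turnover
    (or EUR 20m, whichever is greater) for the most serious violations."""
    return turnover * rate

fine = max_gdpr_fine(GLOBAL_ANNUAL_TURNOVER)
print(f"Maximum fine: ${fine / 1e9:.2f}bn")  # in the region of the $1.6bn cited above
```

Even as a rough sketch, it shows why a single violation at that scale could dwarf the cost of relocating users to a friendlier jurisdiction.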
Any one, or several, of these reasons would be rational, logical, and pragmatic. So while the timing could justifiably be seen as conspicuous, it needn't be.
However, according to an April 19, 2018, report in Fortune, the move may have more to do with Facebook's re-introduction of facial recognition for European users, and the fact that the feature may be illegal under the GDPR. According to the article:
The Irish DPC has jurisdiction over Facebook because its international services all run out of its Irish headquarters—all users outside North America currently agree to the terms of services coming out of that office.
Now consider that in tandem with the fact that:
The Irish DPC is querying the technology around facial recognition and whether Facebook needs to scan all faces (i.e., those without consent as well) to use the facial recognition technology. The issue of compliance of this feature with GDPR is therefore not settled at this point.
It would seem, given the timing of the move and the unsettled status of the technology under the GDPR, that the decision to move users away from the Irish DPC's jurisdiction has more to do with being able to re-introduce facial recognition in Europe than with any of the other potential reasons.
What do you think?
Why Fight the Strongest Privacy Legislation in the US?
This legislation is called the Biometric Information Privacy Act (BIPA), which requires explicit consent before companies can collect biometric data, such as fingerprints or facial recognition profiles. If they were genuine in wanting to do better, would it not be logical to:
- Take advantage of the introduction of the legislation to show just how genuinely committed they are?
- Ensure that they were prepared for it as soon as possible and that new user controls were as transparent and straightforward as possible?
Unfortunately, according to The Verge, they've been quietly lobbying for an amendment to the proposed legislation which:
Allows companies to collect biometric data without notice or consent as long as it's handled with the same protections as other sensitive data. Companies could also be exempted if they do not sell or otherwise profit from the data, or if it is used only for employment purposes.
To be more specific, according to the post, Facebook isn't actively or directly lobbying for the watering down of the law. However, it notes that they're an active member of the Illinois Chamber of Commerce's Tech Council, which is.
Whereas the move of over 1.5 billion users could have any number of pragmatic reasons for being carried out, this one seems to have less room for doubt. However, let's look deeper.
Facial recognition is a reasonably new area of modern technology. Given that, the implications of being able to scan, store, and track people are far less understood than those of something such as fingerprints or metadata.
Moreover, like anything new, it's being used by corporations and organisations long before people are fully aware that it's happening, before its consequences and implications have become widely known, and before any legal framework has had a chance to keep pace.
As a result, it's logical to expect three things:
- That incumbents would have invested significantly in the technology
- That it would be contributing to their bottom lines
- That they would not want their investment or earnings encroached upon, as The Washington Post reported back in June of 2015.
However, on this matter, I agree with a statement by a group of privacy advocates, including the Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU), which The Daily Dot reported on back in 2015:
At a bare minimum, people should be able to walk down a public street without fear that companies they've never heard of are tracking their every movement—and identifying them by name—using facial recognition technology. Unfortunately, we have been unable to obtain agreement even with that basic, specific premise.
Some of the referenced articles are a few years old, and deliberately so: I wanted to consider a before-and-after perspective. Now let's look at what has changed at Facebook in recent years.
Recently, they re-introduced facial recognition for European users, after it was removed because of privacy concerns five to six years ago. This might sound concerning if you're against it, but according to Facebook spokesperson Rochelle Nadhiri, the feature is off by default.
However, Wired reports that it's not quite that cut and dried:
The new setting respects people's existing choices, so if you've already turned off tag suggestions, then your new face recognition setting will be off by default. If your tag suggestions setting was set to 'friends', then your face recognition setting will be set to on.
So, it's not a clear yes-or-no, opt-in situation. If you're not aware of the implications of tagging, it's reasonable to expect that you'd be confused to find facial recognition already enabled when you'd not explicitly chosen to opt in.
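The migration rule that Wired describes can be sketched as a tiny function. The function and setting names here are mine, purely illustrative:

```python
def migrated_face_recognition(tag_suggestions: str) -> bool:
    """Illustrative sketch of the migration rule quoted above:
    the old tag-suggestions choice is carried over to the new
    face recognition setting, rather than asking users afresh."""
    if tag_suggestions == "off":
        return False   # face recognition stays off
    if tag_suggestions == "friends":
        return True    # face recognition is switched ON
    return False       # conservative assumption for any other value

# A user whose tag suggestions were set to 'friends' ends up with
# face recognition enabled without ever explicitly opting in:
print(migrated_face_recognition("friends"))  # True
```

Written out like this, it's easy to see why users who never thought about tag suggestions could find themselves enrolled in facial recognition by default.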
Moreover, a feature as pervasive as this could become, or already may be, needs a clear-cut choice to opt in or out.
While this may seem concerning, Facebook, even if their notifications about the introduction of facial recognition weren't completely clear, are at least open about the fact that they're rolling it out again.
This stands in stark contrast to Westfield shopping centres (Australia). According to a report in October, 2017:
Every time you walk into a Westfield shopping centre you are being tracked. Sophisticated facial detection software, working in conjunction with the devices you are carrying on you, are monitoring your mood and tracking your every move. The company uses small cameras fixed atop advertising screens, that detect individual faces in order to record the age, gender and mood of shoppers. Westfield also tracks shoppers' movements by pinging their Wi-Fi enabled devices with routers littered across its centres.
The report went on further to say that specific consent isn't required as "the Privacy Act 1988 only regulated the collection of personal and sensitive information".
At least Facebook lets you know what they're doing, even if you have to do some digging, and be somewhat savvy, to appreciate the full ramifications of the choices that you're provided with.
Besides, if you want to opt out at any time, a little googling is all it takes to find out how — even if the process is somewhat convoluted.
Why Give $200,000 to Help Fight Against Increased Privacy Controls?
Now let's consider a third political lobbying effort. According to NBC News, back in February of this year, Facebook gave $200,000 to The Committee to Protect California Jobs. Why's this significant? Because the committee is:
A business-backed political action group dedicated to defeating a state ballot initiative that would expand Californians' privacy controls.
If Facebook were keen to do better, why support a group that's actively lobbying against improved privacy controls for users? Would you not instead put support behind existing groups that are working to improve privacy?
To be fair, the article goes on to report that Facebook has since pulled support from the committee, because:
We took this step in order to focus our efforts on supporting reasonable privacy measures in California.
However, they still gave the money earlier this year, not years ago. As a nod to them though, they have stated that they would adopt the recommendations of the Honest Ads Act, which seeks to regulate online political advertising.
Next, I want you to think about two points that an article in Der Spiegel (German) shared recently:
- We put our online data there ourselves.
- We love to vicariously enjoy the intimate details of the lives of others (e.g., royals, celebrities, and politicians, etc.), but we don't like it when the details of our own lives are shared.
While Facebook does use, and has the power to abuse our data, we do, willingly, hand a lot of it to them in the first place. Now let's dive down the rabbit hole just a little bit further. Let's say you willingly give Facebook (or any other social network or organisation) your data.
Then, let's say you give "friends of friends" access to your data so that your friends' friends can see and comment on your data, or your friends can share your thoughts and posts with their friends. And when all that's in place, let's say that you have a friend who then grants an advertiser access to that data.
How should Facebook handle it? Consider that you have expressly given that friend access to that data and then that friend has explicitly given access to that data to the advertiser. You could say that, technically, Facebook is just doing what you have expressly allowed them to do with your data.
This is what bothers me about how social media platforms work and how our lack of understanding about them can lead to situations that we genuinely don't expect or appreciate.
Given the chain of events that I just described, it's reasonable for Facebook to assert that you allowed them to do what they did. At the same time, it's reasonable to expect that the average user may not fully appreciate what they're consenting to.
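To make that chain of consent concrete, here's a minimal sketch. The data model and all names are hypothetical, purely to illustrate the transitive grants described above:

```python
# Hypothetical model of transitive consent on a social platform.
class Profile:
    def __init__(self, owner: str, audience: str = "friends_of_friends"):
        self.owner = owner
        self.audience = audience     # who the owner has chosen to share with
        self.regrants = set()        # third parties friends have passed access to

def friend_regrants(profile: Profile, grantee: str) -> None:
    """A friend passes access along, e.g. to an app or an advertiser.
    Permitted here because the owner chose 'friends of friends'."""
    if profile.audience == "friends_of_friends":
        profile.regrants.add(grantee)

alice = Profile("alice")                # Alice opts for "friends of friends"
friend_regrants(alice, "ad-network")    # a friend grants an advertiser access

# Every link in the chain was explicitly permitted, yet Alice
# probably never expected "ad-network" to end up with her data.
print("ad-network" in alice.regrants)   # True
```

Each individual grant looks harmless, but composed together they hand data to a party the original user never dealt with directly.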
Chew over that for a moment or two.
That's a Wrap
There's a lot to consider around the use of our personal data: how social media platforms (and other companies) collect, analyse, and use our data, what that means for our privacy in the modern age, and how trusting we should be with our online privacy.
And this article, even if it were twice or three times as long, couldn't hope to discuss it, nor Facebook's motivations or actions, in sufficient detail.
However, I hope that by pulling these threads together, you're in a better position to make an informed choice about how you approach using Facebook and other sites in the future.
Are you going to close your Facebook account? Are you going to continue as before? Alternatively, are you going to take a more considered view as to how you use it, what you share, like, and comment on?
Have you reviewed your Facebook privacy settings, and locked them down as much as you can? If so, what settings did you apply first?