Unsealed docs in Facebook privacy suit offer glimpse of missing app audit

September 16, 2022

It’s not the crime, it’s the cover-up… The scandal-hit company formerly known as Facebook has fought for over four years to keep a lid on the gory details of a third party app audit that its founder and CEO, Mark Zuckerberg, personally pledged would be carried out back in 2018 — as he sought to buy time to purge a spreading reputational stain after revelations about data misuse went viral at the peak of the Cambridge Analytica privacy crisis.

But some details are emerging nonetheless — extracted like blood from a stone via a tortuous, multi-year process of litigation-triggered legal discovery.

A couple of documents filed by plaintiffs in user-profiling privacy litigation in California, which were unsealed yesterday, offer details on a handful of apps Facebook audited and on internal reports about what it found.

The revelations provide a glimpse into the privacy-free zone Facebook was presiding over when a “sketchy” data company helped itself to millions of users’ data, the vast majority of whom did not know their info had been harvested for voter-targeting experiments.

Two well-known companies identified in the documents as having had apps audited by Facebook as part of its third party sweep — referred to in the documents as the ADI, aka “App Developer Investigation” — are Zynga (a games maker) and Yahoo (a media and tech firm which is also the parent entity of TechCrunch).

Both firms produced apps for Facebook’s platform which, per the filings, appeared to have extensive access to users’ friends’ data, suggesting they would have been able to acquire data on far more Facebook users than had downloaded the apps themselves — including some potentially sensitive information.

Scraping Facebook friends’ data — via a ‘friends permissions’ data access route that Facebook’s developer platform provided — was also, of course, the route through which the disgraced data company Cambridge Analytica acquired information on tens of millions of Facebook users, the vast majority of whom neither knew nor consented, versus the hundreds of thousands who actually downloaded the personality quiz app used as the point of entry into Facebook’s people farm.

“One ADI document reveals that the top 500 apps developed by Zynga — which had developed at least 44,000 apps on Facebook — could have accessed the ‘photos, videos, about me, activities, education history, events, groups, interests, likes, notes, relationship details, religion/politics, status, work history, and all content from user-administered groups’ for the friends of 200 million users,” the plaintiffs write. “A separate ADI memorandum discloses that ‘Zynga shares social network ID and other personal information with third parties, including advertisers’.”

“An ADI memo concerning Yahoo, impacting up to 123 million users and specifically noting its whitelisted status, revealed that Yahoo was acquiring information ‘deem[ed] sensitive due to the potential for providing insights into preferences and behavior’,” they write in another filing. “It was also ‘possible that the [Yahoo] App accessed more sensitive user or friends’ data than can be detected.’”

Other examples cited in the documents include a number of apps created by a developer called AppBank — which made quiz apps, virtual-gifting apps, and social gaming apps — and which Facebook’s audit found to have access to permissions (including friends permissions) that it said “likely” fell outside the apps’ use cases, and/or for which there was “no apparent use case”.

Another app, called Sync.Me, which operated from before 2010 until at least 2018, was reported to have had access to more than 9M users’ friends’ locations, photos, websites, and work histories; and to more than 8M users’ read_stream information (meaning the app could access those users’ entire newsfeeds, regardless of privacy settings applied to different newsfeed entries), per the audit — with such permissions also reported to be out of scope for the app’s use case.

An app called Social Video Downloader, meanwhile, which was on Facebook’s platform from around 2011 through at least 2018, was reported to be able to access more than 8M users’ “friends’ likes, photos, videos, and profile information” — data collection which Facebook’s internal investigation suggested “may speak to an ulterior motive by the developer”. The company also concluded the app likely “committed serious violations of privacy” — further observing that “the potential affected population and the amount of sensitive data at risk are both very high”.

Apps made by a developer called Microstrategy were also found to have collected “vast quantities of highly sensitive user and friends permissions”.

As the plaintiffs argue for sanctions to be imposed on Facebook, they attempt to calculate a theoretical maximum for the number of people whose data could have been exposed by just four of the aforementioned apps via the friends permission route — using 322 friends per user as a measure for their exercise and ending up with a figure of 74 billion people (i.e. many multiples greater than the human population of the entire planet) — an exercise they say is intended “simply to show that that number is huge”.

“And because it is huge, it is highly likely that most everyone who used Facebook at the same time as just these few apps had their information exposed without a use case,” they go on to argue — further noting that the ADI “came to similar conclusions about hundreds of other apps and developers”.

Let that sink in.
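For a sense of scale, the plaintiffs’ arithmetic is simple multiplication: the number of users the apps reached, times an average friend count. The unsealed excerpts don’t itemize the per-app inputs feeding the 74 billion total, so the minimal sketch below just takes the two figures that are quoted — 322 friends per user and the roughly 74 billion result — and inverts them to show what they imply:

```python
# Back-of-the-envelope check of the plaintiffs' "theoretical maximum".
# Only the 322 friends-per-user average and the ~74 billion total are taken
# from the filing; the per-app user counts are not itemized in the unsealed
# excerpts, so we simply invert the arithmetic to see what they imply.

FRIENDS_PER_USER = 322            # average used in the plaintiffs' exercise
THEORETICAL_MAXIMUM = 74e9        # exposed-friends figure cited in the filing
WORLD_POPULATION_2022 = 7.9e9     # rough global population, for comparison

implied_app_users = THEORETICAL_MAXIMUM / FRIENDS_PER_USER

print(f"Implied app users across the four apps: ~{implied_app_users / 1e6:.0f} million")
print(f"Multiple of the planet's population: ~{THEORETICAL_MAXIMUM / WORLD_POPULATION_2022:.1f}x")
```

In other words, a couple of hundred million app users, multiplied out without de-duplicating overlapping friend lists, is enough to produce a nominal exposure figure many times the planet’s population — which is precisely the point the plaintiffs say the exercise is meant to make.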

(The plaintiffs also note they still can’t be sure whether Facebook has provided all the information they’ve asked for re: the app audit — with their filing arguing the company’s statements on this have “consistently proven false”, and further noting “it remains unclear whether Facebook has yet complied with the orders”. So a full picture still does not appear to have surfaced.)

App audit? What app audit?

The full findings of Facebook’s internal app audit have never been made public by the tech giant — which rebooted its corporate identity as Meta last year in a bid to pivot beyond years of accumulated brand toxicity.

In the early days of its crisis PR response to the unfolding data horrors, Facebook claimed to have suspended around 200 apps pending further probes. But after that early bit of news, voluntary updates on Zuckerberg’s March 2018 pledge to audit “all” third party apps with access to “large amounts of user info” before a change to permissions on its platform in 2014 — and a parallel commitment to “conduct a full audit of any app with suspicious activity” — dried up.

Facebook comms simply went dark on the audit — ignoring journalist questions about how the process was going and when it would be publishing results.

While there was high level interest from lawmakers when the scandal broke, Zuckerberg only had to field relatively basic questions — leaning heavily on his pledge of a comprehensive audit and telling an April 2018 hearing of the House Energy and Commerce Committee that the company was auditing “tens of thousands” of apps, for example, which sure made the audit sound like a big deal.

The announcement of the app audit helped Facebook sidestep discussion and closer scrutiny of what kind of data flows it was looking at — and why it had allowed all this sensitive access to people’s information to go on under its nose for years while simultaneously telling users their privacy was safe on its platform, ‘locked down’ by a policy claim which stated (wrongly) that their data could not be accessed without their permission.

The tech giant even secured the silence of the UK’s data protection watchdog — which, via its investigation of Cambridge Analytica’s UK base, hit Facebook with a £500k sanction in October 2018 for breaching local data protection laws. After appealing the penalty, Facebook agreed a 2019 settlement in which it paid up but did not admit liability — and got the Information Commissioner’s Office to sign a gag order which, the sitting commissioner told parliamentarians in 2021, prevented the regulator from responding to questions about the app audit in a public committee hearing.

So Facebook has succeeded in keeping democratic scrutiny of its app audit closed down. 


Also in 2019, the tech giant paid the FTC $5BN to buy its leadership team what one dissenting commissioner referred to as “blanket immunity” for their role in Cambridge Analytica.

Then, only last month, it moved to settle the California privacy litigation which has unearthed these ADI revelations (how much it’s paying to settle isn’t clear).

After years of the suit being bogged down by Facebook’s “foot-dragging” over discovery, as the plaintiffs tell it, Zuckerberg and former COO Sheryl Sandberg were finally due to give 11 hours of deposition testimony this month. But then the settlement intervened.

So Facebook’s determination to shield senior execs from probing questions linked to Cambridge Analytica remains undimmed.

The tech giant’s May 2018 newsroom update about the app audit — which appears to contain the sole official ‘progress’ report in four+ years — has just one piece of “related news” in a widget at the bottom of the post. This links to an unrelated report in which Meta attempts to justify shutting down independent research into political ads and misinformation on its platform which was being undertaken by academics at New York University last year — claiming it’s acting out of concern for user privacy.

It’s a brazen attempt by Meta to repurpose and extend the blame-shifting tactics it successfully deployed around the Cambridge Analytica scandal — i.e. claiming the data misuse was the fault of a single ‘rogue actor’ breaching its platform policies — as it now tries to reposition itself as a user privacy champion (lol!) and weaponize that self-appointed guardianship as an excuse to banish independent scrutiny of its ads platform by closing down academic research. How convenient!

That specific self-serving, anti-transparency move against NYU earned Meta a(nother) rebuke from lawmakers.

More rebukes may be coming — and, potentially, more privacy sanctions too, as the unsealed documents provide some other eyebrow-raising details that should be of interest to privacy regulators in Europe and the US.

Questions about data retention and access

Notably, the unsealed documents offer some details related to how Facebook stores user data — or rather pools it into a giant data lake — which raises questions about how or even whether it is able to correctly map and apply controls once people’s information is ingested so that it can, for example, properly reflect individuals’ privacy choices (as may be legally required under laws like the EU’s GDPR or California’s CCPA). 

We’ve had a glimpse of these revelations before — via a leaked internal document obtained by Motherboard/Vice earlier this year. But the unsealed documents offer a slightly different view as it appears that Facebook, via the multi-year legal discovery wrangling linked to this privacy suit, was actually able to fish some data linked to named individuals out of its vast storage lake.

The internal data warehousing infrastructure is referred to in the documents as “Hive” — an infrastructure which, per the filings, “maintains and facilitates the querying of data about users, apps, advertisers, and near-countless other types of information, in tables and partitions”.

The backstory here is that the plaintiffs sought data on named individuals stored in Hive during discovery. But they write that Facebook spent years claiming there was no way for it “to run a centralized search for” data that could be associated with individuals (aka the Named Plaintiffs) “across millions of data sets” — additionally claiming at one point that “compiling the remaining information would take more than one year of work and would require coordination across dozens of Facebook teams and hundreds of Facebook employees” — and generally arguing that the information provided by the user-accessible ‘Download Your Information’ tool was the only data the company could supply vis-à-vis individual users (or, in this case, in response to discovery requests for information on the Named Plaintiffs).

Yet the plaintiffs subsequently learned — via a deposition in June — that Facebook had data from 137 Hive tables preserved under a litigation hold for the case, at least some of which contained Named Plaintiffs data. Additionally they discovered that 66 of the 137 tables that had been preserved contained what Facebook referred to as “user identifiers”.
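To make the dispute concrete: in a warehouse whose tables are partitioned by things like date rather than keyed to a person, pulling out one individual’s records means scanning every table and every partition for matches — which is the shape of the burden Facebook was claiming. The deliberately simplified sketch below (with entirely hypothetical table names and rows, not Facebook’s actual schema) shows that lookup pattern:

```python
# Toy illustration of a per-user lookup across a date-partitioned warehouse.
# Table names, partitions and rows are hypothetical -- this is not Facebook's
# schema, just the general "scan every table and partition" access pattern.
from collections import defaultdict

# Layout: {table_name: {partition: [rows]}}
warehouse = {
    "ad_impressions": {
        "ds=2018-03-01": [{"user_id": 42, "ad": "a1"}, {"user_id": 7, "ad": "a2"}],
        "ds=2018-03-02": [{"user_id": 42, "ad": "a3"}],
    },
    "app_installs": {
        "ds=2018-03-01": [{"user_id": 7, "app": "quiz"}],
    },
}

def rows_for_user(user_id):
    """Brute-force scan of every table and partition for one user's rows."""
    hits = defaultdict(list)
    for table, partitions in warehouse.items():
        for partition, rows in partitions.items():
            for row in rows:
                if row.get("user_id") == user_id:
                    hits[table].append((partition, row))
    return dict(hits)

print(rows_for_user(42))
```

The plaintiffs’ point, per the filings, is that dozens of the preserved tables turned out to carry exactly this kind of “user identifier” column — meaning a search of this sort was possible, however much engineering it might demand at Facebook’s scale.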

So the implication here is that Facebook failed to provide information it should have provided in response to a legal discovery request for data on Named Plaintiffs.

Plus of course other implications flow from that… about all the data Facebook is holding (on to) vs what it may legally be able to hold.

“For two years before that deposition, Facebook stonewalled all efforts to discuss the existence of Named Plaintiffs’ data beyond the information disclosed in the Download Your Information (DYI) tool, insisting that to even search for Named Plaintiffs’ data would be impossibly burdensome,” the plaintiffs write, citing a number of examples where the company claimed it would require unreasonably large feats of engineering to identify all the information they sought — and going on to note that it was not until they were able to take “the long-delayed sworn testimony of a corporate designee” that the truth came out (i.e. that Facebook had identified Hive data linked to the Named Plaintiffs but had just kept it quiet for as long as possible).

“Whether Facebook will be required to produce the data it preserved from 137 Hive tables is presently being discussed,” they further observe. “Over the last two days, the parties each identified 250 Hive tables to be searched for data that can be associated with the Named Plaintiffs. The issue of what specific data from those (or other) tables will be produced remains unresolved.”

They also write that “even now, Facebook has not explained how it identified these tables in particular and its designee was unable to testify on the issue” — so the question of how exactly Facebook retrieved this data, and the extent of its ability to retrieve user-specific data from its Hive lake more generally, is not clear.

A footnote in the filing expands on Facebook’s argument against providing Hive data to the plaintiffs — saying the company “consistently took the position that Hive did not contain any relevant material because third parties are not given access to it”.

Yet the same note records that Facebook’s corporate deponent recently (and repeatedly) testified that Hive “contain[s] logs that show every ad a user has seen” — data which the plaintiffs confirm Facebook has still not produced.

Every ad a user has seen sure sounds like user-linked data. It would also certainly be, at least under EU law, classed as personal data. So if Facebook is holding such data on European users it would need a legal basis for the processing — and would also need to be able to provide the data if users ask to review it, or delete it if they request that (and so on, under GDPR data access rights).

But it’s not clear whether Facebook has ever provided users with such access to everything about them that washes up in its lake.

Given how hard Facebook fought to deny legal discovery of the Hive data-set for this litigation, it seems unlikely to have made any such disclosures in response to user data access requests elsewhere.

Gaps in the narrative

There’s more too! An internal Facebook tool — called “Switchboard” — is also referenced in the documents.

This is said to be able to take snapshots of information which, the plaintiffs also eventually discovered, contained Named Plaintiffs’ data that did not surface via the (basic) DYI tool.

Plus, per Facebook’s designee’s deposition testimony, Facebook “regularly produces Switchboard snapshots, not DYI files, in response to law enforcement subpoenas for information about specific Facebook users”.

So, er, the gap between what Facebook tells users it knows about them (via DYI) and the much vaster volumes of profiling data it acquires and stores in Hive — which can, at least some of the time per these filings, be linked to individuals (and some of which Facebook may provide in response to law enforcement requests on users) — keeps getting bigger.

Facebook’s DYI tool, meanwhile, has long been criticized as providing only a trivial slice of the data the company processes on and about users — with Facebook electing to evade wider data access requirements by applying an overly narrow definition of user data (i.e. as stuff users themselves actively uploaded). And those making so-called Subject Access Requests (SARs) under EU data law have — for years — found Facebook frustrating their expectations, as the data they get back is far more limited than what they’ve been asking for. (Yet EU law is clear that personal data is a broad-church concept that absolutely includes inferences.)

If Hive contains every ad a user has seen, why not every link they ever clicked on? Every profile they’ve ever searched for? Every IP address they’ve logged on from? Every third party website they’ve ever visited that contains a Facebook pixel or cookie or social plugin, and so on, and on… (At this point it also pays to recall the data minimization principle baked into EU law — a fundamental principle of the GDPR which states you should only collect and process personal data that is “necessary” for the purpose it’s being processed for. And ‘every ad you’ve ever viewed’ sure sounds like a textbook definition of unnecessary data collection to this reporter.)

The unsealed documents in the California lawsuit relate to motions seeking sanctions over Meta’s conduct — including its conduct towards legal discovery itself, as the plaintiffs accuse the company of making numerous misrepresentations, reckless or knowing, in order to delay or thwart full discovery related to the app audit — arguing its actions amount to “bad-faith litigation conduct”.

They also press for Facebook to be found to have breached a contractual clause in the Data Use Policy it presented to users between 2011 and 2015 — which stated that: “If an application asks permission from someone else to access your information, the application will be allowed to use that information only in connection with the person that gave the permission and no one else” — arguing they have established a presumption that Facebook breached that contractual provision “as to all Facebook users”.

“This sanction is justified by what ADI-related documents demonstrate,” the plaintiffs argue in one of the filings. “Facebook did not limit applications’ use of friend data accessed through the users of the apps. Instead, Facebook permitted apps to access friend information without any ‘use case’ — i.e., without a realistic use of ‘that information only in connection with’ the app user.”

“In some cases, the app developers were suspected of selling user information collected via friend permissions, which obviously is not a use of data ‘only in connection with the person that gave the permission and no one else’,” they go on. “Moreover, the documents demonstrate that the violations of the contractual term were so pervasive that it is near certain they affected every single Facebook user.”

This is important because, as mentioned before, a core plank of Facebook’s defence against the Cambridge Analytica scandal when it broke was to claim it was the work of a rogue actor — a lone developer on its platform who had, unbeknownst to the company, violated policies it claimed protected people’s data and safeguarded their privacy.

Yet the glimpse into the results of Facebook’s app audit suggests many more apps were similarly helping themselves to user data via the friends permissions route Facebook provided — and, in at least some of these cases, these were whitelisted apps which the company itself must have approved, so those at least were data flows Facebook should absolutely have been fully aware of.

The man Facebook sought to paint as the rogue actor on its platform — professor Aleksandr Kogan, who signed a contract with Cambridge Analytica to extract Facebook user data on its behalf by leveraging his existing developer account on its platform — essentially pointed all this out in 2018, when he accused Facebook of not having a valid developer policy because it simply did not apply the policy it claimed to have. (Or: “The reality is Facebook’s policy is unlikely to be their policy,” as he put it to a UK parliamentary committee at the time.)

Facebook’s own app audit appears to have reached much the same conclusion — judging by the glimpse we can spy in these unsealed documents. Is it any wonder we haven’t seen a full report from Facebook itself?

The reference to “some cases” where app developers were suspected of selling user information collected via friend permissions is another highly awkward reveal for Facebook — which has been known to roll out a boilerplate line that it ‘never sells user information’ — spreading a distractingly reassuring gloss to imply its business has strong privacy hygiene.

Of course it’s pure deflection — since Meta monetizes its products by selling access to its users’ attention via its ad targeting tools, it can claim disinterest in selling their data — but the revelation in these documents that some of the app developers Facebook had allowed on its platform back in the day might have been doing exactly that (selling user data), after they’d made use of Facebook’s developer tools and data access permissions to extract intel on millions (or even billions) of Facebook users, cuts very close to the bone.

It suggests senior leadership at Facebook was — at best — just a few steps removed from the actual trading of Facebook user data, having encouraged a data free-for-all that was possible exactly because a platform built to be systematically hostile to user privacy internally was also structured as a vast data takeout opportunity for the thousands of outside developers Zuckerberg invited in, soon after he’d pronounced privacy over and rolled up his sleeves for growth.

The same CEO is still at the helm of Meta — inside a rebranded corporate mask which was prefigured, in 2019, by a roadmap swerve that saw him claim to be ‘pivoting to privacy’. But if Facebook had already gone so all-in on opening up access to user data, as the plaintiffs’ suit contends, where else was left for Zuckerberg to turn as he prepared his next trick?


Unsealed docs in Facebook privacy suit offer glimpse of missing app audit by Natasha Lomas originally published on TechCrunch
