Data Digest № 014

Data Digest | August 4, 2019, 11:00 pm

Welcome to the 14th edition of the Data Digest, which offers a weekly summary of the most important things that happened in the data industry. This week in review: Senator Feinstein’s “Voter Privacy Act,” China’s social credit system, the cyberattack on Capital One, Google’s listening practices, the Stop the Censorship Act and more. Enjoy!

Preventing The Next Cambridge Analytica

En route to the 2020 election, the Cambridge Analytica scandal looms large, and several senators are drafting regulation aimed at preventing this kind of data misuse. One notable proposal comes from Senator Dianne Feinstein, who said that “[p]olitical candidates and campaigns shouldn’t be able to use private data to manipulate and mislead voters. Today, campaigns are legally able to conduct sophisticated online surveillance of everyone in our country in order to influence individuals based on their unique psychological characteristics.” To combat the nonconsensual use of our data for political purposes, Feinstein introduced the “Voter Privacy Act,” which would grant voters five fundamental rights:

  1. Right of access. Voters would be permitted to review any of their own personal information collected by a campaign, candidate or political organization.

  2. Right of notice. Any campaign that receives an individual’s personal information from a data broker (including consumer purchasing history, geolocation, medical information, credit reports, web browsing data and other information) would be required to notify those individuals that their data was obtained.

  3. Right of deletion. Voters would be permitted to instruct a campaign, candidate or political organization to delete their personal information.

  4. Right to prohibit transfer. Voters would be permitted to instruct a campaign, candidate or political organization not to sell their data to a third party.

  5. Right to prohibit targeting. Voters would be permitted to instruct websites like Google and Facebook not to use their data profiles to help political groups target them with psychologically engineered political ads.

Removing politics from the arena of surveillance capitalism is a laudable goal and an important step forward for data privacy and data ownership. As with any other legislation, the devil truly is in the details. While the Voter Privacy Act defines these details much more thoroughly than other draft legislation, e.g. the recently introduced DASHBOARD Act, there are still very clear shortcomings that could prove disastrous for effective implementation. For instance, the Voter Privacy Act exempts “deidentified information.” As the study from Imperial College London that we covered in last week’s Data Digest showed, it takes only 15 data attributes from an anonymized data set to re-identify a person. Another example of lacking precision is the Act’s format requirement that data reports be delivered in a “concise, and easily accessible form, using clear and plain language.” Facebook, for instance, interprets the equivalent term under GDPR as permitting a JSON file, a format that hardly anyone without at least a somewhat advanced understanding of computer science can make sense of.

Another big issue is that the Voter Privacy Act prohibits voters from designating third parties to issue opt-out requests on their behalf. Putting that burden on voters will tremendously decrease the efficacy of the legislation: in the case of Cambridge Analytica, each of the tens of millions of affected US citizens would have needed to personally issue a request to Cambridge Analytica (or the PACs it represented). But the biggest question of all: why apply the proposed rules only in the arena of politics? Why not roll them out across all industries for all US citizens?
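The re-identification point from the Imperial College study is easy to demonstrate. The toy sketch below (all records and attribute names are hypothetical) shows how a handful of quasi-identifiers left in an “anonymized” data set can single out individuals even after names are removed:

```python
from collections import Counter

# Toy "anonymized" records: names stripped, but quasi-identifiers
# (ZIP code, birth year, gender — all hypothetical values) remain.
records = [
    {"zip": "94110", "birth_year": 1984, "gender": "F"},
    {"zip": "94110", "birth_year": 1984, "gender": "M"},
    {"zip": "94110", "birth_year": 1990, "gender": "F"},
    {"zip": "10001", "birth_year": 1984, "gender": "F"},
]

def uniquely_identified(records, attributes):
    """Fraction of records singled out by the given attribute combination."""
    keys = [tuple(r[a] for a in attributes) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(keys)

# ZIP code alone leaves most people hidden in a crowd...
print(uniquely_identified(records, ["zip"]))                          # 0.25
# ...but three attributes combined single out every record.
print(uniquely_identified(records, ["zip", "birth_year", "gender"]))  # 1.0
```

Real data sets have far more rows, but they also carry far more attributes per person, which is why the study found 15 of them sufficient to re-identify nearly anyone.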

A new bill aims to protect US voters from the next Cambridge Analytica

As the 2020 campaign season accelerates, a US lawmaker introduced a bill on Thursday that would regulate how political parties use voters’ data in federal elections. Voter privacy: Democratic senator Dianne Feinstein said the bill, the Voter Privacy Act, is the first to directly respond to Cambridge Analytica, which used Facebook to harvest the data…

The Perils of Stigmatizing China’s Social Credit System

The Chinese government’s deadline for establishing the laws and regulations governing social credit is just over a year away. While the system has been portrayed in Western media as Orwellian machinery designed to lead the Chinese populace into a dystopian surveillance state, Chinese legal researchers say it’s far from the West’s Big Brother portrayal. According to Jeremy Daum, a senior research fellow at Yale Law School’s Paul Tsai China Center in Beijing, the system as it exists today is more a patchwork of regional pilots and experimental projects, with few indications of what might be implemented at a national scale. The fact that the system is not yet finalized in its details is certainly a reminder not to jump to foregone conclusions and to assess which initiatives are in fact real and which aren’t. In that spirit, however, we should keep in mind that in China each person’s cell phone number and online activity is tied to their real name through a unique ID number, that China ranks 177 out of 180 countries in the 2019 World Press Freedom Index, and that it ranks 65 out of 65 in the Freedom on the Net 2017 report by the independent democracy watchdog Freedom House. While Wired’s article goes to great lengths to detail why concerns about the Chinese system are overblown, the argument that we need not worry because no nationwide policies have officially been adopted yet is a bit shaky in light of this small sample of realities. What is important to remember, however, is how easily the Chinese credit system can be used as a downward social comparison to make lesser surveillance seem much more palatable.

“Because China is often held up as the extreme of one end of the spectrum, I think that it moves the goalposts for the whole conversation. So that anything less invasive than our imagined version of social credit seems sort of acceptable, because at least we’re not as bad as China.”

With recent debates about breaking up Big Tech in the US, the main argument delivered by FANG (Facebook, Amazon, Netflix, Google) has been that doing so would place the US at a competitive disadvantage to China, given the ‘collectivist’ data practices of its government and tech giants. This makes it very important to assess how accurately we depict China’s social credit system: one country’s supposed Black Mirror reality may easily lull the West into accepting a real Black Mirror reality that goes far beyond anything implemented in the system we were so intent on staying away from.

How the West Got China's Social Credit System Wrong

It occupies a spot next to 'Black Mirror' and Big Brother in popular imagination, but China’s social credit project is far more complicated than a single, all-powerful numerical score.

Capital One gets hacked

In 2019 alone, 3,494 successful cyberattacks against financial institutions have been reported, according to the Treasury Department’s Financial Crimes Enforcement Network. On Monday, federal law enforcement officials said that Paige Thompson, a software engineer in Seattle who used to work for Amazon, hacked Capital One’s computer network through a “configuration vulnerability” in its security software and obtained the personal data of tens of millions of customers. According to court documents, she was able to download an array of personal material from customers, including credit card applications and Social Security numbers. A breach this deep at one of the most integral institutions in our society showcases the vulnerabilities we are forced to endure when cybersecurity is neglected.

Here are guidelines to determine whether your information has been accessed, along with instructions on how to shore up account security.

  1. Capital One will notify affected individuals through “a variety of channels” and offer free credit monitoring and identity protection available to all affected.

  2. Enroll in account text and/or email alerts to help keep track of activity.

  3. Monitor credit card accounts for unusual or suspicious activity.

  4. Call the number on the back of the credit card if unusual activity is observed.

  5. Stay vigilant about the possibility of phishing emails and calls following the breach. Phishing is a malicious attempt to access personal information or bank accounts by posing as a legitimate company or official.

  6. Capital One is not calling customers to ask for credit card or account information or Social Security numbers over the phone or via email.

  7. Report emails suspected of phishing by forwarding them to Capital One’s official security account. Do not reply to suspicious emails, and delete them after forwarding them to Capital One.

Capital One Breach Shows a Bank Hacker Needs Just One Gap to Wreak Havoc

An engineer in Seattle was charged with stealing the information of millions of customers from Capital One. The incident was surprisingly common.

Tracking Your Every Like

Website owners could face legal liability over Facebook’s “Like” buttons. The European Union Court of Justice ruled on Monday that site owners can be held accountable for transmitting data to Facebook without a user’s consent. Whether this ruling will have any real effect remains to be seen. However, it would mean that sites must obtain consent from users before sending data to Facebook, unless they can demonstrate a “legitimate interest” in doing otherwise. Presently, the data is sent to Facebook before users are given the chance to opt out. Imagining a world where one has to grant permission for their data before liking a photo seems like a pretty complicated solution. Giving users control over their data in the first place would negate these problems entirely.
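The core of the ruling is a sequencing problem: the embed fires a request to Facebook the moment the page loads, before any consent dialog. A common mitigation, sketched here in simplified server-side form (all snippet strings and names are hypothetical, not Facebook’s actual embed code), is to serve an inert placeholder until the visitor opts in, so no third-party request happens pre-consent:

```python
# Minimal consent-gating sketch (assumed names throughout): the real
# embed markup, which triggers a request to Facebook's servers when
# rendered by the browser, is only emitted after the visitor opts in.

FACEBOOK_LIKE_SNIPPET = '<div class="fb-like" data-href="{url}"></div>'
PLACEHOLDER_SNIPPET = '<button class="consent-ask">Enable Like button</button>'

def render_like_button(page_url: str, consent_given: bool) -> str:
    """Return the real embed only for visitors who consented to tracking."""
    if consent_given:
        return FACEBOOK_LIKE_SNIPPET.format(url=page_url)
    # Before consent: an inert placeholder, so no data reaches Facebook.
    return PLACEHOLDER_SNIPPET

print(render_like_button("https://example.com/post", consent_given=False))
```

The same gating can be done client-side by deferring the third-party script until a consent event, but either way the design choice is identical: the tracking request must be downstream of the opt-in, not upstream of it.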

Sites could be liable for helping Facebook secretly track your web browsing, says EU court

Users should consent to being tracked with Like buttons

Google Will Not Listen in on Your Google Home (for Three Months)

Germany’s data protection commissioner announced yesterday that the country is investigating Google’s AI-powered assistant after reports that contractors had been listening to audio recorded by the devices. Devices like Google Home have ‘accidentally’ activated and recorded conversations inside people’s homes, and contractors found themselves listening in. In its statement, the German commission pointed to growing privacy concerns, stating that “the use of automatic speech assistants from providers such as Google, Apple and Amazon is proving to be highly risky for the privacy of those affected.” Recent reports suggest Apple and Amazon workers also listen to recordings to improve Siri and Alexa, and German regulators are summoning other speech-assistant providers, including Apple and Amazon, to “swiftly review” their policies.

In an unexpectedly speedy response, Apple announced last night that it would temporarily suspend its practice of using human contractors to grade snippets of recordings. This comes a week or so after a report in the Guardian detailed a contractor’s account of regularly hearing confidential medical information, drug deals, and recordings of couples having sex as part of quality-control work for Siri. Google said it will stop listening to and transcribing recordings across the European Union for at least three months while the regulator looks into the issue.

Google will pause listening to EU voice recordings while regulators investigate

Germany’s data protection commissioner is investigating

Big Tech and Censorship

Last week saw the introduction of another bill targeting Big Tech and its ability to moderate speech on its platforms. The ‘Stop The Censorship Act,’ sponsored by Rep. Paul Gosar (R-AZ), proposes to strike language in Section 230 of the Communications Decency Act that allows platforms to moderate content they deem “objectionable.” Instead, Gosar argues, new language should supplant the existing text, giving users “the option for a self-imposed safe space, or unfettered free speech, whichever the user chooses.” Gosar argues the current language in Section 230 enables platforms like Facebook and Twitter to censor free speech. The proposal comes at a time when the debate about Big Tech’s role in society and democracy has emerged as a bipartisan focal point, with both parties attacking Silicon Valley’s poster children from vastly different vantage points. While Democrats see platforms such as Facebook and Google as monopolists extracting valuable consumer surplus and being utilized by third parties as a vessel to undermine the integrity of the US electoral process, Republicans argue that provisions like Section 230 empower Big Tech to censor content under the auspices of consumer protection while actually being motivated by political bias, endangering the foundation of free speech.

Big Tech’s liability shield under fire yet again from Republicans

The Stop the Censorship Act is, uhhh, a thing that exists

What I'm Reading:

A VxWorks Operating System Bug Exposes 200 Million Critical Devices

VxWorks is designed as a secure, "real-time" operating system for continuously functioning devices, like medical equipment, elevator controllers, or satellite modems.

Banks Adopt Military-Style Tactics to Fight Cybercrime

Financial institutions are using military tools and techniques, like “fusion centers” and combat drills, to battle cybercrime.

Deleting your Siri voice recordings from Apple’s servers is confusing — here’s how

It’s way too hard
