
You may not be aware that the rough consensus in the tech community is that Cambridge Analytica were almost certainly bullshit artists. Oh, don’t get me wrong, what they tried to do, and/or claimed to do, was super shady and amoral and would have been ruinous to reasonably informed democracy if successful. But I have yet to find anyone credible who thinks that their “psychographics” approach actually works.

That is: it doesn’t work yet.

We may yet thank Cambridge Analytica for inadvertently raising the red flag before the real invaders are at the gate. Because in this era of ongoing exponential growth in data collection, which is also an era of revolutionary, transformative AI advancement, it’s probably only a matter of time before electorates can be subtly manipulated wholesale.

Don’t take my word for it. Take that of Google senior AI / deep-learning researcher François Chollet, in a truly remarkable Twitter thread:

The problem with Facebook is not *just* the loss of your privacy and the fact that it can be used as a totalitarian panopticon. The more worrying issue, in my opinion, is its use of digital information consumption as a psychological control vector. Time for a thread

— François Chollet (@fchollet) March 21, 2018

To quote the crux of what he’s saying:

In short, Facebook can simultaneously measure everything about us, and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior … A loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see … A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible
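To make that loop concrete, here is a minimal, deliberately toy sketch of the kind of optimization Chollet describes: an epsilon-greedy bandit that keeps showing a simulated user whichever story best shifts a measured opinion signal. Everything in it (the stories, the simulated response, the numbers) is an invented illustration for this post, not anything Facebook is known to run.

```python
import random

STORIES = ["story_a", "story_b", "story_c"]
EPSILON = 0.1  # fraction of rounds spent exploring alternatives

def simulated_response(story: str) -> float:
    """Stand-in for 'observing the current state of your targets':
    a noisy score for how far the (simulated) user moved toward the
    desired opinion after seeing this story. Values are invented."""
    base = {"story_a": 0.2, "story_b": 0.5, "story_c": 0.8}[story]
    return base + random.gauss(0, 0.1)

def optimize_feed(rounds: int = 1000) -> dict:
    totals = {s: 0.0 for s in STORIES}
    counts = {s: 0 for s in STORIES}
    for _ in range(rounds):
        if random.random() < EPSILON:  # explore: try any story
            story = random.choice(STORIES)
        else:  # exploit: show whatever has shifted opinion most so far
            story = max(
                STORIES,
                key=lambda s: totals[s] / counts[s] if counts[s] else 0.0,
            )
        reward = simulated_response(story)  # measure the behavior shift
        totals[story] += reward
        counts[story] += 1
    return {s: totals[s] / counts[s] for s in STORIES if counts[s]}

if __name__ == "__main__":
    print(optimize_feed())  # converges on the most opinion-moving story
```

The point of the sketch is the shape of the loop, not the numbers: perception (the measured response) feeds action (the next story shown), and the system mechanically converges on whatever moves behavior.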

Does this sound excessively paranoid? Then let me direct your attention to another eye-opening recent Twitter thread, in which Jeremy Ashkenas enumerates a set of Facebook’s more dystopian patent applications:

You know, I really hate to keep beating a downed zuckerberg, but to the extent that expensive patents indicate corporate intent and direction —

Come along for a ride, and let’s browse a few of Facebook’s recent U.S.P.T.O. patent applications…

— Jeremy Ashkenas (@jashkenas) April 4, 2018

Again let me just summarize the highlights:

a system that watches your eye movements […] record information about actions users perform on a third party system, including webpage viewing histories, advertisements that were engaged, purchases made […] “Data sets from trusted users are then used as a training set to train a machine learning model” […] “The system may monitor such actions on the online social network, on a third-party system, on other suitable systems, or any combination thereof” […] ‘An interface is then exposed for “Political analysts” or “marketing agencies”’ […]

Are those quotes out of context? They sure are! So I encourage you to explore the context. I think you’ll find that, as Ashkenas puts it, “these patent applications don’t necessarily mean that Facebook wants to use any of these techniques. Instead, they illustrate the kinds of possibilities that Facebook management imagines.”
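For a sense of what “trusted users as a training set” means mechanically, here is a hedged toy sketch: fit a model on labeled action logs from a few users, then score everyone else’s logs. The features, labels, and numbers are all invented for illustration; this is a generic scikit-learn pattern, not Facebook’s actual pipeline.

```python
from sklearn.linear_model import LogisticRegression

# One row per "trusted user": [pages_viewed, ads_engaged, purchases_made]
# (hypothetical features echoing the patent's list of monitored actions)
trusted_actions = [
    [120, 9, 3],
    [15, 0, 0],
    [88, 6, 2],
    [30, 1, 0],
]
# Hypothetical label, e.g. "responded to a targeted political message"
trusted_labels = [1, 0, 1, 0]

model = LogisticRegression().fit(trusted_actions, trusted_labels)

# The trained model can then score any user whose actions are observed,
# with the scores exposed through some analyst-facing interface.
print(model.predict_proba([[95, 7, 1]]))  # [[P(no), P(yes)]]
```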

An explosion of data. A revolution in AI, which uses data as its lifeblood. How could any tech executive not imagine using these dramatic developments in new and groundbreaking ways? I don’t want to just beat up on Facebook. They are an especially easy target, but they are not the only fish in this barrel:

I find it incomprehensible how Google-associated people still comment critically on Facebook's business practices when 84% of their revenue (and what pays for all the free services and research) comes from precisely the targeted advertising that's suddenly so contemptible. https://t.co/U8UbXXz6Df

— Antonio García Martínez (@antoniogm) March 25, 2018

Here’s yet another viral privacy-related Twitter thread, this time from Dylan Curran, illustrating just how much data Facebook and Google almost certainly have on you:

Want to freak yourself out? I'm gonna show just how much of your information the likes of Facebook and Google store about you without you even realising it

— Dylan Curran (@iamdylancurran) March 24, 2018

Mind you, it seems fair to say that Google takes the inherent dangers and implicit responsibility of all this data collection, and the services it provides with that data, far, far more seriously than Facebook does. Facebook’s approach to potential malfeasance has been … well … let me point you to still another Twitter thread, this one from former Google Distinguished Engineer Yonatan Zunger, who I think speaks for all of us as he reacts to reports of Facebook’s CTO saying “the company is now mapping out potential threats from bad actors before it launches products.”

The above tweet is much, much less obscene than what I just said out loud.

*twitch*

— (((Yonatan Zunger))) (@yonatanzunger) April 5, 2018

But the larger point is that the problem is not restricted to Facebook, or Google, or the big tech companies. It’s more acute with them, since they have more data and more power, and, in Facebook’s case, very little apparent sense that with great power comes great responsibility.

How in holy hell does a project like this get far enough to actually have talks with hospitals about it?!? They wanted anonymized medical data to essentially deanonymize by hashing against profile data, but patient consent didn’t come up?? https://t.co/qqW4K2hoFM pic.twitter.com/fxT8IPAtoi

— Angela Bassa (@AngeBassa) April 6, 2018
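It is worth spelling out why “hashing against profile data” defeats anonymization: if two datasets hash the same quasi-identifiers, the hashes join records exactly as reliably as the raw values would. A minimal sketch, with entirely fabricated records and field names:

```python
import hashlib

def key(name: str, dob: str) -> str:
    """Hash a pair of quasi-identifiers into a join key."""
    return hashlib.sha256(f"{name}|{dob}".encode()).hexdigest()

# "Anonymized" medical records: identifiers replaced with hashes.
medical = {key("Jane Doe", "1980-01-15"): "diagnosis: hypertension"}

# Profile data held by a platform, with the same fields in the clear.
profiles = [("Jane Doe", "1980-01-15", "user_8675309")]

# Re-identification is then just a dictionary lookup.
for name, dob, user_id in profiles:
    record = medical.get(key(name, dob))
    if record:
        print(f"{user_id} -> {record}")
```

Hashing hides the values from casual inspection, but anyone holding the same fields in the clear can recompute the keys and link the records.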

But the real problem is far more fundamental. When your business model turns data into money, you are, implicitly, engaging in surveillance capitalism.

Surveillance and privacy are issues not limited to businesses, of course; consider the facial recognition goggles already in use by Chinese police, or India’s colossal fingerprint-face-and-retina-driven Aadhaar program, or dogged attempts by the UK and US governments to use your phone or your Alexa as their surveillance device. But corporations currently seem to be the sharp and amoral edge of this particular wedge, and we have no real understanding of how to mitigate or eliminate the manifold and growing dangers of their surveillance capitalism.

I’m not saying all data gathering is ipso facto bad; but I am saying that, given the skyrocketing quantity of sensors and data in our world, and the ability to tie that data to individuals, any initiatives which support privacy, pseudonymity, and anonymity should be considered desirable until proven otherwise. The ever-growing oceans of data threaten to wash those lonely islands away.

I’m certainly not saying AI is bad. Its potential for improving our world is immense. But as with every powerful tool, we need to start thinking about its potential misuses and side effects before we rush to use it at scale.

And I’m saying we should almost be grateful to Cambridge Analytica, for selling snake oil which claimed to do what tomorrow’s medicines actually will. Let’s not overreact with a massive moral panic. (checks Internet) Whoops, too late. OK, fine — but let’s try to take advantage of this sense of panic to calmly and rationally forestall the real dangers before they arrive.
