By: Annie Dubois
March 17th, 2022
This article is from our Spring 2022 Magazine Issue.
Behaviors that seem random—choosing to eat Burger King for lunch, for example—are actually quite predictable for global tech companies. In fact, these companies can probably predict that someone will eat a Whopper for lunch before they even feel a pang of hunger. On the surface, someone’s lunch order is mundane information; however, for advertising companies, these predictions are worth gold.
The past two decades have seen a significant increase in the commodification of personal information, and collecting this information has only gotten easier thanks to the devices people use every day. Internet users went from innocently emailing in the early 2000s to having their dining decisions influenced by tech and advertising companies.
The exploitative collection of personal data for profit is known as surveillance capitalism. Shoshana Zuboff, author of “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power,” described it in an interview with The Guardian as “a rogue mutation of capitalism marked by concentrations of wealth, knowledge and power unprecedented in human history,” one that allows surveillance capitalists to “sell certainty to business customers who would like to know with certainty what we do … to tune and herd and shape and push us in the direction that creates the highest probability of their business success.”
Surveillance capitalism leaves no stone unturned in its effort to maximize profit. Mobile phones, computers, web browsers and apps are the key to people’s most private and personal information. Google, for example, may have started as a convenient way to access information online, but people never predicted that Google would be searching them as well.
“One of the key goals of the aggregation of data is deanonymization, to understand who is in front of an ad, who is running a search, who is the person consuming information presented by a company,” said Claire Garvey, senior associate at the Center on Privacy & Technology at Georgetown Law. “This is not just [about] our search history or the fact I might have a dog, but it’s about what pieces of information exist out there that create a generalized and highly specific picture about who I am and if I’m comfortable with that picture being created.”
Most information is tracked through app and web activity. Each user’s device carries a unique device ID, or “identifier for advertisers,” that allows third parties to track activity across apps and websites. Alongside this ID, websites use cookies: small text files saved on a user’s device that identify it on return visits. Cookies can store a user’s login information and record which advertisements they interact with. The information they collect is then aggregated, analyzed and sold to businesses that want to influence people’s buying behavior.
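To make the mechanism concrete, here is a deliberately simplified sketch of how a tracking cookie ties browsing activity on unrelated sites back to one profile. All the names and the data structure are illustrative assumptions; real ad-tech infrastructure is vastly more complex.

```python
import uuid

class ThirdPartyTracker:
    """Toy model of a third-party tracker: one cookie ID, one growing profile.
    Illustrative only -- not any real ad network's design."""

    def __init__(self):
        self.profiles = {}  # cookie ID -> list of (site, action) events

    def handle_request(self, cookie_id, site, action):
        # If the browser sends no cookie, mint a new ID ("Set-Cookie").
        if cookie_id is None or cookie_id not in self.profiles:
            cookie_id = uuid.uuid4().hex
            self.profiles[cookie_id] = []
        # Every later request carrying this ID extends the same profile.
        self.profiles[cookie_id].append((site, action))
        return cookie_id

tracker = ThirdPartyTracker()
cid = tracker.handle_request(None, "news.example", "read article")
tracker.handle_request(cid, "shop.example", "viewed ad")
# One ID now links activity across two unrelated sites.
```

The key point the sketch illustrates is that the tracker never needs a name or email address: the cookie ID alone is enough to accumulate a detailed behavioral profile.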
“If you think of targeted ads as a big part of surveillance capitalism, a lot of what it’s about is reducing uncertainty,” said Nick Feamster, professor of Computer Science and director of the Center for Data and Computing at the University of Chicago. “Nobody knows what the best model for reducing uncertainty is. The basic fallback plan is to just collect more data. No one knows which features produce the best models to reduce uncertainty, so they collect all of it and figure it out later. The answer is more data, and as we get more technology, it’s easier to get more and more data of more types.”
Users make this data collection easy as they amass colossal amounts of data every day. According to journalist and internet expert Trevor Wheelwright in an article for Reviews.org, the average U.S. smartphone user picks up their device 262 times per day. Whether that be to text a friend, scroll aimlessly through Instagram or search restaurants in the area, every tap and text is tracked.
Most of this tracking is unavoidable. Websites usually don’t allow users to view a webpage without accepting cookies. Even if someone doesn’t accept cookies or deletes all previous cookie tracking, other technologies like cookie syncing can still identify their computer.
Cookie syncing allows third-party trackers from different advertising companies to enter data-sharing agreements and map their user IDs to one another across platforms. This means that even if a user rejects cookies on one website, another website where they accepted cookies in the past can share their user ID under such an agreement.
“The gist of it is, you can delete your cookies, but someone knows you’re back and they put the cookie right back in,” said Feamster. “Cookie deletion is theatre. Two sites can figure out that it’s you, and they can do something called cookie syncing and basically link them together.”
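The linking Feamster describes can be sketched in a few lines. In this toy model, two hypothetical ad networks each hold their own cookie ID for the same browser; a sync step records that the two IDs refer to one person, so deleting one network’s cookie doesn’t erase the identity. The IDs and the handshake here are invented for illustration.

```python
# Hypothetical ad networks, each with its own cookie ID for the same browser.
network_a_ids = {"browser_1": "A-7f3c"}
network_b_ids = {"browser_1": "B-91d2"}

sync_table = {}  # records which IDs belong to the same person

def cookie_sync(browser):
    """Simulates the redirect-based handshake in which both networks
    see the same request and exchange their identifiers."""
    a_id = network_a_ids[browser]
    b_id = network_b_ids[browser]
    sync_table[a_id] = b_id
    sync_table[b_id] = a_id  # keep the mapping in both directions

cookie_sync("browser_1")

# If the user now deletes Network A's cookie, Network B still recognizes
# the browser and can hand Network A its old identifier back.
restored_a_id = sync_table[network_b_ids["browser_1"]]
```

This is why, as the quote puts it, cookie deletion is theatre: the identifier survives in a partner’s records even after the local copy is gone.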
Location tracking is another method used to extract and analyze personal information. In the example of going to Burger King for lunch, apps like Google Maps can gather enough location data to realize that the user’s workplace is around the corner from a Burger King. Previously collected data might also show that this person has visited the restaurant every Monday and Wednesday for the past month. It would be convenient, then, for an advertiser to promote a Whopper meal on this person’s Instagram page on Monday morning.
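The Burger King prediction above boils down to finding a weekly rhythm in timestamped location data. The sketch below assumes location pings have already been matched to a place name (real systems work from raw GPS coordinates); the visit data and function name are hypothetical.

```python
from collections import Counter
from datetime import datetime

# Hypothetical location pings: (timestamp, place).
visits = [
    ("2022-02-07 12:05", "Burger King"),  # Monday
    ("2022-02-09 12:10", "Burger King"),  # Wednesday
    ("2022-02-14 12:02", "Burger King"),  # Monday
    ("2022-02-16 12:08", "Burger King"),  # Wednesday
]

def habitual_days(visits, place, min_count=2):
    """Return the weekdays on which `place` was visited at least `min_count` times."""
    days = Counter(
        datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%A")
        for ts, p in visits
        if p == place
    )
    return {day for day, n in days.items() if n >= min_count}

# An advertiser could schedule a Whopper ad for the morning of each habitual day.
print(habitual_days(visits, "Burger King"))  # {'Monday', 'Wednesday'}
```

Even this trivial pattern-mining is enough to time an ad to land just before the predicted craving, which is exactly the “reducing uncertainty” Feamster describes.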
Sometimes, though, location-based tracking is an essential feature of an app. Strava, a GPS cycling and running app, allows users to share their running routes with others through the collection of GPS data points. The app would be pointless without this critical design aspect, but the feature backfired when the company released a heatmap in 2017 that visualized over 3 trillion individual data points. These data points were so thorough that they revealed the movement patterns of active-duty military personnel and made U.S. Army bases clearly identifiable.
Strava also uses anonymized GPS data to analyze cycling patterns and help city transportation departments improve street infrastructure. It’s hard to argue that improved cyclist and pedestrian safety is a bad thing, and location tracking makes these improvements possible.
Although the monetization of personal data can sometimes serve beneficial purposes, users still have no way to audit their data to ensure it isn’t being used invasively. These gray areas of surveillance capitalism only make it harder for consumers to set boundaries around their personal information. The contracts that allow data brokers to sell personal information to advertisers include no clauses about individual rights. Although these contracts directly affect an individual’s ability to think and behave independently, individual users are not parties to them and therefore receive none of their protections. This ultimately leaves individuals responsible for educating themselves about surveillance capitalism, often with no clear solution in sight.
“[It’s important to] stop putting the onus on an individual to protect their privacy,” said Garvey. “An individual person doesn’t understand what these user agreements mean; they don’t understand what it means to accept or reject a cookie. Why is it up to the user to understand all of that and make a choice? Also, what is meaningful choice? Are we giving the end user meaningful choice, or are we essentially coerced by needing to use the internet? We have to opt into certain types of tracking and data collection in order to use the internet.”
To be online is to be tracked, and the extent to which people are being surveilled by their devices is intrusive and inescapable. Users have become comfortable trading privacy and security for convenience, not because they don’t care, but because the very nature of surveillance capitalism is to be opaque yet omnipresent.
Even armed with the awareness of surveillance capitalism, there is no clear path out on an individual level. Change must come at a large scale through tech regulations and the demolition of invasive frameworks. Until then, internet users can maintain healthy skepticism and reimagine what true agency can be in a world where agency is a part of the algorithm.
Annie Dubois is a fifth-year undergraduate student with a major in professional and public writing and a minor in digital humanities. She currently works as a communications intern for the College of Arts & Letters’ Marketing Office and for the Provost Office at MSU. She enjoys reading, cooking, boxing and biking around campus in her free time.