Ben Bartosik

May 8, 2026

“The companies insisted on their right to use facial-recognition systems to identify a stranger on the street without first obtaining the individual’s consent. As one lobbyist in the talks told the press, ‘Everyone has the right to take photographs in public… if someone wants to apply facial recognition, should they really need to get consent in advance?’”

April 25, 2026

Reading an interesting section in The Age of Surveillance Capitalism (Zuboff) on Google's cycle of dispossession, which is essentially how the company normalizes taking away our privacy choices in every space it enters. The section focuses on Google Street View and how Google began with a celebration of public space while simultaneously arguing that our expectation of privacy is different in those spaces.

This is a tricky bit of work they're doing here; essentially trying to equate the eyes on the street notion of public life with the use of surveillance tech. But these are very different things, especially in the hands of a company like Google. Jacobs' eyes on the street is about establishing a shared sense of ownership for public space. When a place is used by people, it creates a sort of social safety net in which strangers are tied together by an unspoken common goal. They are the eyes of the collective stranger—the community—whose gaze is not salacious but watchful.

The use of surveillance tech in the public realm is something that has bothered me for a while. It's driven not by a shared responsibility for protecting the common good but by the individual desire to protect private interests. One builds communal trust; the other tears it down.

Google's attempt to claim access to the public realm is similarly compelled by its private interests. These eyes on the street are not watchful, nor are they communal. They are the exploitative gaze of a company that harvests our behaviours for its own benefit, indifferent to what makes a shared space good.

April 14, 2026

“In contrast, Google’s inventions destroyed the reciprocities of its original social contract with users… Instead of deepening the unity of supply and demand with its populations, Google chose to reinvent its business around the burgeoning demand of advertisers eager to squeeze and scrape online behaviour by any means available in the competition for market advantage. In the new operation, users were no longer ends in themselves but rather became the means to others’ ends.” (Zuboff, Surveillance Capitalism)

Zuboff spends a lot of time in the first few chapters of the book looking at Google as the pioneer of surveillance capitalism. One of the more interesting parts is the way the founders of Google initially rejected advertising. They were focused on building the best search technology, and behavioural data was only used to make Search itself better—an exchange users were willing to make. However, the dot-com crash put pressure on them from their investors to figure out how to become profitable, and fast. So they pivoted and discovered how valuable that behavioural data was when used in ways that did not benefit users. In four years they went from making no profit to $3.2 billion. By 2017 they were one of the top two companies in the world.

Again, Zuboff's point here is that the use of the technology in this way was not inevitable. It was the result of deliberate choices made by specific people at specific times.

April 5, 2026

Started reading Shoshana Zuboff's book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, this week. It's a topic I've been interested in for a while and saw this recommended recently.

Surveillance capitalism is essentially the way companies mine our data in order to predict, guide, and exploit our behaviours for profit. As Zuboff notes, it is a practice so commonplace now that we barely even think about it. It's also a practice that has increasingly become normalized in our offline spaces as well (the way we track our health and exercise data, for example).

Zuboff's early argument—and one I agree with—is that we effectively have no choice in this. It is everywhere and a part of everything. Its design is to feel both normal and invisible, so as not to draw our attention. And we willingly surrender to it in exchange for various conveniences and securities. As Zuboff puts it, "its normalization leaves us singing in our chains."

One key point that Zuboff makes that I want to highlight here is that many of these exploitative practices hide behind the argument of inevitability. Their creators and enablers want us to believe that they are the inevitable outcome of the technology, that this is simply the price we pay for modern society. However, these practices are far from inevitable; they are instead "meticulously calculated and lavishly funded."

Resisting them begins with resisting the claim of inevitability. And understanding that we have the power to design a better way.

April 4, 2026

"Surveillance capitalism runs contrary to the early digital dream... Instead, it strips away the illusion that the networked form has some kind of indigenous moral content, that being ‘connected’ is somehow intrinsically pro-social, innately inclusive, or naturally tending toward the democratization of knowledge. Digital connection is now a means to others’ commercial ends. At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labour, but with an unexpected turn. Instead of labour, surveillance capitalism feeds on every aspect of every human’s experience.” (Shoshana Zuboff, The Age of Surveillance Capitalism)
