Traditional civil society concerns around online freedom of expression have long centered on the tension between state-imposed restrictions and the defense of human rights. But the growing role of private service providers in shaping and enforcing regulatory mechanisms points to a deeper shift in internet governance: one in which the responsibilities of public policy are increasingly being transferred from states to private intermediaries (Hintz, 2016; DeNardis & Hackl, 2015). This shift is transforming how we understand and exercise fundamental rights in digital spaces—and demands our urgent attention.
Online speech has expanded the expressive possibilities available to citizens. Not only is there a broader spectrum of user-generated content, but it’s easier than ever to find and engage with information that matters to us (Thomson, 2012). Yet the forms of expression that flourish online don’t come with the same legal guarantees we associate with offline speech. No matter how socially relevant or publicly engaged our digital interactions may be, social media platforms remain privately owned spaces. In Yochai Benkler’s words, the internet is a “public sphere built on private infrastructure” (2006). And while some governments do own or control parts of the network, the vast majority of it is in private hands. That means our relationships with platforms are governed not by human rights frameworks, but by private law: platform owners retain the right to exclude, manage, and limit how others use their property (Thomson, 2012).
In practice, this means our online speech is no longer protected by the national or international legal standards designed to safeguard fundamental rights. Instead, it’s governed by each provider’s terms of service and content policies—rules that determine what can or can’t be said on a given platform (Leetaru, 2016). This model of governance carries echoes of the self-regulation practices used in the print media world (Tambini, Leonardi & Marsden, 2008). But in the realm of social media, users are expected to give up fundamental rights in exchange for services that are often described as “free” (MacKinnon, 2013).
Social Networks and Shopping Malls
Social media allows people to participate in public debate in ways that often feel democratic, but it does so through mechanisms owned and controlled by private actors. This shift, from public to private spaces, has been compared to the replacement of public squares and streets with shopping malls: environments where public life continues, but under commercial logic (Hintz, 2016). If social media platforms are now effectively serving the same function as public plazas —metaphorically, at least— then should they not also carry some form of responsibility?
Brian Pellot (2013) argues yes. He suggests that the companies running these platforms act as channels for public discourse and, in that role, cannot be considered purely private spaces.
A similar logic appears in U.S. Supreme Court jurisprudence. In Marsh v. Alabama (1946), the Court extended freedom of expression protections to a privately owned company town on the grounds of “functional equivalence”: the town performed the public functions of a municipality, so its owner could not exclude speech the way an ordinary property owner might. Later rulings briefly applied the same reasoning to shopping malls, arguing that they had, in practice, replaced public squares as civic gathering spaces and should therefore allow the same kinds of political and protest activity, even if privately owned. But that doctrine was subsequently reversed. The prevailing position now acknowledges that public discourse can happen in both public and private spaces, but insists that while governments are required to guarantee public forums for free expression, private actors are not held to the same standard (Chiodelli & Moroni, 2015).
Still, one more analogy is worth drawing between shopping malls and social media. In 2006, anthropologist Setha Low observed that post–9/11 New York had begun closing public spaces under the pretense of redesign and then reopening them under systems of intense surveillance. This shift, she argued, was an early sign of the privatization of public space —where surveillance practices undermine the relationship between people and the spaces they inhabit. Today, targeted surveillance has been replaced by the continuous collection and processing of data from all areas of human activity (Hintz, 2016). Social media platforms are central to this system: they gather enormous volumes of personal data and routinely share it with third parties. This includes both advertisers (as part of their profit model) and governments (in response to formal requests) (DeNardis & Hackl, 2015). More recently, this also includes training datasets for generative AI, biometric harvesting, and predictive analytics that reach far beyond traditional user tracking.
The More, the Better
Because user data collection is at the heart of social media’s business model, it’s critical for these platforms to make us feel free while we use them (Cagle, 2015). That sense of freedom facilitates what appears to be voluntary consent to the collection and monetization of our personal data (DeNardis & Hackl, 2015). We give this consent when we accept each platform’s terms of service —agreements that define what behaviors and content are allowed, and which function as the de facto code of conduct for participation in these so-called “communities” (Pellot, 2013).
The real value of these platforms isn’t in their features or technical innovations, but in their user base: how many people are on them (Thomson, 2012). As Johnson (2016) explains, platforms like Facebook operate on an “aggregational theory” of free expression, where the focus is on hosting as many voices as possible. To sustain that scale, they must convince users that their content policies can simultaneously enable expression and prevent harm. As a result, content moderation isn’t guided by legal standards, but by market pressures: platforms remove content when they fear losing a critical mass of users (Johnson, 2016).
So while these platforms may appear “neutral” in a technical sense, in reality, their administrators make judgment calls every day about what content is allowed to stay, and those decisions follow market logic. Popular, profitable speech is prioritized, while unpopular expression (even when legally protected) is more likely to be suppressed (Thomson, 2012).
This also explains the stark contrast in acceptable content across platforms: Facebook and Instagram apply stricter rules, while Reddit and 4chan tolerate —and sometimes celebrate— content that would be considered destructive elsewhere. Communities form around different expectations, and what counts as harmful in one space might be welcomed in another (Chandrasekharan et al., 2017). For users seeking an audience —and for platforms trying to retain users— it’s crucial to understand the cultural and economic logics that shape each digital environment. Since the original writing of this article (back in 2017), platform dynamics have shifted significantly —Twitter, now rebranded as X under Elon Musk, has become a volatile space, and Meta's consolidation of Facebook, Instagram, and WhatsApp continues to shape content moderation through cross-platform integration. Emerging platforms and federated alternatives, like Mastodon or Bluesky, illustrate user desire for decentralization, though scale remains a limiting factor.
Thomson (2012) refers to this dynamic as “lock-in.” Even though users technically agree to the platform’s terms and can leave whenever they want, in practice, most people lack the financial, technical, or social capital to migrate. Some platforms have become so dominant that there’s no meaningful alternative. Leaving is possible, but the inconvenience and social cost reduce a user’s visibility and impact to such a degree that staying often feels like the only viable option.
To make matters worse, content policies are often in flux, subject to pressure from users, governments, and markets (Hintz, 2016). This creates an unstable environment in which users can never be entirely sure whether their content is compliant. Companies, in turn, tend to overcorrect: afraid of backlash, they preemptively censor more content than the law requires (Pellot, 2013). This has led to notoriously conservative moderation practices —Facebook, for instance, has been accused of censoring images of fat bodies, breastfeeding, menstruation, and even breast cancer awareness campaigns (York, 2016). In addition to content removal, platforms increasingly rely on algorithmic downranking or shadowbanning —techniques that suppress reach without triggering formal censorship protocols. This tactic has been particularly visible in the suppression of politically sensitive hashtags and activist content around global protests.
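How that reach suppression works can be sketched very simply. The illustration below is a minimal, hypothetical sketch in Python: the function name, thresholds, and sample posts are all invented for the example and describe no real platform’s system. It shows the basic mechanism usually called shadowbanning or downranking, where flagged content stays online but is ranked so low that it rarely surfaces.

```python
# Hypothetical sketch of downranking/shadowbanning: flagged content is not
# removed; its reach is quietly reduced. All names, thresholds, and data here
# are invented for illustration and describe no real platform's system.

def visibility_score(engagement: float, policy_risk: float) -> float:
    """Feed-ranking score: higher risk means a quieter placement, not removal."""
    if policy_risk > 0.9:
        return 0.0                  # effectively invisible ("shadowbanned")
    if policy_risk > 0.5:
        return engagement * 0.1     # downranked: still online, rarely seen
    return engagement               # unflagged content competes normally

posts = [
    {"id": "everyday_post",   "engagement": 120.0, "risk": 0.2},
    {"id": "protest_hashtag", "engagement": 300.0, "risk": 0.7},   # flagged as "sensitive"
    {"id": "reported_post",   "engagement": 80.0,  "risk": 0.95},
]

feed = sorted(posts, key=lambda p: visibility_score(p["engagement"], p["risk"]), reverse=True)
print([p["id"] for p in feed])
# ['everyday_post', 'protest_hashtag', 'reported_post']:
# the most engaging post no longer leads the feed, and no takedown notice is ever issued.
```

The point of the sketch is that suppression happens inside the ranking step, so there is nothing for the user to appeal: no content was formally removed.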
Corporate Social Responsibility
While it’s clear that a company’s primary responsibility is to its shareholders (Pellot, 2013), some scholars —like Thomson (2012) and Johnson (2016)— frame the moderation decisions made by social media intermediaries through the lens of corporate social responsibility (CSR).
CSR, in this context, refers to a company’s attempt —often led by its public relations team— to at least give the appearance that:
1. It is aware of the social consequences of its business practices, and
2. It is taking active steps to minimize harm and maximize benefit (Johnson, 2016).
Platform intermediaries responsible for moderating user-generated content need to be able to point to their pre-established policies when justifying moderation decisions. These policies are designed to “reinforce the platform’s positive attributes while removing the negative ones.” In this sense, their ability to maintain homeostasis within the network —curating and managing the information environment— is a central part of the value they provide to users.
Yoo (2009) argues that because users can’t possibly process the sheer volume of information online, the platform’s real value lies in how it selects and presents that content. This makes its role more than just technical: it becomes editorial, not unlike a traditional media outlet.
It’s true that today’s platforms function like walled gardens, carefully curated to keep users within their borders and discourage wandering into the chaotic “jungle” of the open internet (Leetaru, 2016). But comparing them too closely to traditional media can be misleading. On social media, users also shape what they see, and while that might seem like a positive feature, it often reinforces echo chambers and preexisting biases (Thomson, 2012). These gardens now exist within the broader conversation around the “splinternet” (The Economist, 2016) and regulatory efforts to mandate interoperability among dominant platforms, especially under policies like the EU’s Digital Markets Act.
Most platforms tailor content feeds based on user preferences and behavior, even without the user’s direct input. The result is a form of content manipulation that profoundly affects what users are exposed to (Hintz, 2016). This curation influences how we receive news, associate information, access knowledge, and engage in public debate: functions that are central to the health of democracy (DeNardis & Hackl, 2015).
In recent years, content governance has also become increasingly automated. AI-driven moderation systems —opaque in both training data and decision-making processes— are now responsible for a growing share of takedowns, visibility scoring, and content ranking. These systems introduce their own sets of bias and opacity, while further distancing accountability from users.
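As a rough illustration of that distance, the sketch below is again hypothetical: the classifier, thresholds, and labels are assumptions made for the example, not any platform’s documented pipeline. It shows how an opaque model score can be mapped directly to a takedown, a downranking, or no action, while the criteria behind the score remain invisible to the person affected.

```python
# Hypothetical sketch of automated moderation: an opaque classifier score is
# mapped straight to an enforcement action. The model, thresholds, and labels
# are assumptions for illustration, not any platform's documented pipeline.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationDecision:
    action: str    # "remove", "downrank", or "keep"
    score: float   # the model's confidence that the post violates policy

def moderate(post_text: str, classifier: Callable[[str], float]) -> ModerationDecision:
    score = classifier(post_text)   # opaque step: training data and criteria are not disclosed
    if score >= 0.85:
        return ModerationDecision("remove", score)
    if score >= 0.50:
        return ModerationDecision("downrank", score)
    return ModerationDecision("keep", score)

# Stand-in "classifier": a crude keyword heuristic in place of a real model.
toy_classifier = lambda text: 0.9 if "protest" in text.lower() else 0.1

print(moderate("Join the protest downtown at noon", toy_classifier))
# ModerationDecision(action='remove', score=0.9): the user sees the outcome, never the reasoning.
```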
Self-regulation
The U.S. model tends to view self-regulation as the safer alternative —mainly because it keeps the power to moderate content out of the hands of the state (Tambini, Leonardi & Marsden, 2008). Yoo (2009) argues that it’s better to let audiences choose between different platforms and moderation styles than to hand control over speech to government regulators, which could pose greater risks to free expression.
But as we’ve already seen, that choice isn’t truly free. Users do not enter into platform agreements on equal footing. Their ability to challenge or appeal moderation decisions is, at best, extremely limited (Leetaru, 2016).
So what becomes clear is this: Self-regulation, by itself, isn’t just insufficient —it’s dangerous. Letting the “invisible hand” of the market determine which ideas are permitted is deeply problematic, particularly for human rights and for democracy itself, which requires the broadest possible range of expression.
While it may sound ideal to demand that platforms protect user expression, the current legal frameworks make such protection largely meaningless (Thomson, 2012). In the past, civil society campaigns and public petitions have succeeded in pressuring platforms to change their content policies (Hintz, 2016). But let’s be honest: this, too, is a form of market pressure. Popular outrage can shape policy. Unpopular voices, however —especially those with little reach or influence— rarely get the traction needed to spark change.
That’s why some scholars now advocate for a co-regulation model, in which:
- Platform decisions would be reviewable by courts
- Terms of service would be measured against substantive human rights standards, not just the fine print of a private contract (Tambini, Leonardi & Marsden, 2008)
At the same time, we —the users— might want to reflect on the negotiation we’ve entered into. We’ve given up significant amounts of personal data and individual autonomy in exchange for the services these platforms provide. We’ve done this in pursuit of community, of connection, of relevance.
It’s true that media access has long been shaped by private control (Thomson, 2012). And the internet, in theory, was meant to change that —to bypass the gatekeepers and put power directly into people’s hands. But even now, our fundamental freedoms are still governed by private actors. And in the years to come, how we relate to the information we produce and consume will play a defining role, not only in how we see ourselves and shape our lives, but in how we act as citizens, together.
Bibliography
Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. Yale University Press.
Cagle, S. (2015). “No, you don’t have free speech online.” Pacific Standard, June 10, 2015. https://psmag.com/environment/chuck-johnson-is-a-massive-baby-who-doesnt-know-how-to-read-service-agreements
Chandrasekharan, E., Pavalanathan, U., Srinivasan, A., Glynn, A., Eisenstein, J., & Gilbert, E. (2017). You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech. Proc. ACM Hum.-Comput. Interact., 1(2), Article 31 (November 2017), 22 pages.
Chiodelli, F., & Moroni, S. (2015). Do malls contribute to the privatisation of public space and the erosion of the public sphere? Reconsidering the role of shopping centres. City, Culture and Society, 6(1), 35-42.
DeNardis, L., & Hackl, A. M. (2015). Internet governance by social media platforms. Telecommunications Policy, 39(9), 761-770.
The Economist (2016). “Lost in the splinternet.” November 5, 2016. https://www.economist.com/news/international/21709531-left-unchecked-growing-maze-barriers-internet-will-damage-economies-and
Hintz, A. (2016). Restricting digital sites of dissent: commercial social media and free expression. Critical Discourse Studies, 13(3), 325-340.
Johnson, B. J. (2016). Facebook’s Free Speech Balancing Act: Corporate Social Responsibility and Norms of Online Discourse. U. Balt. J. Media L. & Ethics, 5, 19.
Leetaru, K. (2016). Has Social Media Killed Free Speech? Forbes, October 31, 2016. https://www.forbes.com/sites/kalevleetaru/2016/10/31/has-social-media-killed-free-speech/#227d2ff146b1
Low, S. M. (2006). The erosion of public space and the public realm: paranoia, surveillance and privatization in New York City. City & Society, 18(1), 43-49.
MacKinnon, R. (2013). Consent of the networked: The worldwide struggle for Internet freedom. Basic Books.
Pellot, B. (2013). Private lives, public space. openDemocracy, March 13, 2013. https://www.opendemocracy.net/brian-pellot/private-lives-public-space
Tambini, D., Leonardi, D., & Marsden, C. (2008). The privatisation of censorship: self regulation and freedom of expression. In Tambini, D., Leonardi, D., & Marsden, C., Codifying cyberspace: communications self-regulation in the age of internet convergence (pp. 269-289). Routledge / UCL Press, Abingdon, UK. ISBN 9781844721443.
Thomson, S. (2012). Protecting Legitimate Speech Online: Does the Net work? LLB (Hons) Dissertation, University of Otago.
Yoo, C. S. (2009). Free Speech and the Myth of the Internet as an Unintermediated Experience. Geo. Wash. L. Rev., 78, 697.
York, J. (2016). A complete guide to all the things Facebook censors hate most. Quartz, June 29, 2016. https://qz.com/719905/a-complete-guide-to-all-the-things-facebook-censors-hate-most/