The right platforms for influence?
To Substack or not to Substack is a frequent question. If you do not have a huge following and the resources to manage a newsletter full-time, the answer is clear.
I was inspired to write this post by an interaction I witnessed on BlueSky between Colette Delawalla (Founder and Executive Director of Stand Up for Science) and others. Many people voiced concern about Colette's decision to start a Substack blog (Science Fight Club, which I subscribe to) at the beginning of 2026. While I share many of the concerns about content moderation on Substack, I struggle with this critique. There is a distinct difference between someone trying to build a movement from scratch, who benefits from large platform infrastructure to reach people, and established voices with large readerships and teams who have the resources to thrive without mainstream apps. For many newer creators, Substack is an appropriate choice to maximize reach and leverage existing infrastructure to grow an audience. Criticism about ethics in media outlets is welcome, but the choice to distribute a newsletter on Substack should not be shocking in 2026. Section 230 (the provision that shields platforms from liability for the speech posted on them) should be the main target of criticism. Under current legislation, platforms are not responsible for the speech on them, so companies are not incentivized to moderate their platforms. Instead, they are interested in reaching as many people as possible to drive engagement, subscriptions, and payments.
A sentiment I find admirable right now: not everyone has to be a Substack subscriber or reader, but the platform is clearly a communication tool that can matter.
As Colette noted in the BlueSky post below, she had previously launched the newsletter on a platform without an integrated network and gained fewer than 100 subscribers over the course of five months. At the time of writing, Science Fight Club has 298 subscribers and sits at #4 in "Rising in Science" after being live for three days. That is impressive growth, and in my view the money going to Substack's bottom line is inconsequential compared to the positive good Colette could achieve through the platform. To put the dollars going to Substack in perspective, say 50 of the current subscribers (a conservative estimate) pay the $5 monthly subscription and Substack takes a 10% fee. That is $250 of gross monthly revenue, of which $25 goes to Substack.
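To make that back-of-the-envelope split explicit, here is a minimal sketch of the arithmetic. The function name is mine, and the subscriber count, price, and 10% fee are the illustrative assumptions from the paragraph above; this also ignores any payment-processor fees that would come out of the creator's share.

```python
def revenue_split(paid_subscribers: int, monthly_price: float, platform_fee_rate: float):
    """Return (gross, platform_cut, creator_net) in dollars per month."""
    gross = paid_subscribers * monthly_price
    platform_cut = gross * platform_fee_rate  # what the platform keeps
    return gross, platform_cut, gross - platform_cut

# Assumed numbers from the text: 50 payers, $5/month, 10% platform fee.
gross, cut, net = revenue_split(paid_subscribers=50, monthly_price=5.00, platform_fee_rate=0.10)
print(f"Gross: ${gross:.2f}, Substack: ${cut:.2f}, Creator: ${net:.2f}")
# Gross: $250.00, Substack: $25.00, Creator: $225.00
```

The point of the exercise is scale: even under generous assumptions, the platform's cut is tens of dollars a month, small relative to the reach the creator gains.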
There are three actions I see for people who are considering supporting someone who chooses to write on Substack but are concerned about the platform's moderation. 1) You can subscribe for free and get posts delivered directly to your inbox (and never have to deal with push notifications from the app or targeted emails). 2) You can subscribe and pay the creator directly for their content, so that perhaps they gain enough resources to leave the platform and/or have more impact beyond an email newsletter (this is one way to pledge your support, though as many people note, there is no way for that money not to transact through the platform). 3) Finally, you can choose not to support the mission at all. It is within everyone's rights to withhold support based on platform. As scientists, our primary jobs are not communication, and we have limited time and energy to disseminate our work. Because of that, our choices for where we spread our work matter.
I do not use Twitter/X to promote my science. In my view, sharing science on that platform is counterproductive for both authors and audiences. More frightening still, X's in-house AI product (Grok) was explicitly developed to mirror a specific ideology that I disagree with, and it had a newsworthy moment when it called itself "MechaHitler". Since AI products respond based on their training data and an array of curated instructions, this is a clear warning signal about the desired culture on that platform. Now take the example of Substack posts and publications centering Nazism and other forms of dehumanizing hate speech. People believing that any group is a social underclass that should be killed is terrible. People feeling comfortable enough with these views to build a monetizable following around them is also terrible. A society that allows this is problematic, and I vehemently disagree with the views espoused in those posts and newsletters. I raise these two points (Substack vs. X) because they center the same terrible speech pattern. In the case of Grok, xAI is training the model to speak in a way that is 'anti-woke', embedding a particular ideology into how the algorithm interprets and presents information. The promotion of views whose central message is hate speech and dehumanization is sad and harmful.
In both examples, the central difference is intent. One group specifically created a piece of technology that consistently demonstrates a problematic ideology. The other promoted harmful content on its app and doubled down on a stance against content moderation (an action protected by Section 230). The corrective actions behind each intent differ in my view. With Grok, the ideology is not a byproduct of lax moderation but an intentional design choice: xAI leadership has explicitly trained the model to be 'anti-woke', making the AI itself a vehicle for a particular worldview. This is fundamentally different from Substack, where problematic content exists because the platform allows broad speech (even speech that I view as hate speech) but does not actively engineer that content into its core product. The chatbot Grok is integral to X's user experience in a way that a newsletter you never subscribe to simply is not. The risk that your audience is exposed to harm is also dramatically different. On Twitter/X (as well as BlueSky, Threads, etc.) you can read hundreds of posts in a short period of time, and curating your feed is far more difficult, which is why I think a company's stance on content moderation matters even more there. Whenever I sign up for a new app, I unsubscribe from all promotional emails and push notifications because every app is trying its hardest to command more of my attention. Turning off notifications and only seeking out creators you respect (ideally outside the platform) is far simpler to self-moderate than algorithm-based social media optimized for engagement (and enragement).
I am on Substack because people I respect use it as their platform of choice for disseminating their newsletters. I subscribe to several creators because they offer subscriber-only content I am happy to pay for (namely Hops With Pop, because I am a die-hard Chargers fan and I trust his opinions on the team given the level of access he has as The Athletic's Chargers reporter).
I struggle a bit with science-focused subscriber-only content on Substack. If you are offering a course or access to something novel, then selling it on this platform may make sense if that is your best means of distribution. In that case, I could understand harsh criticism if your mission is genuinely compromised by the Substack team's stance against moderation. If giving your personal dollar to Substack is a moral quandary, not pledging money to the platform is a good idea. Perhaps you subscribe only to the people you wish to see and open their emails in your inbox instead of on the Substack app itself (while unsubscribing from all the marketing emails and push notifications). Finally, perhaps someone's choice to post their newsletter on a platform that has allowed bad ideas to persist makes you walk away from an organization entirely. That is also within your rights, and I respect your choice even though it is not my personal path. Still, Section 230 is your main adversary here, not an individual organization or creator. I would suggest spending more energy on structural change by criticizing Section 230 instead of criticizing scientists trying to ensure their field survives.
I am of the view that deplatforming individuals in the current internet landscape does not take away their microphone but rather emboldens them on less visible platforms. Research supports this concern: after Parler was deplatformed following the January 6th Capitol attack, a study found that overall activity on fringe social media did not decrease as individuals migrated to Gab, Rumble, and Telegram. Similarly, during the height of the COVID-19 pandemic, deplatforming conspiracy theorists from Facebook had limited long-term impact. The barrier to entry for secondary accounts is low, and bad actors can diversify across alternative platforms where counter-narratives become even harder to deliver. Walking away from good people doing good work because they are on Substack does hurt those creators, and maybe they will learn from it. At the same time, the people in our opposition continue to grow in power and capitalize on an engagement-based media environment. Again, I think there are levels to harmful platforms. Just this week, Grok was caught allowing users to digitally undress women and minors in photos posted on the platform through an image-editing feature that displayed the altered images right under the original post. xAI admitted to 'lapses in safeguards' as governments in India and France opened investigations.
As a population health scientist, I witnessed firsthand the unraveling of our public health infrastructure. I believe one mistake of that time was the attempt to deplatform ideas (even though those ideas are inherently dangerous, and often explicitly disproven by empirical research). Those ideas need to be combatted with better communication strategies. Deplatforming simply makes identifying and dismantling antagonists harder, and fringe ideas can become even more enticing to others. Open up TikTok, Instagram, or YouTube and you are bombarded with ads and misinformation. This can happen by seeking out misinformation, or even by leaving a video on auto-play. I believe Substack subscribers are technologically savvy enough to work with an unfamiliar platform and manage the road bumps that come with doing their own content moderation. As people gain widespread notoriety for their work, moving away from Substack likely becomes both more financially lucrative and gives them more control. In the interim, growing an audience and reaching more people who can easily support a mission is imperative.
My own blog is free because I do not want to take any money for a personal writing project. That gives me a bit of freedom in the content I produce and ensures it stays a hobby (for now).



