At Codeberg, we want to join the Fediverse, the ActivityPub-powered world that connects Mastodon, Pixelfed, flohmarkt and so many other cool social networks and tools.

Forgejo, the software that powers Codeberg, is actively working on making this possible. In our dreams, this will soon allow users on one server to collaborate with developers on another, without any central parties involved. You could receive feedback and communicate with people through their microblogging accounts. How cool is that?!

Our dreams were disrupted by an enormous spam wave that recently hit the Fediverse. The open registrations of many small instances were abused to send spam to the rest of the network.1

Naturally, as admins who fight spam, malware and abuse on a daily basis, we were curious and started thinking ahead: What if this spam reaches us in the future? How will we react?

I want to outline my perspective and takeaways from the situation. In comparisons, I'll often use Mastodon rather than other Fediverse software: it is the tool where I have seen how moderation works in practice, and I suspect the situation is similar on most other major networks.

The caveat of federated moderation

Federation has many advantages, and is probably the reason why E-Mail is not dead yet. But it comes with costs and challenges. For example, it requires duplicating data across several servers, and keeping it in sync.

When you learn that your E-Mail account was hacked and sent spam to hundreds of recipients, there is no way to undo that action. The message is duplicated and so is the effort. If one user marks the message as spam, it is not removed from other people's inboxes.

Mastodon does a little better. Reporting a message as spam can optionally inform the remote instance of the abuse, allowing it to remove the spam at the source. However, besides depending on these changes propagating successfully (I have seen cases where I reported spam that had already been removed on the remote instance), this requires an actively maintained and cooperative remote instance. I'll come to the potential of malicious actors later.
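For the curious: on the wire, such a forwarded report is an ActivityPub "Flag" activity delivered to the remote instance's inbox. The sketch below builds one in Python; the URLs and the helper function are invented for illustration, and real implementations attach more fields (such as the reported account).

```python
# Sketch of a federated abuse report: ActivityPub moderation reports are
# expressed as an Activity Streams "Flag" activity sent to the origin
# server. All instance names and post URLs below are made up.
import json

def build_flag_activity(reporter, spam_post, reason):
    """Build a minimal Flag activity reporting spam_post to its origin server."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Flag",
        "actor": reporter,    # typically the reporting instance's actor
        "object": spam_post,  # the offending post (and/or its author)
        "content": reason,    # free-text note for the remote moderators
    }

activity = build_flag_activity(
    reporter="https://example.social/actor",
    spam_post="https://spammy.example/posts/123",
    reason="Unsolicited advertising, reported by a local user",
)
print(json.dumps(activity, indent=2))
```

Whether the remote admins act on such a report is entirely up to them, which is exactly the limitation described above.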

If the remote instance is not willing to participate or does not respond in time, every affected instance needs to remove the spam on their own.

In the Fediverse, a pragmatic approach has emerged: when an instance is not properly moderated, it is simply blocked as a whole, or "defederated". And lists of known unmoderated instances are shared between admins.

Back to E-Mail: there are blocklists in a similar spirit, but they have led to monopolies and injustice. E-Mail was not designed with abuse prevention in mind, so the solutions that emerged there might not carry over directly. Still, we should find a better way than blocklists. And there are voices in the Fediverse complaining about the current practice, too.

Kinds of threats

People are only productive in safe environments. Most people cannot "just ignore" spam and abuse. If your inbox is flooded with annoying E-Mails, your motivation to read them at all decreases rapidly. If your social media feeds or chat rooms are filled with garbage, you don't enjoy interacting with the communities. The same pattern applies to software development: how could you build awesome content when distracted by all kinds of abuse? It is a serious threat to your productivity!

Besides spam, you can of course also feel uncomfortable due to disgusting content. At Codeberg, we regularly remove sexist, racist or otherwise inappropriate content to protect our community. And we will need to ensure there is no unhealthy influx of such content from external sources.

More dangers are waiting: code hosting platforms are among the top platforms for spreading malware. Our current strategy seems to work out: we try to remove malware as soon as possible, often after it has been downloaded only a few times, and this seems to quickly turn malicious actors away from our platform. We see that platforms which are not actively maintained are full of bad code, and federating with them would pose a high risk, or require us to do the moderation work for the remote instances, too.

Last but not least, there is plain and simple storage abuse. Sometimes with bad intent, sometimes just out of ignorance, people push gigabytes of junk data, their encrypted home backups, copies of proprietary software and much more. Depending on how forking and caching are done in a federated forge future, this can cause unexpected costs or problems (like a full disk on your home server), and we must find a way to protect against it.

What a healthy federated network can look like

To me, all of this shapes an idea of how federated systems should be designed. The key points:

Monopolies must be kept under control. Large instances hold great power and responsibility. If a large provider decides to cut the connection, it is the smaller party that gets the blame. This has been the case with E-Mail, where all small providers need to comply with the rules of Google, Microsoft and the like. Users expect third parties to successfully deliver messages into their inbox and do not question whether their provider plays a fair role in the game. We believe that community-managed structures like Codeberg also play an important role here: the decision is in the hands of the public and does not depend on the arbitrary choices of a company or small admin team.

Moderation must federate well. Instead of relying on only the two sides of a connection to process reports, it might be beneficial if abuse reports were also shared with other instances. An instance could automatically hide posts that have been blocked by numerous remote servers. This prevents malicious instances from generating duplicate work on every receiving server.
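Such threshold-based sharing could look roughly like this. The sketch below is my own illustration, not existing Forgejo code; the threshold value, class name and instance names are all made up.

```python
# Sketch of federated moderation sharing: a post is hidden locally once
# enough distinct remote instances have reported it. The threshold and
# data structures are illustrative assumptions, not a real implementation.
from collections import defaultdict

HIDE_THRESHOLD = 3  # hypothetical: hide after reports from 3 distinct instances

class SharedReportTracker:
    def __init__(self, threshold=HIDE_THRESHOLD):
        self.threshold = threshold
        self.reports = defaultdict(set)  # post URL -> set of reporting instances
        self.hidden = set()              # post URLs hidden locally

    def record_report(self, post_url, reporting_instance):
        """Record a federated report; auto-hide once the threshold is met."""
        self.reports[post_url].add(reporting_instance)
        if len(self.reports[post_url]) >= self.threshold:
            self.hidden.add(post_url)
        return post_url in self.hidden

tracker = SharedReportTracker()
tracker.record_report("https://spammy.example/posts/1", "a.example")
tracker.record_report("https://spammy.example/posts/1", "b.example")
hidden = tracker.record_report("https://spammy.example/posts/1", "c.example")
```

Counting distinct instances (a set, not a plain counter) matters here: otherwise a single malicious server could trip the threshold by spamming duplicate reports.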

Responsible maintenance. It is good that anyone can self-host the software. However, putting something up on the Internet comes with responsibilities. Unmaintained systems can cause harm: unpatched software or insecure configurations can be the entry point for spam, or can be combined into botnets with huge impact. Please think twice before offering a service with open registrations. If you want, we recommend forming a group of like-minded admins and maintaining an instance together. This is more efficient – and more fun!

Build a network of trust. I believe that downloading blocklists from third parties is not the ultimate answer, but knowledge sharing is. You almost always have some remote instances you trust. I imagine Codeberg trusting instances maintained by team members, Forgejo developers and other like-minded communities (e.g. disroot). And we trust their trusted instances, too. This repeats up to a certain configurable threshold. Everyone else needs to be approved by moderators first, before the spam can reach our users. This keeps a balance between protection and manual effort.
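Mechanically, this is a graph walk: start from the instances you trust directly and follow their published trust lists, stopping after a configurable number of hops. A minimal sketch, assuming each instance exposes its trust list (the graph and instance names below are invented):

```python
# Sketch of the "network of trust": breadth-first walk over trust lists,
# bounded by a configurable depth. Instances inside the resulting set
# federate freely; everyone else waits for moderator approval.
from collections import deque

def trusted_instances(trust_graph, roots, max_depth):
    """Return all instances reachable from roots within max_depth trust hops."""
    trusted = set(roots)
    queue = deque((root, 0) for root in roots)
    while queue:
        instance, depth = queue.popleft()
        if depth >= max_depth:
            continue  # do not follow trust lists beyond the threshold
        for peer in trust_graph.get(instance, ()):
            if peer not in trusted:
                trusted.add(peer)
                queue.append((peer, depth + 1))
    return trusted

# Hypothetical trust lists; only the structure matters here.
graph = {
    "codeberg.org": ["disroot.org", "forgejo.example"],
    "disroot.org": ["friendly.example"],
    "friendly.example": ["distant.example"],
}
# With a depth threshold of 2, "distant.example" (3 hops away) stays untrusted.
reachable = trusted_instances(graph, {"codeberg.org"}, max_depth=2)
```

The depth threshold is the knob described above: raising it trades manual approval work for a larger, less tightly vetted federation circle.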

Next steps

The future is uncertain, but we do not fear it. It is now easier than ever to collaborate with Forgejo on moderation tooling, in part thanks to the decision for a hard fork. If you run your own instance, we are looking for your feedback. Extending the admin tooling is my personal focus for the near future, and it is the basis for successful federated moderation, too.

If you are interested in helping on any front in Forgejo, please get in touch. Your help is more than welcome. Let's shape the future of software development together!


  1. You can read more about the background in this article, including the involvement of Discord in the matter.