A derp a day keeps the blackhat away.

submitted 7 months ago* (last edited 7 months ago) by throws_lemy@lemmy.nz to c/reddit@lemmy.world

AccidentalRenaissance has no active moderators due to Reddit's unprecedented API changes, and has therefore been made private to prevent vandalism.

Resignation letters:

Openminded_Skeptic - https://imgur.com/a/WwzQcac

VoltasPistol - https://imgur.com/a/lnHSM4n

We welcome you to join us in our new homes:



Thank you for all your support!

Original post from r/ModCoord

submitted 8 months ago* (last edited 8 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Looks like it works.

Edit: we still see some performance issues; this needs more troubleshooting.

Update: registrations re-opened. We encountered a bug where people could not log in, see https://github.com/LemmyNet/lemmy/issues/3422#issuecomment-1616112264. As a workaround we opened registrations.


First of all, I would like to thank the Lemmy.world team and the two admins of other servers, @stanford@discuss.as200950.com and @sunaurus@lemm.ee, for their help! We did some thorough troubleshooting to get this working!

The upgrade

The upgrade itself isn't too hard. Create a backup, and then change the image names in the docker-compose.yml and restart.
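Concretely, the version bump is just editing the image tags and restarting; a minimal docker-compose.yml sketch (image tags here are illustrative, not necessarily the exact versions we run):

```yaml
# docker-compose.yml (excerpt): bump both image tags, then `docker compose up -d`
services:
  lemmy:
    image: dessalines/lemmy:0.18.1-rc.4      # tag is illustrative
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1-rc.4   # keep UI and backend in lockstep
```

Make the database backup before touching anything, since a failed migration is much easier to roll back from a dump.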

But, like the first two tries, after a few minutes the site started getting slow until it stopped responding. Then the troubleshooting started.

The solutions

What I had noticed previously is that the lemmy container would reach around 1500% CPU usage, and above that the site got slow. That is odd, because the server has 64 threads, so 6400% should be the maximum. So we tried what @sunaurus@lemm.ee had suggested before: we created extra lemmy containers (and extra lemmy-ui containers) to spread the load, and used nginx to load-balance between them.
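The nginx side of that setup is a single upstream listing the containers; a sketch (container names and count are assumptions, 8536 is Lemmy's default port):

```nginx
# nginx.conf (excerpt): round-robin API traffic across several lemmy containers
upstream lemmy {
    server lemmy-1:8536;
    server lemmy-2:8536;
    server lemmy-3:8536;
}

server {
    listen 443 ssl;
    server_name lemmy.world;

    location / {
        proxy_pass http://lemmy;              # nginx picks the next backend per request
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```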

Et voilà. That seems to work.

Also, as suggested by him, we start the lemmy containers with the scheduler disabled, and run one extra lemmy container with the scheduler enabled that is not used to serve requests.
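In compose terms that looks roughly like the following. Note that LEMMY_DISABLE_SCHEDULED_TASKS is our shorthand for however the scheduler is toggled in your build; it is a hypothetical variable, not a documented upstream setting:

```yaml
# docker-compose.yml (sketch, assumed names): request-serving containers with
# the scheduler off, plus one dedicated scheduler container nginx never routes to
services:
  lemmy-1:
    image: dessalines/lemmy:0.18.1-rc.4
    environment:
      - LEMMY_DISABLE_SCHEDULED_TASKS=true   # hypothetical toggle, see lead-in
  lemmy-2:
    image: dessalines/lemmy:0.18.1-rc.4
    environment:
      - LEMMY_DISABLE_SCHEDULED_TASKS=true
  lemmy-scheduler:
    image: dessalines/lemmy:0.18.1-rc.4      # scheduler enabled; runs periodic tasks only
```

Splitting the scheduler out means the periodic jobs never compete with request handling for the same container's resources.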

There is still room for improvement, and probably new bugs, but we're very happy lemmy.world is now on 0.18.1-rc, which fixes a lot of bugs.

Lemmy World outages (lemmy.world)
submitted 6 months ago* (last edited 6 months ago) by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hello there!

It has been a while since our last update, but it's about time to address the elephant in the room: downtimes. Lemmy.World has been having multiple downtimes a day for quite a while now. And we want to take the time to address some of the concerns and misconceptions that have been spread in chatrooms, memes and various comments in Lemmy communities.

So let's go over some of these misconceptions together.

"Lemmy.World is too big and that is bad for the fediverse".

One thing is true: we are the biggest Lemmy instance. But we are far from the biggest in the Fediverse. If you want actual numbers, you can have a look here: https://fedidb.org/network

The Lemmy fediverse is still in its infancy, and even though we don't like to compare ourselves to Reddit, it gives you a point of reference. The total number of Lemmy users across all instances is currently 444,876, which is still nothing compared to a medium-sized subreddit. There are arguments for spreading the load of users and communities across other instances, but let us make it clear that this is not a technical problem.

And even in a decentralised system, there will always be bigger and smaller blocks within; such would be the nature of any platform looking to be shaped by its members. 

"Lemmy.World should close down registrations"

Lemmy.World is linked in a number of Reddit subreddits and in Lemmy apps. Imagine if new users landed here and had no way to sign up. We have to assume that most new users have no idea how the Fediverse works, and making them read a full page of what's what would scare a lot of them off. They probably wouldn't even take the time to read why registrations were closed; they would move on and not join the Fediverse at all. What we want to do instead is inform users before they sign up, without closing registrations. The option is already built into Lemmy but only enabled on Lemmy.ml, so a ticket was created with the development team to make it available to other instance admins. Here is the post on the Lemmy GitHub.

Which brings us to the third point:

"Lemmy.World can not handle the load, that's why the server is down all the time"

This is simply not true. There are no financial obstacles to upgrading the hardware, should that be required; but that is not the solution to this problem.

The problem is that for a couple of hours every day we are under a DDoS attack. It's a never-ending game of whack-a-mole: we close one attack vector and they start using another. Without going into too much detail and exposing too much, there are some very 'expensive' SQL queries in Lemmy, actions or features that take seconds instead of milliseconds to execute. By executing them by the thousands per minute, you can overload the database server.
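One generic layer of defence against this pattern (a sketch of a common technique, not a description of our actual counter-measures) is rate-limiting the expensive endpoints at the proxy, for example with nginx's limit_req:

```nginx
# Throttle a costly endpoint: each client IP gets 2 requests/second,
# small bursts are queued, anything beyond that gets a 429.
limit_req_zone $binary_remote_addr zone=expensive:10m rate=2r/s;

server {
    location /search {                # placeholder path; pick the expensive endpoints
        limit_req zone=expensive burst=5;
        limit_req_status 429;
        proxy_pass http://lemmy;      # assumed upstream name
    }
}
```

This caps how fast any single client can fire the heavy queries, though a distributed attacker with many IPs can still get around per-IP limits.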

So who is attacking us? One thing that is clear is that those responsible for these attacks know the ins and outs of Lemmy. They know which database requests are the most taxing, and they are always quick to find another as soon as we close one off. That's one of the few things we know for sure about our attackers. Being the biggest instance, and having defederated from a couple of instances, has made us a target.

"Why do they need another sysop who works for free"

Everyone involved with LW works as a volunteer. The money that is donated goes to operational costs only - so hardware and infrastructure. And while we understand that working as a volunteer is not for everyone, nobody is forcing anyone to do anything. As a volunteer you decide how much of your free time you are willing to spend on this project, a service that is also being provided for free.

We will leave this thread pinned locally for a while and we will try to reply to genuine questions or concerns as soon as we can.

submitted 8 months ago* (last edited 8 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Another day, another update.

More troubleshooting was done today. What did we do:

  • Yesterday evening @phiresky@lemmy.world did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs to GitHub.
  • @cetra3@lemmy.ml created a docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix.
  • We started using this image and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log from old clients trying to access the websockets (which were removed in 0.18), so we added a return 404 in the nginx conf for /api/v3/ws.
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, among which the issue with replying to DMs.
  • We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) use only 1 container, or 2) set ~~proxy_next_upstream timeout;~~ max_fails=5 in nginx.
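The two nginx changes from the list above look roughly like this (upstream names and ports are assumptions, 8536 is Lemmy's default port):

```nginx
# 1. Old 0.17 clients still poll the removed websocket endpoint; answer
#    them cheaply instead of letting the requests hit the backend.
location /api/v3/ws {
    return 404;
}

# 2. Workaround for upstreams being marked dead after a single bad response:
#    only take a backend out of rotation after 5 failures within the
#    fail_timeout window.
upstream lemmy {
    server lemmy-1:8536 max_fails=5;
    server lemmy-2:8536 max_fails=5;
}
```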

Currently we're running with one lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed we could spin up a second lemmy container using the ~~proxy_next_upstream timeout;~~ max_fails=5 workaround, but for now it seems to hold with one.

Thanks to @phiresky@lemmy.world , @cetra3@lemmy.ml , @stanford@discuss.as200950.com, @db0@lemmy.dbzer0.com , @jelloeater85@lemmy.world , @TragicNotCute@lemmy.world for their help!

And not to forget, thanks to @nutomic@lemmy.ml and @dessalines@lemmy.ml for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as bonus, an image (thanks Phiresky!) of the change in bandwidth after implementing the new Lemmy docker image with the PRs.

Edit: As soon as the US folks woke up (hi!) we turned out to need the second Lemmy container for performance. So that's now started, and I noticed the proxy_next_upstream timeout setting didn't work (or I didn't set it properly), so I used max_fails=5 for each upstream, which does actually work.

submitted 7 months ago* (last edited 7 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

While I was asleep, the site was apparently hacked. Luckily, a (big) part of the lemmy.world team is in the US, and some early birds in the EU also helped mitigate it.

As I am told, this was the issue:

  • There was a vulnerability which was exploited
  • Several people had their JWT cookies leaked, including at least one admin
  • Attackers started changing site settings and posting fake announcements, etc.

Our mitigations:

  • We removed the vulnerability
  • Deleted all comments and private messages that contained the exploit
  • Rotated JWT secret which invalidated all existing cookies
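Rotating the JWT secret can be done directly in the database; a sketch assuming Lemmy stores its signing secret in a `secret` table (table and column names are assumptions about Lemmy's schema, so check yours before running anything like this):

```sql
-- Invalidate every outstanding login token by replacing the signing secret.
-- Existing cookies were signed with the old secret, so they all stop verifying.
UPDATE secret SET jwt_secret = gen_random_uuid();
```

After a rotation like this, every user (attacker included) has to log in again.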

The vulnerability will be fixed by the Lemmy devs.

Details of the vulnerability are here

Many thanks for all that helped, and sorry for any inconvenience caused!

Update: While we believe the admins' accounts were what they were after, it could be that other users' accounts were compromised too. Your cookie could have been 'stolen', and the hacker could have had access to your account, creating posts and comments under your name and accessing or changing your settings (which show your e-mail address).

For this to apply, you would have had to be using lemmy.world at that time and to have loaded a page that had the vulnerability in it.

submitted 8 months ago by Awa@lemmy.world to c/futurama@lemmy.world

Welcome to the fediverse!

submitted 5 months ago by lwadmin@lemmy.world to c/lemmyworld@lemmy.world

Hello World!

As we've all known and talked about quite a lot, we previously blocked several piracy-focused communities. These communities, as announced, were:

In our removal announcement, we stated that we would continue to look into this in more detail, and re-allow these communities if and when we deemed it safe. It was a valid concern at the time: we were already receiving takedown requests as well as constant attacks, and we didn't want to put our volunteer team at risk. We had zero measures in place, and the tools we had were insufficient to deal with anything at scale.

Well, after some back and forth with some very cool people, and with proper measures and tooling in place to protect ourselves, we've decided it's time to welcome these communities back. Long live the IT nerds!

We know it's been a rough ride with everything, and we'd like to thank every one of you who was understanding of us and stayed with us all the way. Please know that as users, you are what makes this platform what it is, and damned we be if we ever forget it.

With love, and as always, stay safe in the high seas!

Lemmy.world Team


submitted 7 months ago* (last edited 7 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

As requested by some users: 'old' style now accessible via https://old.lemmy.world

Code can be found here: https://github.com/rystaf/mlmym , created by Ryan (Is he here? Yes, he appears to be: @nnrx@sh.itjust.works! Thanks for this awesome front-end!)
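Serving an alternative front-end like mlmym on a subdomain is just another reverse-proxy block; a sketch (container name and port are assumptions, not our actual config):

```nginx
server {
    listen 443 ssl;
    server_name old.lemmy.world;

    location / {
        proxy_pass http://mlmym:8080;    # mlmym container; port is an assumption
        proxy_set_header Host $host;
    }
}
```

The front-end then talks to the instance's normal API, so nothing on the Lemmy side needs to change.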

submitted 1 month ago by Custoslibera@lemmy.world to c/memes@lemmy.ml

It's not just lemmy that's benefiting from Elon Musk.
