this post was submitted on 26 Jul 2024
160 points (94.0% liked)

Selfhosted


I see so many posts and people who run NGINX as their reverse proxy. Why though? There's HAProxy and Apache, with Caddy being a simpler option.

If you're starting from scratch, why did you pick/are you picking NGINX over the others?

you are viewing a single comment's thread
[–] db0@lemmy.dbzer0.com 7 points 3 months ago (1 children)

Not sure why you say HAProxy can't serve Python. I do it all the time. You just use something like Python's waitress and then point HAProxy at its port.
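
A minimal sketch of that setup (waitress is the real package, but the app, port, and backend names below are illustrative assumptions, not taken from the comment):

```python
# app.py - a bare WSGI app served directly by waitress (pip install waitress).
from waitress import serve

def app(environ, start_response):
    # Trivial WSGI callable; a real app would be a Flask/Django/etc. instance.
    body = b"hello from waitress\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    # Bind to a local port; HAProxy then forwards traffic to this port.
    serve(app, host="127.0.0.1", port=8080)
```

On the HAProxy side that's just a backend with a line like `server app1 127.0.0.1:8080 check` (again, the name and port are only examples).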

[–] Max_P@lemmy.max-p.me 5 points 3 months ago (1 children)

It depends on what you use on the Python side. Classically that would have been uWSGI or one of the *SGI interfaces, and lately ASGI.

Sure, one can totally make Python apps that serve HTTP directly. The same can be done with PHP (and Ruby and others) as well, but most people still run their PHP through PHP-FPM over FastCGI because you can offload a lot of the work to the much faster NGINX side. A fair number of apps make use of X-Accel-Redirect to serve private files, so you don't tie up a PHP worker for an hour serving the user's 2GB file.
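
The comment describes the PHP case, but the same offloading trick works from a Python worker too. A rough sketch, assuming NGINX has an `internal` location mapped to the private files (the location name, file path, and auth check below are made up for illustration):

```python
# WSGI sketch of the X-Accel-Redirect pattern. Assumed NGINX config:
#
#   location /protected/ {
#       internal;                  # only reachable via X-Accel-Redirect
#       alias /srv/private-files/;
#   }
#
# The Python worker only decides whether the download is allowed; NGINX then
# streams the file itself, so the worker is freed immediately.

def user_may_download(environ):
    # Placeholder check; a real app would validate a session or token here.
    return environ.get("HTTP_X_API_KEY") == "secret"

def app(environ, start_response):
    if not user_may_download(environ):
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"forbidden\n"]
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        # NGINX intercepts this header and serves the file from the internal location.
        ("X-Accel-Redirect", "/protected/big-file.bin"),
    ])
    return [b""]
```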

But yes, as those languages all move to async computing and away from worker pools, it's more common to see them serve HTTP directly, and there's less and less need for a proxy that supports those other protocols. The async event loop is what made NGINX special when it came out, so naturally languages that move to that model greatly reduce the need for it as well: they too can easily handle thousands of concurrent connections without a problem. Plus, these days people slap a CDN in front anyway, so static file performance doesn't matter quite as much.
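
For a concrete picture of that async model, here's a bare ASGI app (no framework; any ASGI server such as uvicorn or hypercorn could run it, e.g. `uvicorn app:app` if the file is named `app.py` — the names are just examples). A single event loop multiplexes all connections instead of tying up one worker per request:

```python
# Minimal ASGI application: one async callable handles every HTTP request
# on the event loop, so thousands of idle or slow connections stay cheap.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"hello from the event loop\n",
    })
```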

[–] db0@lemmy.dbzer0.com 5 points 3 months ago

Yeah, pretty much. I was just quite astounded at that statement, as the AI Horde is basically just a lot of Python processes behind a very low-powered HAProxy server.

Personally, I understand people like to stay with the familiar, which is perfectly fine for a non-demanding service, but when something becomes demanding, I find HAProxy's specialization serves it better. I wish the default Lemmy deployment used HAProxy myself.