It was an odd choice for Fedora to default to reboots for system updates. I can dnf update to avoid it, but I keep forgetting.
I think they're preparing everyone for immutable installations, but they're a long way off from that.
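For reference, a rough sketch of the two paths (assuming the offline-upgrade plugin from dnf-plugins-core is installed; the GUI updater uses the same systemd offline-update mechanism):

```
# live update, applied to the running system, no forced reboot
sudo dnf upgrade

# offline update: download now, apply in a minimal environment during the next boot
sudo dnf offline-upgrade download
sudo dnf offline-upgrade reboot
```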
When the kernel is updated, a reboot is necessary to load the new version; improvements and security fixes don't take effect until then. Services and daemons likewise need a restart to pick up the changes. And when libraries are updated (OpenSSL or GnuTLS, for example), applications can end up running against the wrong version.
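You can see this for yourself after a live update; roughly, assuming the needs-restarting plugin from dnf-plugins-core:

```
# does the update (kernel, glibc, systemd, ...) warrant a full reboot?
sudo dnf needs-restarting -r

# which running services are still using files that the update replaced?
sudo dnf needs-restarting -s
```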
I said system updates, not kernel updates.
Not really.
There is a good reason Windows does it: to guarantee the running state of the system, and to ensure everything runs with the components and versions it was designed to use.
No. It's because Windows read-locks everything.
In Linux we have post-install scripts to ensure relevant stuff gets restarted as long as it was installed properly. (The improperly installed shit can go fuck itself)
The only time you need to reboot is when you've upgraded your kernel without kpatch/ksplice, or you've glanced at dbus a little sideways.
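For what it's worth, the restart logic in those scripts usually boils down to something like this (a simplified sketch, not any particular package's scriptlet; example.service is a placeholder):

```
# simplified post-upgrade scriptlet: reload unit files, then bounce the affected service
if [ -d /run/systemd/system ]; then
    systemctl daemon-reload >/dev/null 2>&1 || :
    systemctl try-restart example.service >/dev/null 2>&1 || :
fi
```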
sigh
Post-install scripts don't fix 100% of the issue, and dynamic lazy linking is a real thing.
The read-lock thing really isn't the main issue here; everyone, including Windows, has a way to do post-installation steps, and has a service manager.
As an example, a few years ago my system kept throwing errors after a GStreamer update. A reboot fixed it (I only remember it because the bug reports were only recently closed).
Probably because apps had half-loaded old versions and were lazily linking in the new ones.
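You can see that half-loaded state directly: processes that haven't restarted keep the old, since-deleted copies mapped. A rough check using nothing but /proc and standard tools:

```
# list processes still mapping .so files that were replaced on disk (shown as "(deleted)")
sudo grep -lE '\.so.*\(deleted\)' /proc/[0-9]*/maps 2>/dev/null \
  | cut -d/ -f3 \
  | xargs -r -I{} ps -o pid=,comm= -p {}
```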
Furthermore, without doing this, self-recovery is difficult: if you update something today, reboot a week later, and your system doesn't boot, you have no idea what caused it, and you'd have to keep rolling back. If you apply updates on reboot, you can snapshot, update, and, if the system fails to boot, roll back automatically without losing anything.
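Roughly what that flow looks like, assuming a Btrfs root managed by snapper (openSUSE-style; substitute whatever snapshot tool you use):

```
# checkpoint, update, and keep an escape hatch
sudo snapper create --description "pre-update"   # snapshot the current root
sudo dnf upgrade && sudo systemctl reboot

# if the updated system won't boot, boot the pre-update snapshot
# (e.g. from a grub-btrfs snapshot entry) and make it permanent:
sudo snapper rollback
```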
There are lots of good reasons.
I can easily install multiple versions of coreutils and glibc without issue.
Cool. You do that.
Are you going to install multiple versions of every library?
What if it's a security fix and the issue is in your desktop environment, etc.?
Coreutils and glibc aren't the only libraries on your system.
Some apps use static linking too, so they need to be restarted to pick up the fix at all. Other libraries are only loaded long after the app has started, and swapping libraries out from under a running app isn't great either.
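A quick way to see the different cases on a typical Fedora box (the paths and the GStreamer example are just illustrations, adjust for your distro):

```
# dynamically linked: these .so files are mapped at startup
ldd "$(command -v gst-launch-1.0)" | head

# statically linked binaries have nothing to swap underneath them, only a restart helps
file /usr/bin/* 2>/dev/null | grep 'statically linked'

# plugin-style libraries are often dlopen()ed only when first needed,
# e.g. GStreamer pulls codecs from here on demand:
ls /usr/lib64/gstreamer-1.0/ | head
```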
What if you're halfway through copying large files and run out of space? That nuked my Linux Mint install.
Linux distros don't just copy Windows; they wouldn't put in the extra effort unless they had to.
Do you think a bunch of developers just sit around and don't evaluate why they're doing things, and instead copy from Windows? Nah mate. They do it for a reason.
The cool thing about doing it this way is that if the boot fails, you can roll back easily too. If you're installing core components at random times on a running system, it might only fail to boot a week later.