SpaceCadet

joined 11 months ago
[–] SpaceCadet@feddit.nl 12 points 1 day ago

Yup, but that's already mentioned in the article. Thought I'd give people the exact userpref, so they can modify their custom user.js if they have one.

[–] SpaceCadet@feddit.nl 37 points 1 day ago (3 children)

To disable:

user_pref("dom.private-attribution.submission.enabled", false);
[–] SpaceCadet@feddit.nl 2 points 3 days ago

I just booted up a Windows 2000 VM to check ... it's there in the disk management tool. It looks a bit weird with the drive icon in Explorer, but ok.

[–] SpaceCadet@feddit.nl 2 points 3 days ago (2 children)

Yeah, I believe that was introduced as far back as Windows 2000. It never really caught on though.

[–] SpaceCadet@feddit.nl -1 points 4 days ago* (last edited 4 days ago)

Well, keep dreaming then. If that is what's keeping you on Windows, you will never leave Windows. Nobody in their right mind is ever going to create a new OS with drive letters.

/thread

[–] SpaceCadet@feddit.nl 1 points 4 days ago* (last edited 4 days ago) (3 children)

The thing is, you are absolutely free to use a /c, /d, /e mounting scheme, but you are not shackled to it like you are in Windows. Personally, I like to organize my data with one big root (/) file system on my NVMe drive, then /data for my bulk storage on an HDD and /nas for my NAS shares. I never have any problems knowing where my data is.
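For what it's worth, that whole layout is just a few lines in /etc/fstab. A minimal sketch (the UUIDs and the NAS export are made-up placeholders):

# /etc/fstab -- illustrative entries; adjust UUIDs and paths to your system
UUID=aaaa-root-uuid   /      ext4  defaults          0  1
UUID=bbbb-data-uuid   /data  ext4  defaults          0  2
nas:/export/media     /nas   nfs   defaults,_netdev  0  0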

BTW, I notice all your complaints revolve around "OMG, it's different" and "OMG, the user can choose to do things differently... so complicated". That is kind of the point of Linux, you know?

At some point you just have to accept that it's different and move on, or decide that it's too complicated for you and use something else.

BTW, I wonder why people never make this complaint about Apple devices. macOS also has a hierarchical file structure without drive letters; after all, it is a Unix variant too.

[–] SpaceCadet@feddit.nl 9 points 4 days ago* (last edited 4 days ago) (9 children)

> I know the filesystem is simple to Linux users, but the semantic form of physical drives getting a letter always made more sense to me.

That's one of the things that semi-experienced Windows users need to wrap their heads around, but I strongly disagree that drive letters are somehow superior to a hierarchical file system structure. I mean, the A:, B:, C: ... convention was originally just intended for the first IBM PC with one or two floppy drives. It was never intended to support complex storage configurations, whereas the hierarchical file system was designed for Unix systems that had to handle multiple magnetic drives from the start. It is a much more flexible way to organize your file storage.

> On Linux, as best I understand it, if I have three drives, two of them are at /dev/hdd0 and hdd1. But they’re not actually there.

That's because there is a difference between a block device and a mounted file system. Windows just obscures that difference from you with its archaic drive mapping system.

All your block devices, and the partitions on them, appear under /dev with meaningful names. You can list them with the lsblk command. If a partition contains a file system that Linux knows how to use, you can mount it anywhere you like.
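A quick sketch of that workflow (device names are illustrative, assuming a second disk with a single partition):

$ lsblk                       # list block devices and where they're mounted
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0  500G  0 disk
└─sda1   8:1    0  500G  0 part
$ sudo mkdir -p /bulk         # any directory can serve as a mount point
$ sudo mount /dev/sda1 /bulk  # the partition's filesystem now lives at /bulk
$ sudo umount /bulk           # and it detaches just as easily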

> they’re accessed at /media/hdd0 after mounting them

No, that's not "convention" at all. Some desktop environments may decide to mount otherwise-unconfigured drives there, but there really is no convention; ultimately you mount a filesystem wherever you want it to be mounted.

> If you place an item in /home/documents/notporn, then who knows which drive it’s on because you don’t know what symlinks someone set up to make that folder.

If you're unsure, df /home/documents/notporn should tell you exactly which drive it's on, but ultimately it's up to you to know how you've organized your storage.
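Illustrative output (the device name and sizes are made up):

$ df -h /home/documents/notporn
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       916G  412G  458G  48% /home/documents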

BTW I've said this before, but Linux is probably harder for users who know Windows just well enough to be dangerous than it is for relative beginners, because there are so many concepts and things they take for granted that they have to unlearn.

[–] SpaceCadet@feddit.nl 1 points 4 days ago

So they put in a Gotek drive like I did with my retro PC?

[–] SpaceCadet@feddit.nl 2 points 4 days ago* (last edited 4 days ago)

It's from a Devo song called Mongoloid.

It goes:

Mongoloid, he was a mongoloid

Happier than you and me

[–] SpaceCadet@feddit.nl 3 points 4 days ago* (last edited 4 days ago) (3 children)

> Asian is called mongoloid.

One archeologist to another: looks like this person was ... happier than you and me.

( •_•)>⌐■-■

(⌐■_■)

[–] SpaceCadet@feddit.nl 21 points 4 days ago (21 children)

Installing Linux has never been particularly difficult, not in the last 15 or even 20 years anyway. I've always found it easier and more straightforward than the contemporary Windows installation process.

The challenging part is wrapping your head around the Linux/Unix way of doing things when things can't be done through the GUI with just a few clicks.

[–] SpaceCadet@feddit.nl 2 points 4 days ago* (last edited 4 days ago) (1 children)

Well, I'm sure it's been in use for a while, but my point is that it wasn't mainstream internet lingo.

Speaking for myself, I only learned this term a year or so ago; I remember looking it up and thinking: huh, so there's a word for that now. Since then, I've seen it come up several times, almost always in greentext posts like this one.

5 points | submitted 1 month ago* (last edited 1 month ago) by SpaceCadet@feddit.nl to c/fediverse@lemmy.world

I feel like we need to talk about Lemmy's massive tankie censorship problem. A lot of popular Lemmy communities are hosted on lemmy.ml. It's been well known for a while that the admins/mods of that instance have, let's say, rather extremist and one-sided political views. In short, they're what's colloquially referred to as tankies. This wouldn't be much of an issue if they didn't regularly abuse their admin/mod status to censor and silence people who dissent from their political beliefs and, for example, post things critical of China, Russia, the USSR, socialism, ...

As an example, there was a thread today about the anniversary of the Tiananmen Massacre. When I was reading it, there were mostly posts critical of China in the thread and some whataboutist/denialist replies critical of the USA and the west. In terms of votes, the posts critical of China were definitely getting the most support.

I posted a comment in this thread linking to "https://archive.ph/2020.07.12-074312/https://imgur.com/a/AIIbbPs" (WARNING: graphic content), which describes aspects of the atrocities that aren't widely known even in the West, along with supporting evidence. My comment was promptly removed for violating the "Be nice and civil" rule. When I looked back at the thread, I noticed that all posts critical of China had been removed while the whataboutist and denialist comments were left in place.

This is what the modlog of the instance looks like:

Definitely a trend there, wouldn't you say?

When I called them out on their one-sided censorship, with a screenshot of the modlog above, I was promptly banned from every community on lemmy.ml that I had ever participated in.

Proof:

Many of you will now probably be thinking something like: "So what? It's the fediverse, you can just use another instance."

The problem with this reasoning is that many of the popular communities actually are on lemmy.ml, and they're not so easy to replace. In terms of content and engagement, Lemmy is already a pretty small place as it is, so it's rather pointless to sit in, for example, /c/linux@some.random.other.instance.world where there's nobody to discuss anything with.

I'm not sure if there's a solution here, but I'd like to urge people to avoid lemmy.ml hosted communities in favor of communities on more reasonable instances.

1 point | submitted 7 months ago* (last edited 7 months ago) by SpaceCadet@feddit.nl to c/debian@lemmy.ml

I have a small server in my closet which is running 4 Debian 12 virtual machines under kvm/libvirt. The virtual machines have been running fine for months. They have unattended-upgrades enabled, and I generally leave them alone. I only reboot them periodically, so that the latest kernel upgrades get applied.

All the machines have an LVM configuration. Generally it's a debian-vg volume group on /dev/vda for the operating system, which has been configured automatically by the installer, and a vgdata volume group on /dev/vdb for everything else. All file systems are simple ext4, so nothing fancy. (*)

A couple of days ago, one of the virtual machines didn't come up after a routine reboot and dumped me into a maintenance shell. It complained that it couldn't mount filesystems that were on vgdata. First I tried simply rebooting the machine, but it kept dumping me into maintenance. Investigating a bit deeper, I noticed that vgdata and the block device /dev/vdb were detected but the volume group was inactive, and none of the logical volumes were found. I ran vgchange -a y vgdata and that brought it back online. After several test reboots, the problem didn't reoccur, so it seemed to be fixed permanently.

I was willing to write it off as a glitch, but then a day later I rebooted one of the other virtual machines, and it also dumped me into maintenance with the same error on its vgdata. Again, running vgchange -a y vgdata fixed the problem. I think the same error twice in two days on different virtual machines is not a coincidence, so something is going on here, but I can't figure out what.
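For reference, the recovery sequence from the maintenance shell was roughly this (vgdata is my volume group name; adjust as needed):

# in the maintenance shell
vgs                              # the VG itself is visible...
lvs -o lv_name,lv_active vgdata  # ...but its LVs report as inactive
vgchange -a y vgdata             # activate all LVs in the VG
exit                             # leave the shell and resume normal boot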

I looked at the host logs, but I didn't find anything suspicious that could indicate a hardware error for example. I should also mention that the virtual disks of both machines live on entirely different physical disks: VM1 is on an HDD and VM2 on an SSD.

I also checked if these VMs had been running kernel 6.1.64-1 with the recent ext4 corruption bug at any point, but this does not appear to be the case.
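I checked that via the journal; assuming persistent journalling is enabled, you can list past boots and inspect the kernel version each one ran:

journalctl --list-boots                     # boots recorded in the journal
journalctl -k -b -5 | grep "Linux version"  # kernel of the 5th-to-last boot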

Below is an excerpt of the systemd journal on the failed boot of the second VM, with what I think are the relevant parts. Full pastebin of the log can be found here.

Dec 16 14:40:35 omega lvm[307]: PV /dev/vdb online, VG vgdata is complete.
Dec 16 14:40:35 omega lvm[307]: VG vgdata finished
...
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvbinaries.device - /dev/vgdata/lvbinaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for binaries.mount - /binaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for local-fs.target - Local File Systems.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Dec 16 14:42:05 omega systemd[1]: binaries.mount: Job binaries.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start failed with result 'timeout'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvdata.device - /dev/vgdata/lvdata.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for data.mount - /data.
Dec 16 14:42:05 omega systemd[1]: data.mount: Job data.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start failed with result 'timeout'.

(*) For reference, the disk layout on the affected machine is as follows:

# lsblk 
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda                   254:0    0   20G  0 disk 
├─vda1                254:1    0  487M  0 part /boot
├─vda2                254:2    0    1K  0 part 
└─vda5                254:5    0 19.5G  0 part 
  ├─debian--vg-root   253:2    0 18.6G  0 lvm  /
  └─debian--vg-swap_1 253:3    0  980M  0 lvm  [SWAP]
vdb                   254:16   0   50G  0 disk 
├─vgdata-lvbinaries   253:0    0   20G  0 lvm  /binaries
└─vgdata-lvdata       253:1    0   30G  0 lvm  /data

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  debian-vg   1   2   0 wz--n- <19.52g    0 
  vgdata      1   2   0 wz--n- <50.00g    0 

# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/vda5  debian-vg lvm2 a--  <19.52g    0 
  /dev/vdb   vgdata    lvm2 a--  <50.00g    0 

# lvs
  LV         VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       debian-vg -wi-ao----  18.56g                                                    
  swap_1     debian-vg -wi-ao---- 980.00m                                                    
  lvbinaries vgdata    -wi-ao----  20.00g                                                    
  lvdata     vgdata    -wi-ao---- <30.00g 