N0x0n

joined 7 months ago
[–] N0x0n@lemmy.ml 2 points 1 day ago (3 children)

Are there any benefits in switching to emacs (or vim?) for casual "power users"? I do a lot of editing with nano over ssh from a Linux desktop or macOS, and I mostly get frustrated with the shortcuts while editing configuration files and tiny, simple bash scripts.

I've already gone back and forth on whether I should learn emacs or vim to improve my editing speed, but so far nano fulfills all my needs :/.

Seeing your post raises the same question again. Care to share some personal experience?

Thanks !

[–] N0x0n@lemmy.ml 2 points 2 days ago

Hmmmm... Fractals 🤤

[–] N0x0n@lemmy.ml 2 points 6 days ago* (last edited 6 days ago) (1 children)

I academically know it’s bad and wildly problematic

What do you mean by that? How is it bad to try to save species from aquariums? Or to try to save a tribe from extinction driven by profit?

I mean, those are good morals that should be taught at school, IMO way more important than history class, which mostly tells us how dumb we were and how dumb we still are.

[EDITED]

Maybe I got your comment wrong; if so, care to elaborate?

Thank you.

[–] N0x0n@lemmy.ml 1 points 6 days ago* (last edited 6 days ago)

Not saying it's better than a native app but it's probably more secure than an extension.

One benefit I can think of is the ability to customize your configuration. I'm practically a newbie in networking, so take everything with a grain of salt, because a wrongly configured network device is as bad as not having one.

However, being able to re-route everything through a corresponding WireGuard tunnel and add specific rules for each device gives you more control over your network flow (yes, this is more advanced stuff and I've only scratched the surface of what is possible). There's way more to it and I lack the proper knowledge, but from reading here and there, extensions seem really bad for security/privacy. Also, the more addons you have, the more fingerprintable you are (yes, I'm probably oversimplifying...).

Sorry if I lack the technical terms, I'm just a tinkerer and like learning new stuff. If there's a native app for every device, go for it; otherwise I would suggest finding a way to re-route your traffic through a tunnel without the help of a browser extension.

But hey I'm just some random on the web without any degree, so whatever 🫠

[–] N0x0n@lemmy.ml 4 points 6 days ago (2 children)

Just my 2 cents, don't take it too seriously, but having an extension act as a VPN is a bad idea IMO. Same goes for password managers.

I would rather suggest installing WireGuard on your machine and tunneling all your traffic to ProtonVPN with a config file you can download from them.

But that adds some extra work to put in place (a few iptables lines), and I get why extensions are popular (easy to install and forget).
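If it helps, the rough shape of that native setup on Linux looks something like this. Just a sketch: the package name is the Debian/Ubuntu one and the config file name is whatever ProtonVPN's download page gives you.

# install the WireGuard tools (Debian/Ubuntu package naming assumed)
sudo apt install wireguard-tools

# drop the config downloaded from ProtonVPN into /etc/wireguard (file name made up here)
sudo cp ~/Downloads/proton-wg0.conf /etc/wireguard/wg0.conf

# bring the tunnel up; wg-quick also adds the routes so all traffic goes through it
sudo wg-quick up wg0

# check handshakes / transfer counters
sudo wg show

# or let systemd manage it across reboots instead of calling wg-quick by hand
sudo systemctl enable --now wg-quick@wg0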

Sorry if this doesn't add anything to your actual question, but we shouldn't rely too much on extensions; they are mostly open holes for privacy and security.

[–] N0x0n@lemmy.ml 1 points 6 days ago

I like linkding a lot for its simplicity, but I can't for the hell of me make a good/clean instance... It gets messy very quickly if you save some interesting links which aren't directly connected to anything, or if you have 2 different subjects in the same instance.

Something like group tagging would make a lot of sense :/

[–] N0x0n@lemmy.ml 1 points 1 week ago

That's something I have to look into. I think I gave it a try but failed. Having everything on an old spare laptop doesn't help either (docker, vpn, dns resolver, pihole, firewall...).

Can't wait to put my N100 into my network as a router and give it a second try :)))

 

Hi everyone !

Intro

It's been a long ride since I started my first docker container 3 years ago. I learned a lot: how to build my own custom image with a Dockerfile, loading my own configuration files into the container, getting along with docker-compose, traefik and YAML syntax... and and and !

However, while tinkering with vaultwarden's config and switching to PostgreSQL, there's something that's really bugging me...

Questions


  • How do you/devs choose which database to use for your/their application? Are there any specific things to take into account before choosing one over another?

  • Does consistency in database containers make sense? I mean, changing all my containers to ONLY postgres (or MariaDB, whatever)?

  • Does it make sense to update the database image regularly? Or is the application bound to a specific version, and will it break after any update?

  • Can I switch from one to another even if you/the devs chose to use e.g. MariaDB? Or is it baked/hardcoded into the application image, so switching to another database requires extra programming skills?

Maybe not directly related to databases but that one is also bugging me for some time now:

  • What's redis' role in all of this? I can't for the hell of me understand what it does and how it's linked between the application and the database. I know it's supposed to give faster access to resources, but if I remember correctly, while playing around with Nextcloud, the redis container logs were dead silent. It seemed very "useless" or not active from my perspective. I'm always wondering: "Hmm redis... what are you doing here?".

Thanks :)

[–] N0x0n@lemmy.ml 3 points 3 weeks ago (1 children)

EndeavourOS is growing 👍 !!

What's more impressive is the sudden and massive Manjaro drop-off! Something bad must have happened during 2020-2022.

[–] N0x0n@lemmy.ml 2 points 3 weeks ago* (last edited 3 weeks ago)

Heyho :)

Back again! Thank you for helping troubleshoot!!! It was actually something else... (my reading skills)!

ExifTool evaluates the command-line arguments left to right, and latter assignments to the same tag override earlier ones.

That's why everything was messed up: it took the last assignment to write the directory date... I feel quite stupid/bad for adding unnecessary noise to Phil Harvey's forum :/.


Thanks to you and another user I greatly improved my script and went from 16min to 1min21s for the exact same batch! They proposed using the parallel package alongside the script to make full use of my CPU cores, plus another way to loop through my files. It's a mix of both your ideas and it looks so much better! Thank you very much for your help! 😘

#!/bin/bash

if [ $# -eq 0 ]; then
    echo "Usage: $0 <filename>"
    exit 1
fi

# Concatenate all arguments into one string for the filename, so calling "./script.sh /path/with spaces.jpg" should work without quoting
filename="$*"

start_range=20170101
end_range=20201230

FIRST_DATE=$(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate "$filename" | tr -d '-' | awk '{print $1}')
#echo $FIRST_DATE

if [[ "$FIRST_DATE" != '' ]] && [[ "$FIRST_DATE" -gt $start_range ]] && [[ "$FIRST_DATE" -lt $end_range ]]; then
        /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -d %Y/%B '-directory<$DateAcquired/' '-directory<$ModifyDate/' '-directory<$FileModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/' '-FileName=%f%-c.%e' "$filename"

else
        echo "Error"

fi

time find /home/user/Pictures/ -type f | parallel ./exif-test.bash

[–] N0x0n@lemmy.ml 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Hello :)

Thanks again for your input and for spoon-feeding me! It took me some time to understand, but when I got it I felt excited. Since I'm unable to further improve the script itself, using my hardware is such a great idea!!! I would never have thought of that!!

And after a single test drive on a sample, it was BLAZING FAST (3min on a 2200-file sample, thanks to the CPU cores :D)! Thank you very much for sharing your knowledge!! You brought my hope back.

Thanks 😘 👍 🙏

Edit:

I rewrote the script and combined your solution with it:

#!/bin/bash

if [ $# -eq 0 ]; then
    echo "Usage: $0 <filename>"
    exit 1
fi

# Concatenate all arguments into one string for the filename, so calling "./script.sh /path/with spaces.jpg" should work without quoting
filename="$*"

start_range=20170101
end_range=20201230

FIRST_DATE=$(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate "$filename" | tr -d '-' | awk '{print $1}')
#echo $FIRST_DATE

if [[ "$FIRST_DATE" != '' ]] && [[ "$FIRST_DATE" -gt $start_range ]] && [[ "$FIRST_DATE" -lt $end_range ]]; then
        /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -d %Y/%B '-directory<$DateAcquired/' '-directory<$ModifyDate/' '-directory<$FileModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/' '-FileName=%f%-c.%e' "$filename"

else
        echo "Error"

fi

Running time find /home/user/Pictures/ -type f | parallel ./exif-test.bash took 1min21s for 2200 files without any errors, and everything is perfectly organized!!!! Thanks a lot!!!!

 

Edit

After a long process of roaming the web, re-runs and troubleshooting the script with this wonderful community, the script is functional and does what it's intended to do. It's probably still improvable in terms of efficiency/logic, but I lack the necessary skills/knowledge to do so. Feel free to copy, edit or even propose a more efficient way of doing the same thing.

I'm greatly thankful to @AernaLingus@hexbear.net, @GenderNeutralBro@lemmy.sdf.org, @hydroptic@sopuli.xyz and Phil Harvey (exiftool) for their help, time and all the great ideas (and for spoon-feeding me simple and comprehensive examples!)

How to use

Prerequisites:

  • parallel package installed on your distribution

Copy/paste the script below into a file and make it executable. Change start_range/end_range to your needs, install the parallel package for your OS, and run the following command:

time find /path/to/your/image/directory/ -type f | parallel ./script-name.sh

This will sort only the pictures from your specified time range into a YEAR/MONTH structure in your current directory, based on 5 different time tags/timestamps (DateTimeOriginal, CreateDate, FileModifyDate, ModifyDate, DateAcquired).

You may want to swap ModifyDate and FileModifyDate in the script, because ModifyDate is more accurate in the sense that FileModifyDate is easily changed (as soon as you modify the picture, it changes to the current date). I needed that order for my specific use case.

From: '-directory<$DateAcquired/' '-directory<$ModifyDate/' '-directory<$FileModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/'

To: '-directory<$DateAcquired/' '-directory<$FileModifyDate/' '-directory<$ModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/'

As per exiftool's documentation:

ExifTool evaluates the command-line arguments left to right, and latter assignments to the same tag override earlier ones.
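To make that concrete, here's a tiny illustration with a hypothetical photo.jpg, assuming both tags exist on the file:

# both arguments assign Directory; exiftool evaluates them left to right,
# so the right-most tag that actually resolves (DateTimeOriginal here) wins
exiftool -d %Y/%B '-directory<$CreateDate/' '-directory<$DateTimeOriginal/' photo.jpg

# swap the order and CreateDate takes priority instead
exiftool -d %Y/%B '-directory<$DateTimeOriginal/' '-directory<$CreateDate/' photo.jpg

The full script: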

#!/bin/bash

if [ $# -eq 0 ]; then
    echo "Usage: $0 <filename>"
    exit 1
fi

# Concatenate all arguments into one string for the filename, so calling "./script.sh /path/with spaces.jpg" should work without quoting
filename="$*"

start_range=20170101
end_range=20201230

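# Ask exiftool for all five date tags at once: -T prints them tab-separated, with "-" for missing ones.
# tr strips the dashes so missing tags become empty, and awk keeps the first non-empty field,
# i.e. the first tag in the requested order that actually has a value.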
FIRST_DATE=$(exiftool -m -d '%Y%m%d' -T -DateTimeOriginal -CreateDate -FileModifyDate -DateAcquired -ModifyDate "$filename" | tr -d '-' | awk '{print $1}')

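# If that date falls inside the range, let exiftool file the image into YEAR/MONTH;
# the -directory assignments are evaluated left to right, so the right-most tag present wins.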
if [[ "$FIRST_DATE" != '' ]] && [[ "$FIRST_DATE" -gt $start_range ]] && [[ "$FIRST_DATE" -lt $end_range ]]; then
        exiftool -api QuickTimeUTC -d %Y/%B '-directory<$DateAcquired/' '-directory<$ModifyDate/' '-directory<$FileModifyDate/' '-directory<$CreateDate/' '-directory<$DateTimeOriginal/' '-FileName=%f%-c.%e' "$filename"

else
        echo "Not in the specified time range"

fi



Hi everyone !

Please no bash-shaming, I did my utmost best to put everything together and somehow make it work without any prior bash programming knowledge. It took me a lot of effort and time.

While I'm pretty happy with the result, I find the execution time very slow: 16min for 2288 files.

On a big folder with approximately 50,062 files, this would take over 6Β hours !!!

If someone could have a look and give me some easy to understand hints, I would greatly appreciate it.

What am I trying to achieve?

Create a bash script that uses exiftool to extract the date from images in a readable format (e.g. 20240101) and compare it against a date range, so that only images from that specific range get sorted (e.g. 2020-01-01 -> 2020-12-30).

Also, some images have lost some of their EXIF data, so I have to loop through several specific time fields:

  • DateTimeOriginal
  • CreateDate
  • FileModifyDate
  • DateAcquired
  • ModifyDate

The script in question

#!/bin/bash

shopt -s globstar

folder_name=/home/user/Pictures
start_range=20170101
end_range=20180130


for filename in $folder_name/**/*; do

	if [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateTimeOriginal "$filename") =~ ^[0-9]+$ ]]; then
		DateTimeOriginal=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -DateTimeOriginal "$filename")
	        if  [ "$DateTimeOriginal" -gt $start_range ] && [ "$DateTimeOriginal" -lt $end_range ]; then
			/usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$DateTimeOriginal/' '-FileName=%f%-c.%e' "$filename"
			echo "Found a value"
		echo "Okay its $(tput setab 22)DateTimeOriginal$(tput sgr0)"

		fi

        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -CreateDate "$filename") =~ ^[0-9]+$ ]]; then
                CreateDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -CreateDate "$filename")
                if  [ "$CreateDate" -gt $start_range ] && [ "$CreateDate" -lt $end_range ]; then
                        /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$CreateDate/' '-FileName=%f%-c.%e' "$filename"
                        echo "Found a value"
                echo "Okay its $(tput setab 27)CreateDate$(tput sgr0)"
                fi

        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -FileModifyDate "$filename") =~ ^[0-9]+$ ]]; then
                FileModifyDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -FileModifyDate "$filename")
                if  [ "$FileModifyDate" -gt $start_range ] && [ "$FileModifyDate" -lt $end_range ]; then
                        /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$FileModifyDate/' '-FileName=%f%-c.%e' "$filename"
                        echo "Found a value"
                echo "Okay its $(tput setab 202)FileModifyDate$(tput sgr0)"
                fi


        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateAcquired "$filename") =~ ^[0-9]+$ ]]; then
                DateAcquired=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -DateAcquired "$filename")
                if  [ "$DateAcquired" -gt $start_range ] && [ "$DateAcquired" -lt $end_range ]; then
                        /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$DateAcquired/' '-FileName=%f%-c.%e' "$filename"
                        echo "Found a value"
                echo "Okay its $(tput setab 172)DateAcquired$(tput sgr0)"
                fi

        elif [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -ModifyDate "$filename") =~ ^[0-9]+$ ]]; then
                ModifyDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -ModifyDate "$filename")
                if  [ "$ModifyDate" -gt $start_range ] && [ "$ModifyDate" -lt $end_range ]; then
                        /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$ModifyDate/' '-FileName=%f%-c.%e' "$filename"
                        echo "Found a value"
                echo "Okay its $(tput setab 135)ModifyDate$(tput sgr0)"
                fi

        else
                echo "No EXIF field found"
        fi

done

Things I have tried

  1. Reducing the number of if calls

But it didn't improve the execution time much (maybe a few ms?). The syntax looks way less readable, but what I did was add a lot of ORs ( || ) to reduce everything to a single if call. It's not finished; I just gave it a test drive with 2 EXIF fields (DateTimeOriginal and CreateDate) to see if it could somehow improve the time. But meeeh :/.

#!/bin/bash

shopt -s globstar

folder_name=/home/user/Pictures
start_range=20170101
end_range=20201230

for filename in $folder_name/**/*; do

        if [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -DateTimeOriginal "$filename") =~ ^[0-9]+$ ]] || [[ $(/usr/bin/vendor_perl/exiftool -m -d '%Y%m%d' -T -CreateDate "$filename") =~ ^[0-9]+$ ]]; then
                DateTimeOriginal=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -DateTimeOriginal "$filename")
		CreateDate=$(/usr/bin/vendor_perl/exiftool -d '%Y%m%d' -T -CreateDate "$filename")
                if  [ "$DateTimeOriginal" -gt $start_range ] && [ "$DateTimeOriginal" -lt $end_range ] || [ "$CreateDate" -gt $start_range ] && [ "$CreateDate" -lt $end_range ]; then
                        /usr/bin/vendor_perl/exiftool -api QuickTimeUTC -r -d %Y/%B '-directory<$DateTimeOriginal/' '-directory<$CreateDate/' '-FileName=%f%-c.%e' "$filename"
                        echo "Found a value"
                echo "Okay its $(tput setab 22)DateTimeOriginal$(tput sgr0)"

                else
			echo "FINISH YOUR SYNTAX !!"
		fi

	fi
done

  2. Playing around with find

To recursively find my image files in all my folders I first tried find, but that gave me a lot of headaches... When an image file name had spaces in it, it just broke the path in a strange way... All the answers I found on the web were gibberish to me, and I couldn't make it work in my script properly... I lost over 4 hours on that specific issue alone!

To overcome the hurdle, someone suggested using shopt -s globstar with for filename in $folder_name/**/*, and this works perfectly (there's a small sketch of the null-delimited find approach after this list). But I have no idea if this could be the culprit behind the slow execution time?

  3. Changing all [ ] into [[ ]]

That also didn't do the trick.
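
For completeness, here's the null-delimited find approach mentioned in point 2, the usual way to keep file names with spaces intact (just a sketch, not what I ended up using; ./per-file-script.sh stands for a script that handles one file per call):

# -print0 / -0 pass each path as a single NUL-delimited argument, spaces included
find "$folder_name" -type f -print0 | parallel -0 ./per-file-script.sh

# same idea with xargs
find "$folder_name" -type f -print0 | xargs -0 -n1 ./per-file-script.sh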

How to Improve the processing time ?

I have no idea if it's my script or the exiftool calls that make it so slow. This isn't that complicated a script; I mean, it's a comparison between 2 integers, not hashing of complex numbers.

I hope someone could guide me in the right direction :)

Thanks !

 

cross-posted from: https://lemmy.ml/post/15968883

Hello everyone! Nobody seems to have an answer on !networking@sh.itjust.works (or maybe they are not interested because it's an enterprise network community?) and !homenetworking@selfhosted.forum seems dead?

Anyway, if anyone could guide me or point me in the right direction, I would really appreciate it!


TL;DR

What is encapsulated into the frame that makes everyone understand: "OHHH that’s for 10.0.0.8, your docker container on bridge network br-b1de on the veth2b interface !!! "


Hi everyone !

I'm scratching my head trying to find an actual answer on how virtual networking in docker actually works (mostly at the packet/frame level), or some good documentation to improve my understanding of how everything fits together.

Because I'm probably lacking the correct networking terminology, I made a simple topology diagram of my network. Don't hesitate to correct any mistake.

In my scenario, my docker container with the virtual interface veth2b22c98 and the IP 10.0.0.8 connects to the bridge network br-b1de95b5ea89. When I curl lemmy.ml from my container, the packets/frames are sent to my enp4s0 and go through my WireGuard tunnel to my VPN provider, which sends back the packets/frames/handshake...

I probed every interface with tcpdump (enp4s0, wg0, br-b1, veth2b); a rough sketch of the commands is after the list:

  • enp4s0: every packet/frame is encapsulated in the WireGuard protocol with my physical interface's IP (192.168.1.30) as source, no DNS is visible on that interface (as expected), and it gets sent out towards my ISP's public IP.

  • wg0: shows every packet/frame with the actual protocol, with my WireGuard interface's IP (192.168.2.1) as source and the destination IP of lemmy.ml (Dst: 54.36.178.108)

  • br-b1: shows every packet/frame with the actual protocol, with my container's IP (10.0.0.8) as source and the destination IP of lemmy.ml (Dst: 54.36.178.108)
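
As mentioned above, the probes looked roughly like this (a sketch; 51820 is just the default WireGuard port, yours may differ):

# only encrypted WireGuard UDP should be visible on the physical NIC
sudo tcpdump -ni enp4s0 udp port 51820

# decrypted traffic, source is the wg interface IP (192.168.2.1)
sudo tcpdump -ni wg0 host 54.36.178.108

# decrypted traffic, source is the container IP (10.0.0.8)
sudo tcpdump -ni br-b1de95b5ea89 host 54.36.178.108

# same frames, seen on the host side of the container's veth pair
sudo tcpdump -ni veth2b22c98 host 54.36.178.108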


I know there is a mix of 2 different concepts in my scenario (WireGuard tunnel and virtual networking), but I really do not understand how the frame gets back to my docker container. When I look at the frames on wg0, there is no mention of either my container's MAC address or its actual IP.

How/when/what exactly is happening to my frame so that it gets to the correct target between my physical interface, virtual interface and bridge? I mean, with VLANs there's a VLAN tag on the frame, so you can easily identify in Wireshark where it should go. But here, I cannot find any clue about who or what is doing the magic so the frame finds its way back to my docker container.

What is encapsulated into the frame that makes everyone understand: "OHHH that's for 10.0.0.8, your docker container on bridge network br-b1de on the veth2b interface !!! "


Sorry for my broken English and lack of networking terminology, and thank you to those who bore with me and are willing to give me some hints/a proper networking lesson.

 

Edit: Whoops, I just read that networking@sh.itjust.works is for enterprise networks? I hope my small homelab question doesn't break the rules; if so I will redirect my question.


Hi everyone !

I'm scratching my head trying to find an actual answer on how virtual networking in docker actually works (mostly at the packet/frame level), or some good documentation to improve my understanding of how everything fits together.

Because I'm probably lacking the correct networking terminology, I made a simple topology diagram of my network. Don't hesitate to correct any mistake.

In my scenario, my docker container with the virtual interface veth2b22c98 and the IP 10.0.0.8 connects to the bridge network br-b1de95b5ea89. When I curl lemmy.ml from my container, the packets/frames are sent to my enp4s0 and go through my WireGuard tunnel to my VPN provider, which sends back the packets/frames/handshake...

I probed every interface with tcpdump (enp4s0, wg0, br-b1, veth2b):

  • enp4s0: every packet/frame is encapsulated in the WireGuard protocol with my physical interface's IP (192.168.1.30) as source, no DNS is visible on that interface (as expected), and it gets sent out towards my ISP's public IP.

  • wg0: shows every packet/frame with the actual protocol, with my WireGuard interface's IP (192.168.2.1) as source and the destination IP of lemmy.ml (Dst: 54.36.178.108)

  • br-b1: shows every packet/frame with the actual protocol, with my container's IP (10.0.0.8) as source and the destination IP of lemmy.ml (Dst: 54.36.178.108)


I know there is a mix of 2 different concepts in my scenario (WireGuard tunnel and virtual networking), but I really do not understand how the frame gets back to my docker container. When I look at the frames on wg0, there is no mention of either my container's MAC address or its actual IP.

How/when/what exactly is happening to my frame so that it gets to the correct target between my physical interface, virtual interface and bridge? I mean, with VLANs there's a VLAN tag on the frame, so you can easily identify in Wireshark where it should go. But here, I cannot find any clue about who or what is doing the magic so the frame finds its way back to my docker container.

What is encapsulated into the frame that makes everyone understand: "OHHH that's for 10.0.0.8, your docker container on bridge network br-b1de on the veth2b interface !!! "

Sorry for my broken English and lack of networking terminology, and thank you to those who bore with me and are willing to give me some hints/a proper networking lesson.


Edit: Changed something on my network diagram (WireGuard is not in a container, it runs bare metal on the server) and fixed some typos.

1
submitted 3 months ago* (last edited 3 months ago) by N0x0n@lemmy.ml to c/homelab@lemmy.ml
 

Hi everyone :)

It's time to switch and give my home network a proper minimal hardware upgrade. Right now everything is managed by my ISP's AIO firewall/router combo, which works okayish, but I'm already doing some firewall/dns/VPN stuff on my minimal spare laptop server to bypass most of my ISP's restrictions. So it's time to get a little bit "crazy"!

While I do have some "power user" knowledge regarding Linux/server/selfhosted services/networking, I'm a bit clueless hardware-wise, especially regarding my ISP's 2.5G ethernet port.

I have a 5 Gb connection from my Internet provider (Obtic fiber) which is divided over 4 ethernet ports (Eth1 2.5G, Eth2 1G, Eth3 1G, Eth4 0.5G or something in that range). Right now the Eth1 port is connected through an old 1G switch.

  1. To take full advantage of my ISP's 2.5G ethernet port, do I need a router AND a switch capable of 2.5G throughput? Or just the router, with the switch dividing it accordingly between all the devices connected to a 1G switch?

I'm also looking for some recommendation/personal experience for a router and a switch with a budget of 250e.

At first I was interested in a BananaPi as a router, to tinker a bit, but it seems a bit of a hassle to flash it with OpenWRT. Then I found an interesting post on Lemmy talking about Intel N100/Celeron N5105 boxes, which look more like what I'm looking for, but I'm not sure?

  2. I have no idea what the best bet is: an SBC (BananaPi, Orange Pi, Raspberry Pi...), a fully fledged router (like a TP-Link AX1800 flashed with OPNsense/OpenWRT) or an Intel N100/Celeron N5105 soft router?

The capabilities I'm looking for:

  • VLAN capable
  • AP VLAN capable, to segment wifi
  • Taking advantage of my ISP's 2.5G ethernet port
  • Firewall customization capabilities

I have my eye on a managed switch I found on Amazon (SODOLA 6 Port 2.5G Web Managed), but I have no idea how reliable they are; I have never heard of SODOLA.

  3. Any good recommendations for a managed switch that would work well with the same capabilities as above?

  4. Probably the last question, regarding wifi APs: is it possible to make an access point out of my router even though it has no antennas? If I connect an access point directly to my router, will it be capable of providing a wifi connection?

Thanks for reading through; I'm a bit unsure how I should spend my money to have a minimal but reliable/capable homelab setup. Any advice is welcome. But keep in mind, I want to keep it minimal: good enough routing capability with intermediate firewall customisation. I'm already hosting a few containers on a spare laptop and the traffic isn't going to be too crazy.

 

Hi everyone !

Right now I can't decide which one is the most versatile and fits my personal needs, so I'm interested in your personal experience with each of them, if you don't mind sharing.

It's mostly for secure shared volumes containing ebooks and media storage/files on my home network. Adding some security into the mix even though I don't actually need it (mostly for the learning process).

More precisely, how difficult is the NFS configuration with Kerberos? Is it actually useful? I've never used Kerberos and have no idea how it works, so it's very much a new tech on my side.

I would really appreciate some in-depth personal experience and why you would consider one over another!

Thank you !

0
submitted 4 months ago* (last edited 4 months ago) by N0x0n@lemmy.ml to c/linux@lemmy.ml
 

Hello !

Getting a bit annoyed with permission issues with samba and sshfs. If someone could give me some input on a more elegant and secure way to share a folder path owned by root, I would really appreciate it!

Context

  • The following folder path is owned by root (docker volume):

/var/lib/docker/volumes/syncthing_data/_data/folder

  • The child folders are owned by the user server

/var/lib/docker/volumes/syncthing_data/_data/folder

  • The user server is in the sudoers file
  • Server is in the docker group
  • fuse.conf has user_allow_other uncommented

Mount point with sshfs

sudo sshfs server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder -o allow_other

Permission denied

Things I tried

  • Adding other options like gid 0,27,1000 uid 0,27,1000 default_permissions...
  • Finding my way through stackoverflow, unix.stackexchange...

Solution I found

  1. Making a bind mount from the root owned path to a new path owned by server

sudo mount --bind /var/lib/docker/volumes/syncthing_data/_data/folder /home/server/folder

  2. Mount point with sshfs

sshfs server@10.0.0.100:/home/server/folder /home/user/folder

Question

While the above solution works, it overcomplicates my setup and adds an unnecessary mount point to my laptop and fstab.

Isn't there a more elegant solution that works directly with the user server (which has root access) to mount the folder with sshfs, even if the folder path is owned by root?

I mean the user has root access so something like:

sshfs server@10.0.0.100:/home/server/folder /home/user/folder -o allow_other should work even if the first part of the path is owned by root.

Changing the owner/permissions of the path recursively is out of the question!
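
One direction I still want to test (a hedged sketch, not verified on my side): sshfs lets you pick the remote SFTP server command with -o sftp_server, so running it under sudo should let the mount read the root-owned path, provided the server user may run sftp-server via sudo without a password:

# sftp-server path is the Debian location; adjust for your distro
# needs a sudoers rule like: server ALL=(root) NOPASSWD: /usr/lib/openssh/sftp-server
sshfs -o allow_other,sftp_server="sudo /usr/lib/openssh/sftp-server" \
  server@10.0.0.100:/var/lib/docker/volumes/syncthing_data/_data/folder /home/user/folder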

Thank you for your insights !

 

Hi everyone :)

For those interested, I'm sharing my just-finished personal Firefox user.js. It's based on the latest arkenfox and has the same privacy features, with some personal tweaks to fit my workflow. It's also easier to read 😅.

https://github.com/KalyaSc/fictional-sniffle/blob/main/user.js


KEEP IN MIND

Except for the privacy-focused entries, some are personal choices for an easy drop-in Firefox preferences backup. This is what I consider a good privacy model, and some entries could break YOUR workflow, especially if you don't have self-hosted alternatives (Vaultwarden, Linkding, Wallabag).

I'm not an expert, but most of those entries are the same as in arkenfox's user.js. I really encourage you to read their file for a better understanding of what each entry does. While my file is easier to read, one downside is the lack of documentation for each entry.

Also, this is not just a COPY/PASTE. It took a lot of effort, time, reading, testing and understanding. I kept a similar naming scheme for cross-referencing.

I learned a few things and hope that you also will enjoy, edit, read and learn new interesting things.

Happy hardening !


Features

  • Automatic dark mode theme (Keep in mind you still need Dark Reader or similar plugin for web pages in dark mode.)
  • Deep-clean history on every Firefox quit. Only cookies are kept as an exception; I need them for my self-hosted services.
  • Disable password saving/auto-fill/breach alerts. Vaultwarden takes care of everything.
  • All telemetry disabled by default except for crash reports. To also disable crash reports, comment out the following lines by prefixing them with //:
user_pref("breakpad.reportURL", "");
user_pref("browser.tabs.crashReporting.sendReport", false);
user_pref("browser.crashReports.unsubmittedCheck.enabled", false);
user_pref("browser.crashReports.unsubmittedCheck.autoSubmit2", false);
  • DoH disabled (got my personal VPN with DoH enabled)
user_pref("network.trr.mode", 5);
  • Disable WebRTC. If you need it for video calling, meetings, video chats:

Comment the following line:

user_pref("media.peerconnection.enabled", false);

Uncomment the following (arkenfox default, it will force WebRTC inside your configured proxy)

//user_pref("media.peerconnection.ice.default_address_only", true);
//user_pref("media.peerconnection.ice.proxy_only_if_behind_proxy", true);
  • Fixed width and height (1600x900) (fingerprint resistant; arkenfox's default)
  • Resist Fingerprinting (RFP), which overrides fingerprinting protection (FPP)
  • A lot of other tweaks you can discover while reading through the file.

How to use/test this file ?

Open Firefox, type about:profiles and create a test profile. Open the corresponding root folder, drop in the user.js and launch the profile in a new browser.

After testing, and if you're happy with the result, BACK UP your main Firefox profile somewhere safe and put the user.js in your main profile to see if it fits your workflow.
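
In shell terms that's roughly the following (the profile folder name is made up; check the "Root Directory" shown in about:profiles):

# drop the user.js into the test profile's root directory
cp user.js ~/.mozilla/firefox/abcd1234.test/user.js

# start Firefox with that profile in its own instance
firefox -P test --new-instance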

Room for improvement / TODO.

A lot of the settings in the 5000 range from arkenfox's user.js need further testing and investigation, because they could break things and cause performance/stability issues.

  • JS exploits:
- javascript.options.baselinejit
- javascript.options.ion
- javascript.options.wasm
- javascript.options.asmjs
  • Disable webAssembly
  • ...

TODO

  • Disable non-modern cipher suites
  • Control TLS versions
  • Disable SSL session IDs [FF36+]

Also those settings are another beast that needs further testing/investigation on how they work.

The user.js file

https://github.com/KalyaSc/fictional-sniffle/blob/main/user.js

WARNING

Arkenfox advises against addons that scramble and randomize your fingerprint characteristics (like Chameleon).

WHY? Because resistFingerprinting takes care of most things. See section 4500: RFP (resistFingerprinting) in arkenfox's user.js.

[WARNING] DO NOT USE extensions to alter RFP protected metrics

    418986 - limit window.screen & CSS media queries (FF41)
   1281949 - spoof screen orientation (FF50)
   1330890 - spoof timezone as UTC0 (FF55)
   1360039 - spoof navigator.hardwareConcurrency as 2 (FF55)
 FF56
   1333651 - spoof User Agent & Navigator API
      version: android version spoofed as ESR (FF119 or lower)
      OS: JS spoofed as Windows 10, OS 10.15, Android 10, or Linux | HTTP Headers spoofed as Windows or Android
   1369319 - disable device sensor API
   1369357 - disable site specific zoom
   1337161 - hide gamepads from content
....

Very long list !

Final words

I'm open to any constructive criticism or comment that could help me improve or understand something new or something I misunderstood. Sure, it's not 100% my own work, but as I said it took a lot of time, testing, searching, reading... Please don't be a crazy panda...

Credits

https://github.com/arkenfox/user.js

https://github.com/pyllyukko/user.js/

https://wiki.archlinux.org/title/Firefox/Privacy

 

Hello again :)

I'm not talking about a broken wg connection; everything works as expected through the CLI and systemctl.

But the NetworkManager GUI in GNOME shows my WireGuard connection as if it were "not connected", and when I click on the toggle it actually disconnects my wg interface.

Also when I try to edit my connection through

nmcli connection modify wg0 connection.autoconnect yes

and restart my wireguard connection with

systemctl restart wg-quick@wg0

It recreates a new wireguard interface.

While everything works as expected with the usual tools (wg-quick, systemctl...) the GUI seems "broken".
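
If you want to compare on your side, these are roughly the commands I use to see what NetworkManager thinks it manages versus what wg-quick actually created (purely diagnostic, nothing conclusive):

# devices and their managed state (a wg-quick interface may show up as unmanaged)
nmcli device status

# NetworkManager's own connection profiles
nmcli connection show

# what wg-quick/systemd actually set up
sudo wg show
ip addr show wg0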

Has anyone else noticed this, or is it somehow related to my setup?

Debian 12 bookworm
Gnome 
nmcli tools 1.42.4
 

Hi everyone :)

I'm slowly getting used to navigating and editing things in the terminal without leaving the keyboard or touching the arrow keys. I'm getting faster and it has improved my workflow in the terminal (Yeahhii).

ctrl + a e f b u k ...
alt + f b d ...

But yesterday I had such a bad experience while editing a backup bash script with nano. It took me like an hour to make small changes, editing like a caveman, and I kept breaking the editor whenever muscle memory made me use terminal shortcuts.

This really pissed me off... I know nano also has minimal/limited shortcuts, but having to memorize and switch between different ones for different purposes seems like a waste of time.

I think I tried emacs a few months ago but it didn't click. I didn't spend enough time on it though; I tried it for a few minutes and deleted it afterwards. Maybe I should give it a second try?

I also gave Vim a try, but that session is still open and I can't exit (😂)! Vim seems rather too complex for my workflow; I'm just a self-taught power user making his way through Linux. Am I wrong?

Isn't there something more "universal"? Something that works the same everywhere, something portable I can use wherever I go?

I'm very interested in everyone's thoughts, insights, personal experience and tips/tricks to avoid what happened yesterday!

Thanks !
