talkingpumpkin

joined 1 year ago
[–] talkingpumpkin@lemmy.world 1 points 3 days ago

Yes, XML is different from JSON and YAML, but it's not particularly easier or harder to manually read/edit than JSON or YAML are (IMO they are all a pain, each in its own way).

If you want to look at it from the programmer's side (which is not what OP was talking about)... marshalling/unmarshalling has been a solved problem for at least 20 years now :) just have a library do it for you (do you map json/yaml properties to your objects manually?).

You don't need to worry about attributes vs. child elements: <person name="jack" /> and <person><name>jack</name></person> will work the same (ok, this may depend on what language/library you pick - the lib I used back in the day handled either form).
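To illustrate with a quick sketch (using xmllint from libxml2 here, since a full unmarshalling example depends on the language - the point is just that both shapes carry the same data):

echo '<person name="jack"/>' | xmllint --xpath 'string(/person/@name)' -           # prints "jack"
echo '<person><name>jack</name></person>' | xmllint --xpath 'string(/person/name)' - # prints "jack"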

If anything, the issue with XML is all the unnecessarily complicated stuff they added to its "core" (eg. CDATA, namespaces, non-standalone documents, ...) and all the unnecessarily complicated technologies/standards they developed around XML (from XInclude to SOAP and many others)... but just ignore that BS (like the rest of the world does) and you'll mostly be fine :)

[–] talkingpumpkin@lemmy.world 0 points 5 days ago (3 children)

Yaml is fundamentally the same as the json and xml it has mostly replaced (and the toml that didn't manage to replace yaml)... it's a data serialization format and just doesn't have any facility for making abstractions, which are the main tool we humans use to deal with complexity.

[–] talkingpumpkin@lemmy.world 3 points 5 days ago

Java has had very bad press lately (since the log4j fiasco, I guess? maybe since before).

IDK why people blame Java for every issue with any library/project written in it... it's as dumb as blaming C/C++ for all the Windows fuckups, and nobody blames PHP for the various cPanel vulnerabilities or Python for all the shit people write in it.

[–] talkingpumpkin@lemmy.world 8 points 5 days ago (3 children)

Best of luck to you!

I’m trying to understand Git, but it’s a giant conceptual leap.

Git is not that different from svn (I mean, the biggest hurdle is going from a shared folder to any version control system)... I'd say the main difference is that branches live in a different namespace than files (ie. you don't have trunk/src/whatever but just src/whatever in the main branch). On top of that, commit and push are two different things (and the same goes for fetch and checkout), and merges are way easier than in svn (where you had to merge stuff manually).

If you create a repo locally and clone it twice into two different directories, you can easily simulate what happens when you and a coworker collaborate via a centralized repo (say, github) - do a few experiments and you'll see it's not as complicated as it seems (I'd recommend using the CLI instead of some GUI client: it's way easier to figure things out without the overhead of learning to differentiate between git concepts and the GUI's attempts to help).
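For example, something along these lines (names are made up, and your default branch may be master instead of main):

git init --bare central.git      # plays the role of github
git clone central.git alice      # your working copy
git clone central.git bob        # your "coworker's" working copy
cd alice
echo hello > file.txt
git add file.txt
git commit -m "add file"
git push origin main             # publish to the central repo
cd ../bob
git pull origin main             # the "coworker" receives your commit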

[–] talkingpumpkin@lemmy.world 1 points 1 week ago

Personally, I would sell everything and get a used PC on ebay (a small "minipc", unless you need space for hard disks).

Take a look at what you could buy on ebay just by selling off the nvidia card.

[–] talkingpumpkin@lemmy.world 2 points 2 weeks ago

why is your network like this?

Well, at the moment my network is actually flat :)

This is an experiment I'm doing because I wanted to have all the management stuff on a different subnet (eg. adguard dns is on the "regular" subnet everyone uses, but its web interface is on the special subnet only select devices can talk to).

Of course (like with most stuff in my homelab), it's not like I really have a super-compelling security reason to do it, it's mostly that I wondered "what if?" :D

Oh, the ping option you are referring to is -I (upper case); it takes either an interface name or an IP. I did try giving a .10/24 IP to the PC, and the results were consistent with scenario 1 (pings where source and destination are on the same subnet work, pings across subnets don't), so I didn't mention that in the OP.
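For reference, that's invocations like these (addresses as in my OP):

ping -I eth1 -c 4 192.168.10.102             # pick the source by interface name
ping -I 192.168.11.101 -c 4 192.168.10.102   # or by source IP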

[–] talkingpumpkin@lemmy.world 1 points 2 weeks ago

I don't think I quite explained the situation well enough: my server only has one ethernet port (same as my PC), otherwise I wouldn't have bothered with vlans (well, I would still have bothered, since my house still only has one "backbone" cable running through it, but I would have configured them on the switches only).

Anyway... a few of the things you say/imply go against my understanding of networking, so one of us had better go back and RTFM as you suggest :) (just kidding - most probably I just don't understand what you mean)

[–] talkingpumpkin@lemmy.world 1 points 2 weeks ago (1 children)

Thanks! Forwarding is disabled. I don't want the server to steal the router's job :)

[–] talkingpumpkin@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

So the request goes through but the replies are discarded? That could actually be it!

I think there was an option to allow that... I'll look it up and give it a try. Thanks!

[–] talkingpumpkin@lemmy.world 2 points 2 weeks ago

I tried dropping the default routes (one at a time) and it doesn't make a difference, which isn't surprising (I think), as all traffic is local as far as the server in scenario 1 is concerned. Also, IIUC, only the default gateway with the lowest metric actually counts.
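For reference, the experiment was along these lines (run on the server, with the routes listed in my post):

ip route del default via 192.168.11.1 dev eth1              # drop one default, retest...
ip route add default via 192.168.11.1 dev eth1 metric 101   # ...then restore it and try the other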

 

I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?


Scenario 1

PC:                        192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24

From my PC I can connect to .11.102, but not to .10.102:

ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss

Scenario 2

Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an IP on the .10 subnet, the previously failing ping works fine.

PC:                        192.168.11.101/24
Server: 192.168.10.102/24

From my PC:

ping -c 10 192.168.10.102 # now works fine

This is baffling to me... any idea why it might be?


Here's some additional information:

  • The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).

  • The PC and Server are connected to the same managed switch, which however does nothing "strange" (it just leaves tags as they are on all ports).

  • The router is connected to the aforementioned switch and set to forward packets between the two subnets (I'm pretty sure I've configured it that way, plus IIUC the scenario 2 ping wouldn't work without forwarding).

  • The router also has the same vlan setup, and I can ping both .10.1 and .11.1 with no issue in both scenarios 1 and 2.

  • In case it may matter, machine 1 (the PC) has the following routes, set up by NetworkManager from DHCP:

default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.101 metric 410
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.101 metric 410
  • In case it may matter, machine 2 (the server) uses systemd-networkd and the routes generated from DHCP are slightly different (after dropping the .11.102 address for scenario 2, the relevant routes of course disappear):
default via 192.168.10.1 dev eth0 proto dhcp              src 192.168.10.102 metric 100
192.168.10.0/24          dev eth0 proto kernel scope link src 192.168.10.102 metric 100
192.168.10.1             dev eth0 proto dhcp   scope link src 192.168.10.102 metric 100
default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.102 metric 101
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.102 metric 101
192.168.11.1             dev eth1 proto dhcp   scope link src 192.168.11.102 metric 101

solution

(please do comment if something here is wrong or needs clarifications - hopefully someone will find this discussion in the future and find it useful)

In scenario 1, packets from the PC to the server are routed through .11.1.

Since the server also has an .11/24 address, packets from the server to the PC (including replies) are not routed and instead just sent directly over ethernet.

Since the PC does not expect replies from a different machine than the one it contacted, they are discarded on arrival.

The solution to this (if one still thinks the whole thing is a good idea) is to route traffic originating from the server and directed to .11/24 via the router.

This could be accomplished with ip route del 192.168.11.0/24, which would however break connectivity with .11/24 addresses (for a similar reason as above: incoming traffic would not be routed, but replies would)...

The more general solution (which, IDK, may still have drawbacks?) is to set up a secondary routing table:

echo 50 mytable >> /etc/iproute2/rt_tables # this defines the routing table
                                           # (see "ip rule" and "ip route show table <table>")
ip rule add from 192.168.10/24 iif lo table mytable priority 1 # "iif lo" selects only 
                                                               # packets originating
                                                               # from the machine itself
ip route add default via 192.168.10.1 dev eth0 table mytable # "dev eth0" is the interface
                                                             # with the .10/24 address,
                                                             # and might be superfluous

Now, in my mind, that should break connectivity with .10/24 addresses just like the ip route del above, but in practice it does not seem to (if I remember, I'll come back and explain why after studying some more).
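In the meantime, these standard iproute2 commands are handy for inspecting what actually happens:

ip rule show                 # the new rule should show up at priority 1
ip route show table mytable  # should contain just the default route added above
ip route get 192.168.11.101 from 192.168.10.102   # which path a locally-generated
                                                  # packet to the PC would take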

 

I want to have a local mirror/proxy for some repos I'm using.

The idea is having something I can point my reads to, so that I'm free to migrate my upstream repositories whenever I want, and also so that my stuff doesn't stop working if one of the jankier third-party repos I use disappears.

I know the various forgejo/gitea/gitlab/... (well, at least some of them - I didn't check the specifics) have pull mirroring, but I'm looking for something simpler... ideally something with a single config file where I list what to mirror and how often to update, and which then allows anonymous read access over the network.
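To give an idea, I mean something at the level of this kind of hack (made-up file name; git clone --mirror for the mirroring and git daemon for anonymous reads), just pre-made and maintained by someone else:

# repos.txt lists one upstream URL per line
while read -r url; do
  name=$(basename "$url")
  if [ -d "$name" ]; then
    git -C "$name" fetch --prune         # update an existing mirror (run this from cron)
  else
    git clone --mirror "$url" "$name"    # first-time mirror
  fi
done < repos.txt
git daemon --base-path="$PWD" --export-all   # read-only anonymous access over git://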

Does anything come to mind?

[–] talkingpumpkin@lemmy.world 6 points 4 weeks ago (1 children)

If going the route of a backup solution, is it feasible to install OpenWRT on all of my devices, with the expectation that I can do some sort of automated backups of all settings and configurations, and restore in case of a router dying?

My two cents: use a "full" computer as your router (with either something like OPNsense or any "regular" linux distro if you don't need the GUI) and OpenWRT on your access points.

Unless you use the GUI and backup/restore the configuration (as you would with proprietary firmware), OpenWRT is frankly a pain to configure and deploy. At the moment I'm building custom images for all my devices, but (next time™) I'm gonna ditch all that, get an x86 router, and just manually manage OpenWRT on my wifi APs (I only have two, and they both have the same relatively straightforward config).
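"Building custom images" meaning roughly this, with the OpenWRT Image Builder (the profile and package list here are placeholders, not my actual config):

make image PROFILE="<your_device_profile>" \
  PACKAGES="luci-ssl -ppp -ppp-mod-pppoe" \
  FILES="files/"   # files/ gets overlaid on the rootfs, so it can carry your /etc config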

It’s a pain that I know can be solved with buying dedicated access points (…right?)

Routers and access points are just computers with network interfaces (there may be layer-2-only APs out there, but honestly I've never heard of any)... most probably your issue is that the firmware of your "routers as access points" doesn't want to be configured as a dumb AP.
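On OpenWRT, for instance, the usual "dumb AP" recipe is essentially this (a sketch - the address is an example, and you'd still configure the wifi itself):

uci set network.lan.proto='static'
uci set network.lan.ipaddr='192.168.1.2'   # a free address on the main router's LAN
uci set dhcp.lan.ignore='1'                # leave DHCP to the main router
uci commit
service network restart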

[–] talkingpumpkin@lemmy.world 4 points 1 month ago (1 children)

Does it still? Looks like the bubble is about to burst
