9point6

joined 1 year ago
[–] 9point6@lemmy.world 13 points 8 hours ago* (last edited 8 hours ago)

Not bringing it back would be almost as recklessly stupid as cancelling it was in the first place.

The West Coast Main Line is at capacity today and it's only gonna get worse. This infrastructure was needed 10 years ago, and it's at the point now where we're gonna have to push more and more freight onto roads.

[–] 9point6@lemmy.world 1 points 9 hours ago (1 children)

FWIW I don't think you just lose bandwidth with longer cables; rather, the link loses sync and the cable stops working.

I have a Thunderbolt 4 hub that I wanted to tuck away somewhere, so I tried several longer cables (some USB4 as you said, one actual longer Thunderbolt cable), and none of them worked reliably like the 30cm cable that came with it. In most cases it would sporadically lose connection for a couple of seconds before reconnecting, or it just wouldn't connect at all.

Not sure if my hub or cable choices are the problem or if it really does just have to be as short as possible in some cases.

[–] 9point6@lemmy.world 22 points 17 hours ago (7 children)

Yeah, it can have wildly different meanings depending on the circumstances in which it's said. It can range from "well, we can't change it, may as well get on with life" all the way to "well, this discussion is not gonna change anything, let's get on with fixing it". Very similar phrasings, but polar opposite sentiments.

[–] 9point6@lemmy.world 9 points 17 hours ago (2 children)

I'm fine with a brioche bun, they're upper tier buns

But don't stack a load of stuff in there to the point the brioche disintegrates under the heft of it

That's why we have the pretzel bun

(Likewise, don't use a pretzel bun if you're not gonna load it up, that bun needs grease for balance)

[–] 9point6@lemmy.world 7 points 20 hours ago

Every company I've worked at for at least the last decade or so has had an internal social media thing of varying quality.

Facebook even wraps up its own product for internal use.

Admittedly, engineering generally ignores it and we just use Slack.

[–] 9point6@lemmy.world 2 points 1 day ago* (last edited 22 hours ago)

No need to get aggravated; I completely grasp it. If that's your takeaway, you've possibly misunderstood or not entirely read my comment.

I'm not talking about server code specifically; I'm going through the stages between the source code repo(s) and what your browser ends up receiving when you request a site.

Node.js is relevant here because it's what runs nearly all major JS bundlers (webpack, Vite, etc.), which are what produce the code that ultimately runs in the browser for most websites you use. Essentially, in a mathematical sense, the full set of dependencies for that process is part of the input to the function that outputs the JS bundle(s).
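
To make that concrete, here's a rough sketch of the kind of build step I mean (using esbuild purely as an illustration; webpack and Vite do the same job with different configs):

```javascript
// Sketch only, not anyone's actual build: the file the browser receives
// (dist/app.js) is a function of your source *and* every resolved dependency
// version pulled in from node_modules at build time.
const esbuild = require('esbuild');

esbuild.build({
  entryPoints: ['src/index.js'], // your own code
  bundle: true,                  // inlines everything it imports, deps included
  minify: true,                  // mangles names and strips whitespace
  outfile: 'dist/app.js',        // the only thing the browser ever sees
}).catch(() => process.exit(1));
```

Change any one dependency version and a different bundle comes out the other end, which is the point.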

I'm not really sure what you mean by that last part. Really, anyone hosting something on the internet has to care about that stuff, not just businesses. GDPR can target individuals just as easily as for-profit companies; it's about the safety of the data, not who holds it. I'm assuming you would not want to go personally bankrupt due to a deliberate neglect of security? Similarly, if you have a website that doesn't hit the performance NFRs that search engines set, no one will ever find it in search results because it'll be down on page 100. You will not be visiting websites that don't care about this stuff.

Either way, all of that is wider reasoning for the main point which we're getting away from a bit, so I'll try to summarise as best I can:

Basically, unless you intend your idea to only work on entirely open source websites (which make up a tiny percentage of the web), you're going to have to contend with these JS bundles, and as I've gone into, that's basically an insurmountable task without the complete set of inputs.

If you do only intend it to work with those completely open source websites, then crack on, I guess. There's still what looks to me like a crazy amount of things to figure out in order to create a filter that won't be able to work with nearly all web traffic, but if that's still worth it to you, then don't let me convince you otherwise.

Edit: typo

[–] 9point6@lemmy.world 7 points 1 day ago (3 children)

My view is that Ofcom fucked this up long ago, really, and the horse has already bolted.

We should have gone with an Openreach-style model for the infrastructure rather than doling out exclusive rights to chunks of spectrum in an entirely uneven manner.

This model can't really sustain more than a few companies. Using this as an example: Three has a fantastic 3G network and the best 5G network, but they have no 2G network and got shafted on 4G spectrum. Vodafone is almost the polar opposite, with the best 2G coverage (still useful for very remote customers) and 4G coverage comparable to EE.

The only way for these two companies to cover the gaps in their service and compete effectively with the market leader is a merger, which is how EE came to exist in the first place.

I'm not sure I buy the pricing-people-out angle either, tbh. We have a pretty rich market of MVNOs who act as an anchor on MNO pricing, and it would look like anti-competitive market collusion if the operating costs for these companies suddenly went up after a merger.

[–] 9point6@lemmy.world 4 points 1 day ago (3 children)

> First I don't even grasp what a "service owner" is.

The people who build & run the software & servers that serve the website, who amongst other things have an interest in keeping the service available, secure, performant, etc.

Particularly with laws like GDPR, these service owners are motivated to be as secure as practically possible, otherwise they could receive a bankrupting fine should they end up leaking someone's data. You'll never be able to convince anyone to lower the security of their threat model for that reason alone, before anything else.

> there are already a bunch of app (web, android) that are open-source and secured.

The code published and the code running on a server cannot be treated as equivalent, for several reasons, but here are two big ones:

Firstly, there's a similar issue to compiled binaries in other languages: it's tough (or impossible) to verify that the code published is the same code that's running. Secondly, the bundled and minified versions of websites are rarely published anyway; at most you get the constituent code and a dependency list for something completely open source. This is the bit I referred to before as trying to untoast bread: the browser gets a bundle that can't practically be reversed back into that list of parts and dependencies in a general-purpose way. You'd need the whole picture to be able to do any kind of filtering here.
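
As a quick illustration of the untoasting problem (again just a sketch, using esbuild's transform API as an example minifier):

```javascript
// Sketch only: minification throws away names, comments and structure,
// so the output can't be mechanically mapped back to its sources.
const { transformSync } = require('esbuild');

const source = `
  function priceWithTax(price, rate) {
    return price + price * rate;
  }
  console.log(priceWithTax(10, 0.2));
`;

const { code } = transformSync(source, { minify: true });
console.log(code); // a whitespace-stripped, name-mangled one-liner
```

Now imagine that applied across a whole site plus hundreds of dependencies, with any of a dozen bundlers in any configuration.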

> who is the attacker here?

The website itself is not the only possible attacker (though the risk is a lot more limited if the site implements CSP & SRI, as mentioned in my other comment). XSS is a whole category of attacks that leverage an otherwise trusted site to do something malicious; this is one of the main reasons you would run something like NoScript.

There have also been several instances in recent years of people contributing to (or outright taking over) previously trusted open source projects and sneaking in something malicious. This then gets executed and/or bundled during development in anything that uses it and updates to the compromised version before people find the vulnerability.

Finally there are network level attacks which thankfully are a lot less common these days due to HTTPS adoption (and to be a broken record, CSP & SRI), but if you happen to use public WiFi, there's a whole heap of ways a malicious actor can mess with what your browser ultimately loads.

[–] 9point6@lemmy.world 5 points 1 day ago (5 children)

Maybe I have missed your point, but based on how I've understood what you've described, I think you may have also missed mine. I was more pointing out how the practicalities prevent such a tool from being possible, from a few perspectives. I led with security just because that would be the deal breaker for many service owners; it's simply infosec best practice not to leak the information such a tool would require.

Your filtering idea would require cooperation from those service owners to change what they're currently doing, right?

Perhaps I've completely got the wrong end of the stick with what you're suggesting though, happy to be corrected

[–] 9point6@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (7 children)

Publishing the lock files of running services would be a big security risk for the service owner, as it gives an attacker an easily parsable way to check whether your bundle includes any package versions with known vulnerabilities.
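
To illustrate why (purely a sketch, assuming an npm-style package-lock.json; yarn and pnpm lock files leak the same information):

```javascript
// Sketch: a published lock file is a machine-readable inventory of exact
// package versions, trivially cross-referenced against vulnerability databases.
const fs = require('fs');

const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));

// npm lockfile v2/v3 keeps resolved packages under "packages",
// keyed by their node_modules path ("" is the root project itself)
for (const [path, info] of Object.entries(lock.packages ?? {})) {
  if (path) {
    console.log(`${path.replace(/^node_modules\//, '')}@${info.version}`);
  }
}
```

Run that against someone's published lock file and you've got a shopping list of exact versions to check against known CVEs.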

You then also have tools like Snyk, used by many big organisations, which can patch dependencies before the actual dependency publishes the patch itself. This would lead to a lock file version not corresponding with the bundled code.

In fact, given bundling is pretty ubiquitous but infinitely configurable at this point, even validating the integrity of the bundle vs the versions in a lock file is a problem that will be hard to solve. It's kinda like wanting to untoast bread.

Also, given many JS projects have a lock file that describes the dependencies of the front-end bundle, the server, and the build tooling all together, there is a risk of leaking information about those too (it's best practice to make as little as possible about your server configuration publicly viewable).

IMO, the solution to this problem today is to use a modern, updated browser that sandboxes execution, run an adblocker with appropriate trusted blocklists for what you're avoiding, try to only use sites you trust, and if you can, push web developers to use CSP & SRI to prevent malicious actors from injecting code into their sites without them knowing. Many sites already take advantage of these features, so if you trust the owner, you should be able to trust the code running on the page. If you don't trust the owner with client-side JS, you probably shouldn't trust them with whatever they're running on the server side either.
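
For what CSP & SRI actually look like in practice, here's a rough sketch (Express used purely as an example server, and cdn.example.com is made up; the header and the integrity attribute are the standard bits):

```javascript
// Sketch only: a Content-Security-Policy header telling the browser where
// scripts may come from, plus Subresource Integrity pinning a script's hash.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Scripts injected from anywhere else get refused by the browser
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self' https://cdn.example.com"
  );
  next();
});

app.get('/', (req, res) => {
  // The sha384 value is a placeholder; a real one is the base64 SHA-384
  // digest of the exact file, so a tampered copy simply won't run.
  res.send(`<!doctype html>
    <script src="https://cdn.example.com/lib.js"
            integrity="sha384-PLACEHOLDER"
            crossorigin="anonymous"></script>
    <p>hello</p>`);
});

app.listen(3000);
```

The key thing is that the site owner opts in and the browser does the enforcement, which is why pushing developers to adopt it is the practical lever.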

[–] 9point6@lemmy.world -3 points 2 days ago (5 children)

Elon and his companies seem to have a propensity for preferring proprietary bullshit over standards

12
submitted 1 month ago* (last edited 1 month ago) by 9point6@lemmy.world to c/casualconversation@lemm.ee
 

Honestly, I will never wrap my head around how people can happily bring infants on any flight where you can expect people to try and sleep. It's incredibly lucky if they don't spend some of it screaming their heads off, and I would be mortified if my choices were preventing hundreds of people from sleeping. But I'm not going to rant too hard about that.

Why on earth hasn't any airline started marketing adult-only flights?

It seems like a complete no-brainer to me; I would choose it every time and pay extra for it.

Disclaimer: I may or may not be on a 36h day with only an hour of sleep right now

1
submitted 11 months ago* (last edited 11 months ago) by 9point6@lemmy.world to c/indiegaming@lemmy.world
 

I've just started Return of the Obra Dinn; so far I'm really liking the art style and the main game mechanics. I'm interested to see how the story unfolds, as it seems to be taking a Memento-style reverse-chronological approach to telling it.

Also still playing Halls of Torment, as ever since Vampire Survivors one of these top-down roguelite shoot-em-up games has been in my rotation.

Oh, and I nearly forgot: I also started Pizza Tower, but I've only dipped my toe into that one so far. Really enjoying the art and platforming mechanics.
