Bug reports on any software

114 readers
1 user here now

When a bug tracker is inside the exclusive walled gardens of MS GitHub or Gitlab.com, and you cannot or will not enter, where do you file your bug report? Here, of course. This is a refuge where you can report bugs that are otherwise unreportable due to technical or ethical constraints.

⚠ Of course there are no guarantees it will be seen by anyone relevant. Hopefully some kind souls will volunteer to proxy the reports.

founded 3 years ago
MODERATORS

An important part of YouTube content is the transcript at the bottom of the video description. Some third-party sites collect and share YT transcripts separately, but then the naive admins put the service inside Cloudflare’s walled garden, which is worse than YT itself and largely purpose-defeating. (Exceptionally, this service is CF-free, but it says “Transcript is disabled on this video” in my test: https://youtubetranscript.io)

Invidious should be picking up the slack here.

And Lemmy could do better by automatically fetching the transcript of youtube/invidious links and including it, perhaps collapsed spoiler-style.
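For reference, Lemmy’s markdown already supports collapsible spoiler sections, so a fetched transcript could be tucked away without bloating the post body. A sketch (the transcript text is a placeholder):

```
::: spoiler Transcript
00:00 welcome back to the channel...
00:12 today we are looking at...
:::
```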

3
2
submitted 2 weeks ago* (last edited 2 weeks ago) by activistPnk@slrpnk.net to c/bugs@sopuli.xyz
 
 

I browse with images disabled. But sometimes I encounter a post where I want to see the image, like this one:

https://iejideks5zu2v3zuthaxu5zz6m5o2j7vmbd24wh6dnuiyl7c6rfkcryd.onion/@JosephMeyer@c.im/112923392848232303

When opening that link in a browser configured to fetch images, it redirects to the original instance, which is inside an access-restricted walled garden. This seems to be new behaviour for Mastodon and thus may be a regression.

It’s a terrible design because it needlessly forces people on open decentralised networks into centralised walled gardens. The behaviour arises from the incorrect assumption that everyone has equal access. As Cloudflare proves, access equality is non-existent. The perversion in this particular case is that an onion site redirects to Cloudflare (an adversary to all those who have onion access).

There should be two separate links to each post: one to the source node, and one to the mirror. This kind of automatic redirect is detrimental. Lemmy demonstrates the better approach of giving two links and not redirecting. (But Lemmy has that problem of not mirroring images).

4
 
 

There are some very slow nodes (like Beehaw) where the server is apparently so overworked it cannot render a login form most of the time. The browser times out waiting. In the rare moments that there is a login opportunity, about ½ the time the login fails with a 2 second popup saying “incorrect login credentials”.

It’s quite terrible because users would naturally assume their account has been deleted, because that’s how most online services work. Admins do not generally give warnings or say why an account is deleted. They just hit the delete button. Like Milton in Office Space, who was not told he was laid off.. they just “fixed the payroll glitch”. This is generally how communication works on communication platforms.. admins just pull the plug.

So because of how people learn that their account is deleted, users cannot distinguish a purposeful account removal from a faulty server. If you have a Beehaw account and you are told “incorrect login credentials”, don’t believe it. Keep trying. Eventually you’ll get in.

5
 
 

In the stock Lemmy web client there is apparently no mechanism for users to fetch their history of posts. The settings page offers only a way to download settings. This contrasts with Mastodon, where users can grab an archive of everything they have posted that is still stored on the server.

Or am I missing something?

IIUC, there is no GDPR issue here because no data is personal (because all Lemmy accounts are anonymous). But if a Lemmy server were to hypothetically require users to identify themselves with first+last name, then the admin would have a substantial manual burden to comply with GDPR Art.20 requests. Correct?

6
 
 

These environment variables designate a parameter that holds the value of an HTTP proxy:

  • http_proxy
  • https_proxy
  • HTTP_PROXY
  • HTTPS_PROXY

It’s a convention, but the name “HTTP proxy” can only mean an HTTP proxy, not a SOCKS proxy. Yet the golang¹ standard libraries accept a SOCKS proxy in the above HTTP-proxy parameters. How embarrassing is that? So any Go app that offers a proxy feature replicates getting the proxy kind backwards. Such as hydroxide, which requires passing a SOCKS proxy as an HTTP proxy.

¹ “Go” is such a shitty unsearchable name for a language. It’s no surprise that the developers of the language infra itself struggle with the nuances of natural language. HTTP≠SOCKS. And IIUC, this language is a product of Google. WTF. It’s the kind of amateurish screwup you would expect to come from some teenager’s mom’s basement, not a Fortune 500 company among the world’s biggest tech giants.

(edit)
It’s a bit amusing and simultaneously disappointing that reporting bugs and suggesting enhancements to Google’s language requires using Microsoft’s platform:

https://github.com/golang/proposal#the-proposal-process

FOSS developers: plz avoid Golang - it’s a shit show.

7
 
 

Lingva & Simply Translate are two different front-ends to Google Translate. I’m not running the software myself because I run Argos locally (for privacy), but when Argos gives a really bad translation I resort to Lingva and Simply Translate instances.

I tried to translate a privacy policy. Results:

Lingva instances:

  • translate.plausibility.cloud ← goes to lunch
  • lingva.lunar.icu ← gives “414 Request-URI Too Large”
  • lingva.ml & lingva.garudalinux.org ← fuck off Cloudflare! Obviously and foolishly purpose-defeating to surreptitiously expose people to CF who are trying to avoid direct Google connections.
  • translate.igna.wtf ← dead
  • translate.dr460nf1r3.org ← dead

Simply Translate instances (the list of instances is broken for me, but I found a year-old mirror of it):

  • simplytranslate.org ← just gives a blank
  • st.tokhmi.xyz ← up but results are just CSS garbage
  • translate.bus-hit.me (ST fork mozhi) ← shoots a blank result
  • simplytranslate.pussthecat.org ← redirects to mozhi.pussthecat.org
  • mozhi.pussthecat.org (ST fork mozhi) ← shoots a blank result
  • translate.projectsegfau.lt (ST fork mozhi) ← translates the first word then drops the rest; this instance is incorrectly listed as Lingva
  • translate.northboot.xyz ← up but results are just CSS garbage
  • st.privacydev.net ← up but results are just CSS garbage
  • tl.vern.cc ← up but results are just CSS garbage

~~It looks as if Simply Translate is not keeping up with Google API changes.~~ (edit: actually the CSS garbage is what we get when feeding it bulky input -- those instances work on small input)

graveyard of dead sites:

  • simplytranslate.manerakai.com ← redirects to vacated site
  • translate.josias.dev
  • translate.riverside.rocks
  • translate.tiekoetter.com
  • simplytranslate.esmailelbob.xyz
  • translate.slipfox.xyz
  • translate.priv.pw
  • st.odyssey346.dev
  • fyng2tsmzmvxmojzbbwmfnsn2lrcyftf4cw6rk5j2v2huliazud3fjid.onion
  • xxtbwyb5z5bdvy2f6l2yquu5qilgkjeewno4qfknvb3lkg3nmoklitid.onion
  • translate.prnoid54e44a4bduq5due64jkk7wcnkxcp5kv3juncm7veptjcqudgyd.onion
  • simplytranslate.esmail5pdn24shtvieloeedh7ehz3nrwcdivnfhfcedl7gf4kwddhkqd.onion
  • tl.vernccvbvyi5qhfzyqengccj7lkove6bjot2xhh5kajhwvidqafczrad.onion
  • st.g4c3eya4clenolymqbpgwz3q3tawoxw56yhzk4vugqrl6dtu3ejvhjid.onion

Why this is a bug


Front-ends and proxies exist to circumvent the anti-features of the service they facilitate access to. So if there is a volume limitation, the front-end should be smart enough to split the content into pieces, translate the pieces separately, and reassemble them. In fact that should be done anyway for privacy, to disassociate pieces of text from each other.

Alternatively (and probably better) would be a front-end for the front-ends: something that gives a different paragraph to several different Lingva/ST instances and reassembles the results. This would (perhaps?) link a different IP to each piece, assuming the front-ends also proxy (not sure whether that’s the case).

8
 
 

cross-posted from: https://slrpnk.net/post/11375008

Whoever designed the OSM db either never uses ATMs or has never experienced anything like the ATM disaster in the Netherlands. The OSM db has most ATM brands incorrect for the Netherlands and seriously needs more fields so travelers can actually find a functioning ATM.

brands are mostly incorrect

Pick any Dutch city. Search » Categories » custom search » Finance » ATM. The brands are mostly misinfo. These ATM brands no longer exist anywhere in the Netherlands:

  • Rabobank
  • ABN AMRO
  • ING
  • SNS

All those banks removed their ATMs and joined a monopolistic consortium called “Geldmaat”. There is generally still an ATM at those locations, but it’s always a Geldmaat ATM. So a simple find-and-replace is needed on all the Dutch maps.

For indoor ATMs, the brand is often incorrectly named after the shop it’s in. That’s useful for finding the machine but still misses important info: the actual ATM brand. ATM brand is very important because different brands give differing degrees of shitty treatment. If brand X refuses your card, all instances of that brand will likely refuse your card. So the “brand” field should always reflect the ATM operator. A separate shop-name field would be useful for locating the machine.

missing key attributes

Travelers should not have to spend hours running from one ATM to another until they find one that works. There are lots of basic variables that need to be accounted for in the db:

  • (real or fixed point) ATM fee
  • (enum set) currencies other than local (a rare but very useful option is to e.g. pull out GBP or USD in the eurozone)
  • (enum set) card networks supported (visa, amex, discover, maestro, etc)
  • (enum set) UI languages supported
  • (integer) transaction limit for domestic cards
  • (integer) transaction limit for foreign cards
  • (integer set) denominations in the machine (Netherlands quietly removed all banknotes >€50 from all ATMs IIUC)
  • (boolean) whether customers can control the denominations
  • (boolean) indoor/outdoor (if the txn limit field is empty, indoor machines often have higher limits)
    • (string) hours of operation (if indoor)
    • (string) name of shop the ATM is inside (if indoor)
  • (enum) whether a balance check is supported: [no | only some cards | any card]; this feature is non-existent in Belgium but common in Netherlands. Note that some ATMs only give balance on their own cards.
    • (enum) whether the balance is on screen or printed to the receipt, or both
  • (boolean) insertion style -- whether the card is sucked into the machine (this is very important because if the card is sucked in by a motor there is a real risk that the machine keeps the card [yes, that’s deliberate]). Motorised insertion is more reliable but carries the risk of confiscation. Manual insertion can be fussy and take many tries to get it to read the card but you never have to worry about confiscation.
  • (boolean) dynamic currency conversion (DCC)
  • (boolean) whether there is an earphone port for blind people (not sure if that’s always there)
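For what it’s worth, a few of these already have established OSM tags; a hypothetical Geldmaat node might carry something like the following (the opening_hours value is invented, and `speech_output` is the existing tag for audio guidance for blind users):

```
amenity=atm
operator=Geldmaat
brand=Geldmaat
indoor=yes
opening_hours=Mo-Sa 08:00-22:00
fee=no
currency:EUR=yes
speech_output=yes
```

Most of the fields in the list above, though, have no widely accepted tag yet.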
9
 
 

In the Lemmy web client it used to be possible to open a new tab (control-tab) which would naturally be logged in. That goes for most websites. With Lemmy it started getting flaky (sometimes works, sometimes not). Lately it works less often, and browser flavor seems to be a factor. Tor Browser (FF) generally works, but in Ungoogled Chromium new tabs are logged out. So in UC, I have to do everything for a Lemmy instance in one tab.

I wonder what kind of funny business causes session cookies to fail. My guess is they are not using session cookies for logins but rather one of the rare alternatives.

update


With just one tab running, I did a hard refresh (control-shift-R). That logged me out, presumably doing the same thing as opening a new tab. Using the /back/ button does not recover from this.

10
1
submitted 1 month ago* (last edited 1 month ago) by debanqued@beehaw.org to c/bugs@sopuli.xyz
 
 

I installed the Aria2 app from f-droid. I just want to take a list of URLs of files to download and feed it to something that does the work. That’s what Aria2c does on the PC. The phone app is a strange beast and it’s poorly described & documented. When I launch it, it requires creating a profile. This profile wants an address. It’s alienating as fuck. I have a long list of URLs to fetch, not just one. In digging around, I see sparse vague mention of an “Aria server”. I don’t have an aria server and don’t want one. Is the address it demands under the “connection” tab supposed to lead to a server?

The readme.md is useless:

https://github.com/devgianlu/Aria2App

The app points to this link which has no navigation chain:

https://github.com/devgianlu/Aria2App/wiki/Create-a-profile

Following the link at the bottom of the page superficially seems like it could have useful info:

“To understand how DirectDownload work and how to set it up go here.”

but clicking /here/ leads to a dead page. I believe the correct link is this one. But on that page, this so-called “direct download” is not direct in the slightest. It talks about setting up a server and running Python scripts. WTF.. why do I need a server? I don’t want a server. I want a direct download in the true sense of the word.

11
 
 

If fedi node A and node B both have an anti-spam rule, it makes good sense that when a moderator removes a post for spam that it would be removed from both nodes. But what about other cases? Lemmy is a bit blunt and nuance-lacking in this regard.

For example, the parent of this thread was censored despite not breaking any rules. More importantly, it breaks no rules on slrpnk.net. Yet the slrpnk version was also removed.

I’m not sure exactly what the fix is. But in principle an author should be able to ask a slrpnk admin to restore the post in the slrpnk version of that community, so long as no slrpnk rules are broken by the post.

It’s one thing for various nodes to federate based on having compatible site-wide rules, but they aren’t necessarily 100% aligned, and there are also rogue moderators who apply a different set of rules than what’s prescribed for a community.

12
 
 

If you long-tap an image that someone sent, options are:

  • share with…
  • copy original URL
  • delete image

The URL is not the local URL, it’s the network URL for fetching the image again. When you send outbound images, Snikket stores them in one place, but it’s nowhere near the place where it stores inbound images. I found it once after a lengthy hunt but did not take notes. I cannot find it now. I think it’s well buried somewhere. What a piece of shit.

13
 
 

Those who condemn centralised social media naturally block these nodes:

  • #LemmyWorld
  • #shItjustWorks
  • #LemmyCA
  • #programmingDev
  • #LemmyOne
  • #LemmEE
  • #LemmyZip

The global timeline is the landing page on Mbin nodes. It’s swamped with posts from communities hosted in the above shitty centralised nodes, which break interoperability for all demographics that Cloudflare Inc. marginalises.

Mbin gives users a way to block specific magazines (Lemmy communities), but no way to block a whole node. So users face the very tedious task of blocking hundreds of magazines, which is effectively a game of whack-a-mole. Whenever someone else on the Mbin node subscribes to a community on a CF/centralised node, the global timeline gets polluted with exclusive content and potentially many other users have to find the block button.

Secondary problem: (unblocking)
My blocked list now contains hundreds of magazines spanning several pages. What if LemmEE one day decides to join the decentralised free world? I would likely want to stop blocking all communities on that node. But unblocking is also very tedious because you have to visit every blocked magazine and click “unblock”.

the fix


① Nix the global timeline. Lemmy also lacks whole-node blocking at the user level, but Lemmy avoids this problem by not even having a global timeline. Logged-in users see a timeline that’s populated only with communities they subscribe to.

«OR»

② Enable users to specify a list of nodes they want filtered out of their view of the global timeline.

14
 
 

While composing this post, the Lemmy web client went to lunch. This is classic Lemmy behaviour when it hits a problem: no error, just an infinite spinner. After experimentation, it turns out that it tries to be smart but fails when handling URLs written with the gemini:// scheme.

(edit) It’s probably trying to visit the link for that convenience feature of pre-filling the title. If it does not recognise the scheme, it should just accept the URL without trying to be fancy. It likely screws up on other schemes as well, like dict, ftp, news, etc.

The workaround is to embed the #Gemini link in the body of the post.

15
 
 

I think the stock Lemmy client stops you from closing a browser tab if you have an editor open on a message, to protect you from accidental data loss.

Mbin does not.

16
 
 

A vast majority of the fediverse (particularly the threadiverse) is populated by people who have no sense of infosec or privacy, who run stock browsers over clearnet (e.g. #LemmyWorld users, the AOL users of today). They have a different reality than streetwise people. They post a link to a page that renders fine in the world they see, totally oblivious to the fact that they are sending the rest of the fediverse into an exclusive walled garden.

There is no practical way for streetwise audiences to signal “this article is exclusive/shitty/paywalled/etc”. Voting is too blunt an instrument and does not convey the problem. Writing a comment like “this article is unreachable/discriminatory because it is hosted in a shitty place” is high-effort and overly verbose.

the fix


The status quo:

  • (👍/👎) ← no meaning.. different people vote on their own invented basis for voting

We need refined categorised voting. e.g.

  • linked content is interesting and civil (👍/👎)
  • body content is interesting and civil (👍/👎)
  • linked article is reachable & inclusive (👎)¹
  • linked content is garbage-free (no ads, popups, CAPTCHA, cookie walls, etc) (👍/👎)

¹ Indeed a thumbs up is not useful on inclusiveness because we know every webpage is reachable to someone or some group and likely a majority. Only the count of people excluded is worth having because we would not want to convey the idea that a high number of people being able to reach a site in any way justifies marginalization of others. It should just be a raw count of people who are excluded. A server can work out from the other 3 voting categories the extent by which others can access a page.

From there, how the votes are used can evolve. A client can be configured to not show an egalitarian user exclusive articles. An author at least becomes aware that a site is not good from a digital rights standpoint, and can dig further if they want.

update


The fix needs to expand. We need a mechanism for people to suggest alternative replacement links, and those links should also be voted on. When a replacement link is more favorable than the original link, it should float to the top and become the most likely link for people to visit.

17
1
submitted 2 months ago* (last edited 2 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz
 
 

Some will regard this as an enhancement request. To each his own, but IMO *grep has always had a huge deficiency when processing natural languages due to line breaks -- pdfgrep especially, because most PDF docs carry a payload of natural language.

If I need to search for “the.orange.menace” (dots are single-char wildcards), of course I want to be told of cases like this:

A court whereby no one is above the law found the orange  
menace guilty on 34 counts of fraud..

When processing natural language, a sentence terminator is almost always a more sensible boundary. There’s probably no command older than grep that’s still in use today, so it’s bizarre that it has not evolved much. In the 90s there was a LexisNexis search tool which was far superior for natural-language queries. E.g. (IIRC):

  • foo w/s bar :: matches if “foo” appears within the same sentence as “bar”
  • foo w/4 bar :: matches if “foo” appears within four words of “bar”
  • foo pre/5 bar :: matches if “foo” appears before “bar”, within five words
  • foo w/p bar :: matches if “foo” appears within the same paragraph as “bar”

Newlines as record separators are probably sensible for everything other than natural language. But for natural language, grep is a hack.

18
 
 

I cannot believe how stupid Chromium is considering it’s the king of browsers from a US tech giant. It’s another bug that should be embarrassing for Google.

If you visit a PDF, it fetches the PDF and launches pdf.js as expected. If you use the download button within pdf.js, you would expect it to simply copy the already fetched PDF from the cache to the download folder. But no.. the stupid thing goes out on the WAN and redownloads the whole document from the beginning.

I always suspected this, but it became obvious when I recently fetched a 20 MB PDF from a slow server. It struggled for a while to get the whole thing just for viewing. Then after clicking download within pdf.js, it was crawling again from 1% progress.

What a stupid waste of bandwidth, energy and time.

19
 
 

cross-posted from: https://sopuli.xyz/post/12858874

When an image is posted by someone on a Cloudflared instance like the following:

  • #LemmyWorld
  • #ShitJustworks
  • #LemmyCA
  • #LemmyEE
  • #LemmyZip
  • #LemmyOne

the image is inaccessible to all demographics of people who Cloudflare discriminates against because images are not mirrored to federated nodes.

We expect corporations not to give a shit about marginalising people who are not profitable enough to care about. But when naive asshole users outnumber progressive egalitarians, it highlights a problem with the fedi, which still lacks the tooling needed to keep oppression at bay.

The six nodes listed above effectively host the AOL users of our time: lacking the sophistication needed to detect and grasp situations of eroded digital rights, and blind to (or unconcerned with) centralised corporate control.

Suggestions needed for Lemmy nodes that are defederated from the above listed six.

20
 
 

Different apps expect passwords in the .netrc file to be quoted in different ways. E.g. fetchmail expects passwords to be quoted bash-style (quotes needed if there are special chars, and literal quote characters must themselves be quoted), while cURL gives no special meaning to quotes and takes them literally if present.
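A concrete illustration of the divergence (the machine name and password are invented): given this record, cURL reads the password with the quotes included, while fetchmail strips the quotes and keeps the embedded space.

```
machine mail.example.com
login alice
password "p@ss word"
```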

Who to blame for this is a bit unclear, but I believe the original purpose of .netrc was to serve the standard CLI FTP program, so in principle everything should be aligned with that, IMO.

Some apps will complain if they spot a .netrc syntax they don’t like, as if they get to decide that -- even if the line they complain about is not the record the app is looking for. OTOH, it’s useful to know what an app accepts and rejects.

What a mess.

21
 
 

Updating my browser apparently caused extensions to be updated as well. Now uMatrix 1.1.2 is installed. The config box is very small compared to the browser window area available. You have to scroll horizontally to reach the columns on the right, and the name of the 3rd-party entity scrolls out of the window. This makes it inconvenient and cumbersome to alter the settings.

I suppose this change was motivated by complaints that the config window was too large on small screens:

https://github.com/gorhill/uMatrix/issues/483
https://github.com/gorhill/uMatrix/issues/683

22
 
 
  • broken: Ungoogled Chromium ver. 90.0.4430.212-1.sid1
  • works: Ungoogled Chromium ver. 112.0.5615.165-1

If anyone has problems getting Ungoogled Chromium (and likely Google’s Chromium as well) to work with Lemmy, note the versions above. The Lemmy web client is a dysfunctional disaster in the old version, but whatever the problem was, it is fixed in recent versions.

23
 
 

I installed #neonmodem by simply grabbing the tarball, which expands files directly into the $CWD instead of nesting them in a folder named after the app. Not a big deal but it gave a slight hint that this project might have quality issues.

This command executes just fine:

$ torsocks neonmodem connect --type lemmy --url https://sopuli.xyz

It’s irritating that it does not inform the user where the data is stored, and this is also undocumented. You have to guess how to use it, and it’s misleading (I think the connect command does not actually make a connection; it apparently just stores the login creds).

Simply running it crashes instantly:

$ torsocks neonmodem
  panic: Error(s) loading system(s)

  goroutine 1 [running]:
  github.com/mrusme/neonmodem/cmd.glob..func1(0x1771140?, {0xe973eb?, 0x0?, 0x0?})
          /home/runner/work/neonmodem/neonmodem/cmd/root.go:128 +0x268
  github.com/spf13/cobra.(*Command).execute(0x1771140, {0xc00008c1f0, 0x0, 0x0})
          /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:944 +0x847
  github.com/spf13/cobra.(*Command).ExecuteC(0x1771140)
          /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3bd
  github.com/spf13/cobra.(*Command).Execute(...)
          /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
  github.com/mrusme/neonmodem/cmd.Execute(0xc0000061a0?)
          /home/runner/work/neonmodem/neonmodem/cmd/root.go:141 +0x3e
  main.main()
          /home/runner/work/neonmodem/neonmodem/neonmodem.go:13 +0x25
24
 
 

The 112.be website drops all Tor traffic, which in itself is a shit show. No one should be excluded from access to emergency app info.

So this drives pro-privacy folks to visit http://web.archive.org/web/112.be/ but that just gets trapped in an endless redirection loop.

Workaround: appending “en” breaks the loop. But that only works in this particular case. There are many redirection loops on archive.org; 112.be is just one example.

Why posted here: archive.org has its own bug tracker, but if you create an account on archive.org they will arbitrarily delete the account without notice or reason. I am not going to create a new account every time there is a new archive.org bug to report.

25
 
 

The cross-post mechanism has a limitation whereby you cannot simply enter a precise community to post to. Users are forced to search and select. When searching for “android” on infosec.pub within the cross-post page, the list of possible communities is totally clusterfucked with shitty centralized Cloudflare instances (lemmy world, sh itjust works, lemm ee, programming dev, etc). The list of these junk instances is so long that !android@hilariouschaos.com does not make it onto the list.

The workaround is of course to just create a new post with the same contents. And that is what I will do.

There are multiple bugs here:
① First of all, when a list of communities is given in this context, the centralized instances should be listed last (at best) because they are antithetical to fedi philosophy.
② Subscribed communities should be listed first, at the top
③ Users should always be able to name a community in its full form, e.g.:

  • [!android@hilariouschaos.com](/c/android@hilariouschaos.com)
  • hilariouschaos.com/android

④ Users should be able to name just the instance (e.g. hilariouschaos.com) and the search should populate with subscribed communities therein.
