Bug reports on any software

115 readers
1 users here now

When a bug tracker is inside the exclusive walled gardens of MS Github or Gitlab.com, and you cannot or will not enter, where do you file your bug report? Here, of course. This is a refuge where you can report bugs that are otherwise unreportable due to technical or ethical constraints.

⚠ Of course there are no guarantees it will be seen by anyone relevant. Hopefully some kind souls will volunteer to proxy the reports.

founded 3 years ago
26
 
 

The 112.be website drops all Tor traffic, which in itself is a shit show. No one should be excluded from access to emergency app info.

So this drives pro-privacy folks to visit http://web.archive.org/web/112.be/ but that just gets trapped in an endless loop of redirection.

Workaround: appending “en” breaks the loop. But that only works in this particular case. There are many redirection loops on archive.org and 112.be is just one example.
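
One way to watch the hops from the command line is to follow a few redirects and print each Location header; a loop shows up as the same URLs repeating. (Whether the loop also reproduces over the HEAD requests that curl -I sends is untested.)

$ curl -sIL --max-redirs 5 'http://web.archive.org/web/112.be/' | grep -i '^location'
$ curl -sIL --max-redirs 5 'http://web.archive.org/web/112.be/en' | grep -i '^location'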

Why posted here: archive.org has their own bug tracker, but if you create an account on archive.org they will arbitrarily delete the account without notice or reason. I am not going to create a new account every time there is a new archive.org bug to report.

27
 
 

The cross-post mechanism has a limitation whereby you cannot simply enter a precise community to post to. Users are forced to search and select. When searching for “android” on infosec.pub within the cross-post page, the list of possible communities is totally clusterfucked with shitty centralized Cloudflare instances (lemmy world, sh itjust works, lemm ee, programming dev, etc). The list of these junk instances is so long that !android@hilariouschaos.com does not make it onto the list.

The workaround is of course to just create a new post with the same contents. And that is what I will do.

There are multiple bugs here:
① First of all, when a list of communities is given in this context, the centralized instances should be listed last (at best) because they are antithetical to fedi philosophy.
② Subscribed communities should be listed first, at the top
③ Users should always be able to name a community in its full form, e.g.:

  • [!android@hilariouschaos.com](/c/android@hilariouschaos.com)
  • hilariouschaos.com/android

④ Users should be able to name just the instance (e.g. hilariouschaos.com) and the search should populate with subscribed communities therein.

28
 
 

Tedious to use. No way to import a list of URLs to download. Must enter files one by one by hand.

No control over when it downloads. Starts immediately when there is an internet connection. This can be costly for people on measured rate internet connections. Stop and Go buttons needed. And it should start in a stopped state.

When adding a new file to the list, the previous file shows a bogus “error” status.

Error messages are printed simply as “Error”. No information.

There is an embedded browser. What for?

Files that are already present in the download directory because another app put them there get listed by GigaGet at “100%”. How does GigaGet know those files are complete when it does not even have a URL for them (and thus no way to check the content-length)?

29
 
 

Navi is an app on F-Droid for managing downloads. It’s really tedious to use because there is no way to import a list of URLs. You either have to tap out each URL one at a time, or do a lot of copy-paste from a text file. Then it forces you to choose a filename for each download -- it does not default to the name of the source file.
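
As a stop-gap outside the app (e.g. in Termux or on a desktop), plain wget already does both of the missing things: it takes a URL list and keeps the server-supplied filenames. (urls.txt stands for whatever list you have on hand.)

$ wget --content-disposition -i urls.txt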

bug 1


For a lot of files it gives:

Error: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.

The /details/ page for the broken download neglects to give the error message, much less what the error means.

bug 2


Broken downloads are listed under a tab named “completed”.

bug 3


Every failed fetch generates notification clutter that cannot be cleaned up. I have a dozen or so notifications of failed downloads. Tapping the notification results in no action and the notification is never cleared.

bug 4


With autostart and auto connect both disabled, Navi takes the liberty of making download attempts as soon as there is an internet connection.

bug 5?


A web browser is apparently built-in. Does it make sense to embed a web browser inside a download manager?

30
 
 

Images can be fully embedded inline directly in the HTML. Tor Browser displays them unconditionally, regardless of the permissions.default.image setting, which if set to “2” indicates images should not be loaded.

An example is demonstrated by the privacy-respecting search service called “dogs”:

If you search for a specific object like “sweet peppers”, embedded images appear in the results. This feature could easily be abused by advertisers. I’m surprised that it’s currently relatively rare.

It’s perhaps impossible to prevent embedded images from being fetched, because they are part of the HTML document itself and the standard does not declare the length of the base64 blob ahead of it. Thus there is no way for the browser to know which position in the file to continue fetching from.
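
A rough way to see how much of a fetched page is spent on inline images, assuming they appear as ordinary src="data:image/..." attributes (a sketch; images referenced from CSS or split across attributes won’t be counted, and $URL is whatever page is being inspected):

$ curl -s --compressed "$URL" | grep -o 'src="data:image/[^"]*"' | wc -c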

Nonetheless, the browser does not know /why/ the user disables images. Some people do it because they are on measured rate connections and need to keep their consumption low, like myself, and we are fucked in this case. But some people disable images just to keep garbage off the screen. In that case, the browser can (and should) respect their choice whether the images are embedded or not.

There should really be two config booleans:

  • fetch non-local images
  • render images that have been obtained

The first controls whether the browser makes requests for images over the WAN. The second would just control whether the images are displayed.

31
 
 

I was trying to work out how I managed to waste so much of my bandwidth allowance in a short time. With a Lemmy profile page loaded, I hit control-r to refresh while looking at the bandwidth meter.

Over 1 meg! wtf. I have images disabled in my browser, so it should only be fetching a small amount of compressed text. For comparison, loading ~25 IRC channels with 200-line buffers is 0.1 MB.

So what’s going on? Is Lemmy transferring thumbnails even though images are disabled in the browser config?
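
For anyone who wants to reproduce the measurement outside the browser, curl’s write-out variables give a rough figure for a single page load. (The profile URL below is a placeholder, and an anonymous fetch may not return exactly what a logged-in browser session gets.)

$ curl -so /dev/null --compressed -w 'downloaded %{size_download} bytes\n' 'https://sopuli.xyz/u/example'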

32
 
 

I simply wanted to submit a bug report. This is so fucked up. The process so far:

① solved a CAPTCHA just to reach a reg. form (I have image loading disabled, but the graphical CAPTCHA puzzle displayed anyway; wtf Firefox?)
② disposable email address rejected (so Bitbucket can protect themselves from spam but other people cannot? #hypocrisy)
③ tried a forwarding acct instead of a disposable one (accepted)
④ another CAPTCHA, this time Google reCAPTCHA. I never solve these because they violate so many digital rights principles and I boycott Google, but I made an exception for this experiment. The puzzle was empty because I disable images (can’t afford the bandwidth). Exceptionally, I enabled images and solved the piece of shit. Could not work out whether a furry cylindrical blob sitting on a sofa was a “hat”, but managed to solve enough puzzles.
⑤ got the green checkmark ✓
⑥ clicked “sign up”
⑦ “We are having trouble verifying reCAPTCHA for this request. Please try again. If the problem persists, try another browser/device or reach out to Atlassian Support.”

Are you fucking kidding me?! Google probably profited from my CAPTCHA work before showing me the door. Should be illegal. Really folks, a backlash of some kind is needed. I have my vision and couldn’t get registered (from Tor). Imagine a blind Tor user.. or even a blind clearnet user going through this shit. I don’t think the first CAPTCHA to reach the form even had an audio option.

Shame on #Bitbucket!

⑧ attempted to e-mail the code author:

status=bounced (host $authors_own_mx_svr said: 550-host $my_ip is listed at combined.mail.abusix.zone (127.0.0.11); 550 see https://lookup.abusix.com/search?q=$my_ip (in reply to RCPT TO command))

#A11y #enshitification

33
1
submitted 5 months ago* (last edited 5 months ago) by coffeeClean@infosec.pub to c/bugs@sopuli.xyz
 
 

There used to be no problem archiving a Mastodon thread in the #internetArchive #waybackMachine. Now on recent threads it just shows a blank page:

https://web.archive.org/web/20240318210031/https://mastodon.social/@lrvick/112079059323905912

Or is it my browser? Does that page have content for others?

34
 
 

If you’re logged out and reading a thread, you should be able to log in in another tab and then do a forced refresh (control-shift-R), and it should then show the thread with logged-in controls. For some reason the cookie isn’t being passed, or (perhaps more likely) the cookie is insufficient because Lemmy uses some mechanism other than cookies.

Scenario 2:

You’re logged in and reading threads in multiple tabs. Then one tab spontaneously becomes logged out after you take some action. Sometimes a hard refresh (control-shift-R) recovers it, sometimes not. It’s unpredictable. But note that the logged-in state is preserved in the other tabs. So if several hard refreshes fail, I have to close the tab and use another tab to navigate back to where I was. And it seems the navigation matters.. if I just copy the URL for where I was (same as opening a new tab), it’s more likely to fail.

In any case, there are no absolutes.. the behavior is chaotic and could be related to this security bug.

35
 
 

People on a tight budget are limited to capped internet connections. So we disable images in our browser settings. Some environmentalists do the same to avoid energy waste. If we need to download a web-served file (image, PDF, or anything potentially large), we run this command:

$ curl -LI "$URL"

The HTTP headers should contain a content-length field. This enables us to know before we fetch something whether we can afford it. (Like seeing a price tag before buying something)
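
A sketch of a pre-flight check built on that idea, assuming the server actually sends a content-length header ($URL and the byte budget are placeholders):

budget=500000   # bytes we are willing to spend
size=$(curl -sLI "$URL" | tr -d '\r' | awk 'tolower($1)=="content-length:" {len=$2} END {print len}')
if [ -n "$size" ] && [ "$size" -le "$budget" ]; then
  curl -LO "$URL"        # affordable: fetch it, keeping the remote filename
else
  echo "no content-length or too big: ${size:-unknown} bytes" >&2
fi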

#Cloudflare has taken over at least ~20% of the web. It fucks us over in terms of digital rights in so many ways. And apparently it also makes the web less usable to poor people in two ways:

  • Cloudflare withholds content length information
  • Cloudflare blocks people behind CGNAT, which is commonly used in impoverished communities due to the limited number of IPv4 addresses.

36
 
 

The problem:

  1. !cashless_society@nano.garden is created
  2. node A users subscribe and post
  3. node B users subscribe and post
  4. nano.garden disappears forever
  5. users on node A and B have no idea; they carry on posting to their local mirror of cashless_society.
  6. node C never federated with nano.garden before it was unplugged

So there are actually 3 bugs AFAICT:

  ① Transparency: users on nodes A and B get no indication that they are interacting with a ghost community.
  ② Broken comms: posts to the ghost community from node A are never sync’d, thus never seen by node B users; and vice-versa.
  ③ Users on node C have no way to join the conversation because the search function only finds non-ghost communities.

The fix for ① is probably as simple as adding a field to the sidebar showing the timestamp of the last sync operation.

w.r.t. ②, presumably A and B do not connect directly for this community because each of them federated with the ghost node rather than with each other. So there is no way for node A posts to reach node B. Correct? Lemmy should be designed to accommodate a node disappearing at any time with no disruption to other nodes. Nodes A and B should synchronize directly.

w.r.t. ③, node C should still be able to join the conversation between A and B in the ghost community.

(original thread)

37
 
 

There are “announcement” communities where all posts are treated as announcements. This all-or-nothing blunt choice at the time of community creation could be more flexible. In principle, a community founder should have four choices:

  • all posts are announcements (only mods can post)
  • all posts are discussions
  • (new) all posts are announcements (anyone can post)
  • (new) authors choose at posting time whether their post is an announcement or a discussion

This would be particularly useful if an author cross-posts to multiple communities but prefers not to split the discussion, in which case the carbon copies could use the announcement option (or vice versa).

There is a side-effect here with pros and cons. This capability could be used for good by forcing a conversation to happen outside of a walled garden. E.g. if you post to a small free-world instance and then crosspost an “announcement” to a walled garden like sh.itjust.works, the whole discussion takes place in the more socially responsible venue with open access. OTOH, the same capability in reverse could also be used detrimentally, e.g. by forcing a discussion onto the big centralized platforms.

update


Perhaps the community creator should get a more granular specification. E.g. a community creator might want:

  • Original posts → author’s choice
  • Cross-posts coming from [sh.itjust.works, lemmy.world] → discussions only
  • Cross-posts coming from [*] → author’s choice

38
 
 

A moderator deleted one of my posts for being off topic. I received no notification. It’s mere chance that I realized my post was silently removed, at which point I checked the modlog, where a reason was given.

Users can filter sitewide modlogs on their own account to see the actions against them (great!) -- but there should also be a notification.

39
 
 

On an arbitrary gitea instance I opened the form to report a new bug. There was no way to tag the bug as a security bug, which should hide it from public view until the project maintainers decide to disclose it.

And ironically, gitea has a dog food problem. That’s right, they use MS Github themselves. Hence why this is reported here. Codeberg has (or had at one point) a repo where gitea bugs could be reported, but Codeberg deleted my account and now there are some hurdles for new registrations that caused me issues. So here we are. IIRC gitea also has a demo instance where bugs can be reported. If I get around to it I might track that down and report this bug there.

40
 
 

After sending a DM, my profile gives no access to it. I can see my posts and public comments, but not my DMs. Thus there is also no way to read or edit the DMs a Lemmy user has sent.

update


As @viking@infosec.pub points out, sent messages are accessible in the ALL tab. Once my DMs are rendered, indeed there is an option to edit them just like a public message. But presumably due to another bug, Lemmy recipients are not likely notified of edits (untested).

41
1
submitted 6 months ago* (last edited 6 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz
 
 

I needed to DM a security bug to @LemmyDev@mastodon.social, but the Lemmy UI gives no way to freely compose a DM and manually enter an address. Users are expected to find a hyperlinked user, click on it, and then click “send message”. The search functionality failed to find anything when I queried @LemmyDev@mastodon.social.

But the capability is there for advanced users who discover that they can open the profile of some external account and then mimic that URL format to manually reach the account they actually want.

#lemmyBug

42
 
 

I think I was refreshing my profile or notifications page (forget which). As it was loading for ~1—2 seconds my screen color theme changed and in the top right corner I saw someone else’s userID, then it quickly reverted back to my theme and userID.

As fast as it happened I only took mental note of the first half of the other userID, which happened to match that of the admin. I described the colors I saw in that 1—2 second timeframe to the admin who confirmed it was indeed the color theme they configured for their environment (which differs from the default).

I clearly had the admin’s session for a second or two. It was so quick that a malicious user probably could not have done anything with it. But of course, just as I have no idea how I apparently got the admin’s cookie for a second or two, I have no idea how I got my own cookie back. Maybe if I had quickly hit ESC mid-load, the access breach could have been sustained.

#lemmyBug


As usual, this bug report is posted here because the official bug tracker is jailed in MS Github. I should add that Microsoft supports those responsible for the death of Hind Rajab by financing AnyVision, which is good cause to boycott Microsoft.

43
 
 

This post was composed with a link to a Wired article:

https://lemmy.ohaa.xyz/post/1939209

Then in a separate step, the post was edited and an image was uploaded. The URL of the local image unexpectedly replaced the URL of the article. Luckily I noticed the problem before losing track of the article URL.

44
 
 

I’m very grateful that #AnonymousOverflow exists and was already in place to give us refuge when #Stackexchange et al returned to #Cloudflare’s jail. I use this search service because it automatically applies the SE→AO link replacement:

https://search.fabiomanganiello.com/search

A search led to this thread:

https://overflow.manganiello.tech/exchange/tex/questions/225027/how-to-create-new-font-which-is-thicker-version-of-computer-modern

The three links in the itemized list all point to Stackexchange, which puts the exclusion problem back in our face -- for those who are blocked by Cloudflare. Anonymous Overflow (AO) should eat its own #dogFood. Like the fabiomanganiello search service, AO should replace SE links with AO links within the SE pages it serves.

Yes, it may be a bit tricky because AO has a number of instances which go up and down. The onion ones are quite flaky. In principle, SE links should be replaced with links to the same instance the article is being viewed on.
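
For illustration only, the rewrite could be a simple pattern substitution, assuming the URL mapping visible above (site.stackexchange.com → instance/exchange/site) holds generally; stackoverflow.com and other non-*.stackexchange.com domains would need their own rules, and page.html stands in for whatever SE page AO is rendering:

$ sed -E 's#https://([a-z-]+)\.stackexchange\.com/questions/#https://overflow.manganiello.tech/exchange/\1/questions/#g' page.html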

This #bug is posted here because the bug tracker is exclusively on MS #Github:

https://github.com/httpjamesm/AnonymousOverflow

45
 
 

Normally it’s possible to import comments into the Sopuli timeline by querying on URLs of external comments that are not yet local. Thereafter, it’s possible to interact with imported msgs.

But when I linked to an external comment (https://jlai.lu/comment/5309447) in this thread, and then later queried the URL of the comment, the search stops upon finding my own local mention of that comment. If the search feature is going to stop upon finding local results, then there needs to be a “go deeper” button, or an “import” button to give a means to import a comment.

As a consequence of this bug, I cannot reply to https://jlai.lu/comment/5309447 from Sopuli.

Also notable: if the search category is narrowed from “ALL” to “URL”, nothing is returned.

#LemmyBug

46
 
 

I posted this thread on jlai.lu. I got no replies as far as I could see from sopuli -- no notifications, and when I enter that thread there are still zero replies. But when I visit the thread on the hosting instance, I see a reply. This behavior is the same as if I were blocking that community -- but I am not.

When I search in sopuli for the direct link to the comment, the search finds it. And then I was able to forcibly interact with the comment.

I have to wonder how often someone replies to me and I have no idea because the response is hidden from me. This is a serious bug. Wholly unacceptable for a platform designed specifically for communication.

update 1 (another occurrence)


Here’s another thread with the same issue: zero replies when I visit the thread’s mirror within sopuli, but 3 replies when visiting it directly. I was disappointed that that high-effort post got no replies; now, 2 months later, I see there actually were replies. I will search those comment URLs, perhaps in a couple of days, in order to interact. But I’ll hold off in case someone wants to investigate (because I think the act of searching those URLs copies the comments, which could interfere with the investigation).

update 2 (subscription relevancy)


I was asked if I am subscribed to the community. Good question! The answer is no, so there’s a clue. Perhaps mentions do not trigger notifications if no one on the instance of the mentioned account is subscribed to the community. This could be the root cause of the bug.

#LemmyBug

47
 
 

Kensanata’s mastodon-archive tool was originally working as expected to archive posts from eattherich.club. Then out of the blue one day it started printing this:

Loading existing archive: eattherich.club.user.bob.json
Get user info
Get new statuses
Seen 10 duplicates, stopping now.
Use --no-stopping to prevent this.
Added a total of 25 new items
Get new favourites
Seen 10 duplicates, stopping now.
Use --no-stopping to prevent this.
Added a total of 7 new items
Get bookmarks (this may take a while)
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/mastodon-archive/mastodon-archive.py", line 5, in <module>
    mastodon_archive.main()
  File "/usr/lib/python3/dist-packages/mastodon-archive/mastodon_archive/__init__.py", line 333, in main
    args.command(args)
  File "/usr/lib/python3/dist-packages/mastodon-archive/mastodon_archive/archive.py", line 148, in archive
    bookmarks = mastodon.bookmarks()
  File "<decorator-gen-59>", line 2, in bookmarks
  File "/usr/lib/python3/dist-packages/mastodon/Mastodon.py", line 96, in wrapper
    raise MastodonVersionError("Version check failed (Need version " + version + ")")
mastodon.Mastodon.MastodonVersionError: Version check failed (Need version 3.1.0)

The count of new items never resets to zero. It should go back to zero after every fetch, so this implies fetching no longer occurs (or at least it no longer finishes). The current version of eattherich.club is Mastodon v4.2.5; not sure whether a server version upgrade is related. Other Mastodon instances do not have this issue.

The bug tracker is on MS Github, thus out of reach for me:

https://github.com/kensanata/mastodon-archive/issues

48
 
 

The flagship instance for Matrix relies on Cloudflare, which was apparently deemed necessary to defend against DoS attacks. This CaaC (Cloudflare-as-a-Crutch) design has many pitfalls & problems, including but not limited to:

  • digital exclusion (Cloudflare is a walled garden that excludes some groups of people)
  • supports a privacy hostile tech giant
  • adds to growth and dominance of an oppressive force
  • exposes metadata to a privacy offender without the knowledge and consent of participants
  • reflects negatively on the competence, integrity, and digital rights values of Matrix creators
  • creates a needless dependency on a tech giant

#CaaC needs to be replaced with a #securityByDesign approach. Countermeasures need to be baked into the system, not bolted on. The protocol should support mechanisms such as:

  • rate limiting/tar pitting
  • proof-of-work with variable levels of work and a prioritization of traffic that’s proportional to the level of work, which can be enabled on demand and generally upon crossing a load threshold (see the sketch after this list).
  • security cookie tokens to prioritize traffic of trusted participants
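
To make the proof-of-work idea concrete, here is a minimal hashcash-style sketch in shell. The challenge string, the difficulty, and the “leading zero hex digits” rule are assumptions for illustration only, not anything the Matrix spec defines:

challenge="example-challenge"   # would be handed out by the server
difficulty=3                    # leading zero hex digits required; raise under load
target=$(printf '0%.0s' $(seq "$difficulty"))
nonce=0
while true; do
  prefix=$(printf '%s:%s' "$challenge" "$nonce" | sha256sum | cut -c1-"$difficulty")
  [ "$prefix" = "$target" ] && break
  nonce=$((nonce + 1))
done
echo "solved: nonce=$nonce"     # the server verifies with a single hash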

Sadly, #Matrix is aligned with another nefarious tech giant, and has jailed its project in Microsoft Github. And worse, they have a complex process for filing bugs/enhancements against the spec:

https://github.com/matrix-org/matrix-spec-proposals/blob/main/README.md

Hence why this bug report is posted here.

49
 
 

When the posts of a malicious user are deleted in bulk, each comment that is removed is replaced with “removed by mod”. This implies that the mod of that particular community removed that particular comment. The modlog of that community does not account for the removal, so there is no traceability, and no way for users to check whether they have an overly ambitious mod.

When a bulk removal is performed, the replaced comment should be more transparent. It should say “comment removed in bulk cleanup of malicious user’s content” and it should link to whatever modlog might capture it.

50
 
 

cross-posted from: https://slrpnk.net/post/6002564

I wanted to see the image size for this post before deciding to download the image. Normally curl -i returns a “content-length”, but not in this case:

$ curl -i 'https://slrpnk.net/pictrs/image/67127e8e-52ef-42ad-bf39-424b9052ef90.webp'
HTTP/2 200 
server: nginx
date: Mon, 22 Jan 2024 09:56:56 GMT
content-type: image/webp
last-modified: Sat, 20 Jan 2024 16:24:40 GMT
vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
access-control-expose-headers: last-modified, content-type, accept-ranges, date, cache-control, transfer-encoding
cache-control: public, max-age=604800, immutable
accept-ranges: bytes
strict-transport-security: max-age=63072000
referrer-policy: same-origin
x-content-type-options: nosniff
x-frame-options: DENY
x-xss-protection: 1; mode=block

Seems like a bug, no?
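
One partial workaround: since the response advertises accept-ranges: bytes, a one-byte ranged request may still reveal the total size in the content-range header (untested against this pictrs instance; the size in the example comment is made up):

$ curl -s -r 0-0 -o /dev/null -D - 'https://slrpnk.net/pictrs/image/67127e8e-52ef-42ad-bf39-424b9052ef90.webp' | grep -i '^content-range'
# e.g. “content-range: bytes 0-0/123456”, where the figure after the “/” is the full size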
