this post was submitted on 05 Sep 2024
40 points (100.0% liked)

Selfhosted


So, I'm self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene/thing to later pick the best one, so we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate-finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the file-system level, for these near-duplicate images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

[–] smpl@discuss.tchncs.de 2 points 2 months ago (7 children)

The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how that compares to the total size of the individual images.

[–] simplymath@lemmy.world 1 points 2 months ago (6 children)

Compressed length is already known to be a powerful metric for classification tasks, but it requires polynomial time to do the classification. As much as I hate to admit it, you're better off using a neural network, because they work in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.

a formal paper on using compression length as a measure of similarity: https://arxiv.org/pdf/cs/0111054

a blog post on this topic, applied to image classification:

https://jakobs.dev/solving-mnist-with-gzip/
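
A rough sketch of what that looks like in practice, in the spirit of the gzip/MNIST post (the file names and labels here are hypothetical):

```python
# Sketch: nearest-neighbour classification by normalized compression distance (NCD).
# File names and labels are hypothetical placeholders.
import gzip
from pathlib import Path

def clen(data: bytes) -> int:
    """Length of the gzip-compressed bytes."""
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller means more similar."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical labeled examples (path -> label).
labeled = {
    "beach_1.png": "beach",
    "beach_2.png": "beach",
    "cat_1.png": "cat",
}

def classify(query_path: str) -> str:
    """Assign the label of the nearest labeled example under NCD."""
    query = Path(query_path).read_bytes()
    nearest = min(labeled, key=lambda p: ncd(query, Path(p).read_bytes()))
    return labeled[nearest]

print(classify("beach_3.png"))
```

Note that every query has to be re-compressed against every labeled example, which is where the cost blows up, and that running this directly on already-compressed formats like JPEG gains little; the MNIST example works on raw pixel data rather than encoded files for that reason.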

[–] smpl@discuss.tchncs.de 1 points 2 months ago (5 children)

I was not talking about classification. What I was talking about was a simple probe of how well a collage of similar images compares in compressed size to the same images compressed individually. The hypothesis is that a compression codec would compress images with a similar color distribution better in a spritesheet than if it encoded each image individually. I don't know, the savings might be negligible, but I'd assume there is something to gain, at least for some compression codecs. I doubt doing deduplication after compression has much to gain.

I think you're overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)
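
For what it's worth, a quick way to run that probe with Pillow might look like this (file names are hypothetical, and it assumes the shots share the same resolution):

```python
# Sketch: compare compressed size of individual images vs. one spritesheet.
# Assumes a few near-duplicate photos of the same size; file names are hypothetical.
import io
from PIL import Image

paths = ["shot_1.jpg", "shot_2.jpg", "shot_3.jpg"]
images = [Image.open(p).convert("RGB") for p in paths]

def encoded_size(img: Image.Image, fmt: str = "WEBP", quality: int = 90) -> int:
    """Byte size of the image re-encoded with the given codec."""
    buf = io.BytesIO()
    img.save(buf, format=fmt, quality=quality)
    return buf.tell()

# Size of each image encoded on its own.
individual = sum(encoded_size(img) for img in images)

# Size of the same images laid out side by side in a single sheet.
w, h = images[0].size
sheet = Image.new("RGB", (w * len(images), h))
for i, img in enumerate(images):
    sheet.paste(img, (i * w, 0))
collage = encoded_size(sheet)

print(f"individual: {individual} bytes, collage: {collage} bytes")
```

The savings may well be negligible, as you say, since most still-image codecs only exploit redundancy within a small spatial neighbourhood rather than across tiles of a sheet.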

[–] simplymath@lemmy.world 0 points 2 months ago* (last edited 2 months ago)

Yeah. I understand. But first you have to cluster your images so you know which ones are similar, and only then can you do the deduplication. Compression-based similarity would be a powerful way to do that clustering; it's just expensive compared to other clustering algorithms.

My point in linking the paper is that "the probe" you suggested is a 20-year-old metric that is well understood. Using normalized compression distance as a measure of Kolmogorov complexity is what the linked paper is about. You don't need to spend time showing that similar images compress better together than dissimilar ones; the compression length is itself a measure of similarity.
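
(For reference, the normalized compression distance is usually written as NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length and xy is the concatenation, so compressing similar images together and comparing that against compressing them separately is exactly what the metric formalizes.)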
