simplymath

joined 7 months ago
[–] simplymath@lemmy.world 38 points 4 days ago

COVID research made genetic sequencing for viruses and bacteria incredibly cheap. You can run a PCR test for most things now for $10 (USD) or less. This opens up a whole world of highly specific diagnostics and cheap, hyper-personalized treatments.

Also, mRNA vaccines are being tested for several other diseases, and the results seem very promising.

[–] simplymath@lemmy.world 1 points 5 days ago

and my point was explaining that that work has likely been done, because the paper I linked is 20 years old and discusses the deep connection between "similarity" and "compresses well". I bet if you read the paper, you'd see exactly why I chose to share it-- particularly the equations that define NID and NCD.
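For reference, my best recollection of those two definitions (check the paper for the exact notation), where K is Kolmogorov complexity and C is the length of a real compressor's output:

```latex
% Normalized Information Distance (uncomputable; defined via Kolmogorov complexity)
\mathrm{NID}(x, y) = \frac{\max\{K(x \mid y),\; K(y \mid x)\}}{\max\{K(x),\; K(y)\}}

% Normalized Compression Distance (its computable approximation via a compressor C)
\mathrm{NCD}(x, y) = \frac{C(xy) - \min\{C(x),\; C(y)\}}{\max\{C(x),\; C(y)\}}
```

Both run from roughly 0 (effectively identical) to 1 (maximally dissimilar).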

The difference between "seeing how well similar images compress" and figuring out "which of these images are similar" is the quantized classification step, which is trivial compared to computing the distance from every sample to every other sample. My point was that this distance measure (using compressors to measure similarity) has been published for at least 20 years, and that you should probably google "normalized compression distance" before spending any time implementing stuff, since it has very much been done before.

[–] simplymath@lemmy.world 2 points 6 days ago* (last edited 5 days ago)

I think there's probably a difference between an intro to computer science course and the PhD-level papers that discuss the ability of machines to learn and decide, but my experience in this is limited to my PhD in the topic.

And, no, textbooks are often not peer reviewed in the same way, and they're generally written by graduate students. They have mistakes in them all the time. Or grand statements taken out of context. Or simplified explanations, because introducing the nuances of PAC-learnability to somebody who doesn't understand a "for" loop is probably not very productive.

I came here to share some interesting material from my PhD research topic and you're calling me an asshole. It sounds like you did not have a wonderful day and I'm sorry for that.

Did you try learning about how computers learn things and make decisions? It's pretty neat.

[–] simplymath@lemmy.world 1 points 6 days ago* (last edited 5 days ago) (2 children)

You seem very upset, so I hate to inform you that neither one of those is a peer-reviewed source and that they are simplifying things.

"Learning" is definitely something a machine can do and then they can use that experience to coordinate actions based on data that is inaccesible to the programmer. If that's not "making a decision", then we aren't speaking the same language. Call it what you want and argue with the entire published field or AI, I guess. That's certainly an option, but generally I find it useful for words to mean things without getting too pedantic.

[–] simplymath@lemmy.world 0 points 6 days ago* (last edited 6 days ago)

Yeah, I understand. But first you have to cluster your images so you know which ones are similar, and then you can do the deduplication. This would be a powerful way to do that; it's just expensive compared to other clustering algorithms.

My point in linking the paper is that "the probe" you suggested is a 20-year-old metric that is well understood. Using normalized compression distance as a practical stand-in for Kolmogorov complexity is exactly what the linked paper is about. You don't need to spend time showing that similar images compress better than dissimilar ones; the compression length is itself a measure of similarity.
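A rough sketch of that pipeline, assuming gzip as the compressor, made-up file names, and a guessed 0.5 cutoff (for already-compressed formats like JPEG you'd decode to raw pixels first):

```python
import gzip
from itertools import combinations
from pathlib import Path

from scipy.cluster.hierarchy import fcluster, linkage

def ncd(x: bytes, y: bytes) -> float:
    # normalized compression distance, with gzip standing in for the compressor C
    cx, cy = len(gzip.compress(x)), len(gzip.compress(y))
    return (len(gzip.compress(x + y)) - min(cx, cy)) / max(cx, cy)

# hypothetical image files -- decode to raw pixels for real use
paths = ["a.png", "b.png", "c.png", "d.png"]
images = [Path(p).read_bytes() for p in paths]

# the expensive part: O(n^2) pairwise distances, one compression call per pair
dists = [ncd(x, y) for x, y in combinations(images, 2)]

# single-linkage clustering on the condensed distance matrix;
# images that share a label are candidates for deduplication
labels = fcluster(linkage(dists, method="single"), t=0.5, criterion="distance")
print(dict(zip(paths, labels)))
```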

[–] simplymath@lemmy.world 2 points 6 days ago* (last edited 6 days ago) (2 children)

Yeah, that's what an MP4 does with inter-frame compression, but I was just saying that first you have to figure out which images are "close enough" to encode this way.

[–] simplymath@lemmy.world 1 points 6 days ago (4 children)

Then it should be easy to find peer-reviewed sources that support that claim.

It was incredibly easy to find countless articles suggesting that your Boolean is false. Weird hill to die on. Have a good day.

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=computer+decision+fairness&oq=computer+decison

[–] simplymath@lemmy.world 1 points 6 days ago (6 children)

Agree to disagree. Something makes a decision about how to classify the images, and it's certainly not the person writing 10 lines of code. I'd be interested in having a good-faith discussion, but repeating a personal opinion isn't really that. I suspect this is more of a metaphysics argument than anything, and I don't really care to spend more time on it.

I hope you have a wonderful day, even if we disagree.

[–] simplymath@lemmy.world 1 points 6 days ago (8 children)

Computers make decisions all the time. For example: how to route my packets from my instance to your instance. Classification functions are well understood in computer science in general and, while stochastic, can be constructed to be arbitrarily precise.

https://en.wikipedia.org/wiki/Probably_approximately_correct_learning?wprov=sfla1
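Roughly, the PAC guarantee (my paraphrase of the standard definition, not a quote from the article): for any error tolerance ε and confidence δ you choose, a PAC learner trained on polynomially many samples m outputs a hypothesis h satisfying

```latex
\Pr_{S \sim D^m}\left[\, \mathrm{err}_D(h_S) \le \varepsilon \,\right] \ge 1 - \delta,
\qquad m \ge \mathrm{poly}(1/\varepsilon,\, 1/\delta)
```

which is what I mean by "arbitrarily precise": you can push ε and δ as low as you like by paying for more samples.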

Human facial detection has been at 99% accuracy since the 90s, and OP's task is likely a lot easier, since we can exploit time and location proximity data and know in advance that 10 pictures taken of Alice or Bob at one single party are probably a lot less variable than 10 pictures taken in different contexts over many years.

What OP is asking to do isn't at all impossible-- I'm just not sure you'll save any money on power and GPU time compared to buying another HDD.

[–] simplymath@lemmy.world 0 points 6 days ago

Definitely PhD.

It's very much an ongoing and underexplored area of the field.

One of the biggest machine learning conferences is actually hosting a workshop on the relationship between compression and machine learning (because it's very deep). https://neurips.cc/virtual/2024/workshop/84753

[–] simplymath@lemmy.world 1 points 6 days ago (6 children)

Compressed length is already known to be a powerful metric for classification tasks, but the classification step takes polynomial time. As much as I hate to admit it, you're better off using a neural network, since those run in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.

A formal paper on using compression length as a measure of similarity: https://arxiv.org/pdf/cs/0111054

A blog post applying the same idea to image classification: https://jakobs.dev/solving-mnist-with-gzip/
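As I understand it, the blog post's approach is basically NCD plus k-nearest-neighbors. A minimal sketch (the `classify` helper, the training-set format, and k=5 are my placeholders, not the post's exact code):

```python
import gzip
from collections import Counter

def ncd(x: bytes, y: bytes) -> float:
    # normalized compression distance, with gzip standing in for the compressor
    cx, cy = len(gzip.compress(x)), len(gzip.compress(y))
    return (len(gzip.compress(x + y)) - min(cx, cy)) / max(cx, cy)

def classify(sample: bytes, train: list[tuple[bytes, int]], k: int = 5) -> int:
    # rank the training set by distance to the sample, then majority-vote the top k
    nearest = sorted(train, key=lambda item: ncd(sample, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Note the cost: every prediction compresses the sample against the entire training set, which is exactly the polynomial blowup I mentioned.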

[–] simplymath@lemmy.world 1 points 1 week ago

By no means the best option, but the TikZ LaTeX package works, and pandoc can handle the conversion to your preferred format. I would limit this to very simple diagrams.
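A minimal standalone sketch (the standalone document class and the two-node layout are just one way to set it up):

```latex
% diagram.tex -- a tiny TikZ example; compile with pdflatex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % two labeled boxes connected by an arrow
  \node[draw] (a) at (0,0) {input};
  \node[draw] (b) at (3,0) {output};
  \draw[->] (a) -- (b);
\end{tikzpicture}
\end{document}
```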

 
1 point | submitted 4 months ago* (last edited 4 months ago) by simplymath@lemmy.world to c/backpacking@lemmy.ml
 

Scandinavia often has these three-walled cabins available on a first-come, first-served basis. In Swedish, they're called vindskydd, literally "wind shelter". This particular one is northeast of Umeå, Sweden. No guarantees on what they're called elsewhere, but I have seen them in Finland as well, and I have heard of, but not seen, them in Norway. In general, the freedom to roam is quite strong in all three countries as long as you are respectful and stay out of obviously private spaces like personal gardens or farm fields. Happy travels!

 

https://timesofmalta.com/article/camping-on-comino-these-are-the-rules-for-the-tal-ful-camping-site.961601

Camping opportunities are relatively rare in Europe, but this island in Malta (Comino) has cheap camping. On the other inhabited islands (Malta and Gozo), public transit, restrooms, and wifi are plentiful, and local food is extremely cheap. You can get a local ftira for a couple of euros, or pastizzi filled with peas or cheese for even less. With an ultralight pack, all of Gozo is walkable, though the island of Malta is split by a largely impassable highway. I'd recommend the bus for €2.50.

 

near Mixta Cave
