[–] match@pawb.social 2 points 3 months ago (2 children)

TensorFlow has some libraries that help visualize the "explanation" for why its models classify something the way they do (in the tutorial example, a fireboat), in the form of highlights over the most salient parts of the input data:

Image of a firefighting boat, then a purple lattice overlaying the boat.
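The TensorFlow tutorial being referenced uses Integrated Gradients; here is a minimal sketch of the simpler plain-gradient version of the same idea, just to show where those saliency highlights come from. The model choice (MobileNetV2) and the preprocessing details are assumptions for illustration, not the tutorial's actual code.

```python
# Sketch of gradient-based saliency: how much each input pixel affects the
# score of one class. The official TF tutorial does the fancier Integrated
# Gradients version on a fireboat photo; this is just the core idea.
import tensorflow as tf

# Assumption: any pretrained ImageNet classifier works here.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def saliency_map(image_batch, class_index):
    """Gradient of the class score with respect to the input pixels."""
    images = tf.convert_to_tensor(image_batch, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)              # inputs aren't variables, so watch them
        preds = model(images)
        score = preds[:, class_index]   # score of the class we want to explain
    grads = tape.gradient(score, images)
    # Collapse the colour channels; large values mark the most salient pixels.
    return tf.reduce_max(tf.abs(grads), axis=-1).numpy()

# Usage sketch: pass a (1, 224, 224, 3) image preprocessed with
# tf.keras.applications.mobilenet_v2.preprocess_input, then overlay the
# result on the original photo (e.g. matplotlib imshow with alpha=0.5)
# to get the kind of highlight lattice shown above.
```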

Neural networks are not intractable; we just haven't built the libraries for understanding and explaining them yet.

[–] AnarchistArtificer@slrpnk.net 5 points 3 months ago (1 children)

I'd have to check my bookmarks when I get home for a link, but I recently read a paper related to this that floored me. It was research on visualisation of AI models, and it involved subject matter experts using an AI model as a tool in their field. Some of the conclusions the model made were wrong, and the goal of the study was to see how good various ways of visualising the model were, the logic being that better visualisations would make it easier for subject matter experts to spot flaws in the model's conclusions instead of having to blindly trust it.

What they actually found was that the visualisations made the experts less likely to catch errors made by the models. This surprised the researchers and forced them to re-evaluate their entire research goal. On reflection, they concluded that the better the model appeared to explain itself through interactive visualisations, the more likely the experts were to blindly trust it.

I found this fascinating because my field is currently biochemistry, but I'm doing more bioinformatics and data infrastructure stuff as time goes on, and I feel like my research direction is leading me towards the explainable/interpretable AI sphere. I broadly agree with your last sentence, but what I find cool is that some of the "libraries" we have yet to build are of the human variety, i.e. humans figuring out how to understand and use AI tools. It's why I enjoy coming at AI from the science angle: many scientists already use machine learning tools without any care for or understanding of how they work (and have done for years), whereas a lot of stuff branded as AI nowadays seems like a solution in search of a problem.

[–] match@pawb.social 2 points 3 months ago (1 children)

Please let us know if you find the article; it sounds fascinating!!

[–] AnarchistArtificer@slrpnk.net 3 points 3 months ago (1 children)

I got you.

Link to a blog post by the paper's author that discusses the paper (it has many links to other interesting stuff): https://scatter.wordpress.com/2022/02/16/guest-post-black-boxes-and-wishful-intelligibility/ I was skeptical of it when I first found it, given that the one-line TL;DR of the paper is "black-boxing is good, actually", but it thoroughly challenged my beliefs.

Link to a SciDB version of the academic paper (SciHub is dead, long live SciDB): https://annas-archive.gs/scidb/10.1086/715222

(DiMarco M. Wishful Intelligibility, Black Boxes, and Epidemiological Explanation. Philosophy of Science. 2021;88(5):824-834. doi:10.1086/715222)

[–] match@pawb.social 2 points 3 months ago
[–] 0ops@lemm.ee 2 points 3 months ago

Wow that is sick