this post was submitted on 02 Aug 2024
1455 points (98.3% liked)

Science Memes

[–] yesman@lemmy.world 59 points 1 month ago (21 children)

The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI's methods are bullshit. Under no circumstance should we accept a "black box" explanation.

[–] CheesyFox@lemmy.sdf.org 23 points 1 month ago (2 children)

Good luck reverse-engineering millions, if not billions, of seemingly random floating-point numbers. It's like visualizing a graph in your head by reading an array of numbers, except here the graph has as many dimensions as the network has inputs: one per pixel of the input image.

Under no circumstance should we accept a "black box" explanation.

Go learn at least the basic principles of neural networks, because that sentence alone makes me want to slap you.
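For a sense of the scale being described here, a back-of-envelope sketch (the layer sizes below are illustrative assumptions, not any model from the thread): even a small fully-connected network over one modest image already carries over a hundred million weights.

```python
# Parameter count of a hypothetical fully-connected network
# on a 224x224 RGB image. Layer sizes are made up for illustration.
def mlp_param_count(layer_sizes):
    """Total weights + biases across consecutive fully-connected layers."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

input_dim = 224 * 224 * 3           # one input per pixel/channel
sizes = [input_dim, 1024, 256, 10]  # hypothetical hidden layers + 10 classes
print(mlp_param_count(sizes))       # about 154 million parameters
```

And that is a toy; modern large models really do reach into the billions.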

[–] thecodeboss@lemmy.world 13 points 1 month ago

Don't worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

[–] petrol_sniff_king@lemmy.blahaj.zone 4 points 1 month ago (2 children)

Hey look, this took me like 5 minutes to find.

Censius guide to AI interpretability tools

Here's a good thing to wonder: if you don't know how your black-box model works, how do you know it isn't racist?

Here's what looks like a university paper on interpretability tools:

As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn't get you in trouble with the EU.

Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.

Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do (who am I to take away mankind's finer pleasures), but this attitude of yours is profoundly stupid. It's weak. You don't want to know? It doesn't make you curious? Why are you comfortable not knowing things? That's not how science is propelled forward.
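For the curious, the simplest family of those tools is roughly this (a minimal occlusion-sensitivity sketch in plain Python; the tiny "model" and 3x3 "image" are hypothetical stand-ins, not any real network): mask each part of the input, re-run the model, and see how much the score moves.

```python
# Occlusion sensitivity: a basic model-agnostic interpretability probe.
# The "model" and the flattened 3x3 "image" below are made up for illustration.
def toy_model(image):
    """Pretend classifier: scores an image as a weighted pixel sum."""
    weights = [0.0, 0.0, 0.0,
               0.0, 1.0, 0.5,
               0.0, 0.5, 1.0]
    return sum(w * p for w, p in zip(weights, image))

def occlusion_map(model, image, baseline=0.0):
    """Score drop when each pixel is masked; a bigger drop means the
    model leaned on that pixel more."""
    original = model(image)
    drops = []
    for i in range(len(image)):
        occluded = list(image)
        occluded[i] = baseline
        drops.append(original - model(occluded))
    return drops

img = [1.0] * 9
print(occlusion_map(toy_model, img))
# Only the pixels the toy model actually weights show a nonzero drop.
```

You never look at the weights themselves; you just probe the black box from outside, which is exactly why this class of technique scales to models you cannot read by hand.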

[–] Tja@programming.dev 5 points 1 month ago (1 children)

"Enough" is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn't racist.

[–] match@pawb.social 3 points 1 month ago

interpretability costs money though :v
