What’s Really Going On in Machine Learning? Some Minimal Models | Stephen Wolfram | August 22, 2024
(writings.stephenwolfram.com)
I don't know a lot about AI or machine learning, so take what I say with a grain of salt. I do know a lot about computers, though. I'm just spitballing here.
This is kind of the reason why I think this "AI" hype is a joke. I get the idea behind it, but a computer is only as smart as its user, or in this case the data it soaks up. And as advanced as they are, they're mostly still just a novelty save for very specific purposes.

The whole idea of a black box in machine learning is inefficient and wasteful. The fact that we have no idea how these AIs achieve their output is a big problem and a huge waste of resources. In a basic sense: if you put 2+2 into a calculator, it will give an output of 4. If you put 2+2−(3×9−18)+9 into a calculator, it will also give an output of 4. If all you see is the result, you have no idea how much processing power was wasted on unnecessary steps. As long as we keep shoving information into these things without thinking about what we put into them, they will only get more wasteful with unnecessary data. I know they add certain parameters and weights to mitigate things like this, but there's no way in hell they've accounted for even 1% of what would be needed.
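To make the calculator point concrete, here's a toy sketch (my own illustration, not anything from the article): a wrapper class, which I'm calling `CountingInt`, that counts arithmetic operations, showing two expressions that return the identical answer while doing very different amounts of hidden work. If you only see the result, the difference is invisible.

```python
class CountingInt:
    """Wraps an int and counts every arithmetic operation performed on it."""
    ops = 0  # class-wide operation counter

    def __init__(self, v):
        self.v = v

    def _bin(self, other, f):
        CountingInt.ops += 1  # tally one operation
        o = other.v if isinstance(other, CountingInt) else other
        return CountingInt(f(self.v, o))

    def __add__(self, o): return self._bin(o, lambda a, b: a + b)
    def __sub__(self, o): return self._bin(o, lambda a, b: a - b)
    def __mul__(self, o): return self._bin(o, lambda a, b: a * b)


# The simple route: one addition.
CountingInt.ops = 0
r1 = CountingInt(2) + CountingInt(2)
simple_ops = CountingInt.ops

# The detour: same answer, five operations.
CountingInt.ops = 0
r2 = (CountingInt(2) + CountingInt(2)
      - (CountingInt(3) * CountingInt(9) - CountingInt(18))
      + CountingInt(9))
complex_ops = CountingInt.ops

assert r1.v == r2.v == 4
assert complex_ops > simple_ops  # identical output, more hidden work
```

Both results print as 4; only the operation counter reveals that one path did five times the work of the other.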
Don't get me wrong, I understand the practicality of using machine learning. I just think the way we are building it from the ground up is too simple for what we are trying to achieve at this point. I honestly think we are reaching a plateau with this kind of machine learning. We need more parameterization if we want it to get better.
Yeah...
I mean, I literally warned you before you read it. Maybe instead of being passive-aggressive you could help educate or correct people on the topic instead of treating them like idiots. I'm more than open to criticism on this topic, and I'm just learning as I go.
Yeah fair enough. That was a bit mean, sorry.
I think the main issue is that ML isn't useless just because we don't understand how it works. It clearly does work and we can use that. It's hardly unique in that way either. There are a gazillion medicines that work but we don't really know how. We're not going to abandon them just because we don't understand them.
And it's not like people aren't trying to understand how they work; it's just really difficult.
The calculator analogy also makes no sense. You can't build a working speech recognition engine by manually entering equations for phonemes or whatever. That's actually not a million miles away from how speech recognition worked in the 90s and 2000s... or I should say "didn't work".
Dude, we all saw your anti-woke meltdown. Nobody is taking what you say seriously.
You mean these totally reasonable comments that almost everyone upvoted? lol ok.
Ah yeah that must be why everyone except you upvoted my comment... Come on dude. This isn't even on topic.
Feel free to say anything on-topic. Right now you're in Reddit mode: nothing to contribute, but really eager to put words into the box. This is a Wolfram article; you could be on-topic with as little as "lol wolfram sux".
Err... Why are you criticising me for going off-topic when it was literally you that did it? Weird.