thickertoofan

joined 1 month ago
[–] thickertoofan@lemm.ee 4 points 2 weeks ago* (last edited 2 weeks ago)

I'm not the smartest out there to explain it, but it's like... instead of floating-point numbers as the weights, it's just -1, 0, and 1.
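To make that a bit more concrete, here's a rough numpy sketch of the idea (absmean-style ternary quantization; the actual BitNet recipe trains with this in the loop, so this is only an illustration, not their code):

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Quantize float weights to {-1, 0, +1} plus a single scale factor.

    Illustrative only: roughly the absmean scheme described for BitNet b1.58;
    the real model is trained quantization-aware, not converted after the fact.
    """
    scale = np.abs(w).mean() + 1e-8              # per-tensor scale
    w_q = np.clip(np.round(w / scale), -1, 1)    # entries in {-1, 0, 1}
    return w_q.astype(np.int8), scale

w = np.random.randn(4, 4).astype(np.float32)
w_q, s = ternary_quantize(w)
print(w_q)        # only -1, 0, 1
print(w_q * s)    # coarse reconstruction of the original weights
```

With ternary weights, matrix multiplies mostly reduce to additions and subtractions, which is why CPU inference gets cheap.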

[–] thickertoofan@lemm.ee 3 points 2 weeks ago

It was; it's just that they've now officially released a 2B model trained for the BitNet architecture.

 

Let's go! Lossless CPU inference

[–] thickertoofan@lemm.ee 1 points 2 weeks ago (1 children)

I've worked on this topic a lot: I did it once last year, and this year's work is the update above. I also just pushed a major update to the website for a cool thing: https://dcda-v2.vercel.app/ please check it out again!

The thing is, I really don't have the motivation to work on this, because it requires a large community effort to gather a meaningful amount of data, and also, from an ML perspective, is it worth the effort? You have to take in the complexity of the Hindi language itself: suppose I train the model to include the maatras, would the model still be able to identify two characters side by side, conjoined by the line, with the maatras? If someone convinces me that this kind of dataset would have VERY much value as a contribution to the digitization of the language and its ecosystem, and that it would prove extremely useful for future researchers, then sure, I'm down to work on it.

The implementation I'm thinking of is really easy to build, and we would not have to sit for hours writing samples on our own. We can distribute the task to the crowd: my idea of data collection is getting people in person to write a few letters on a piece of paper and using CV to crop them out from the marked rectangles (see the rough sketch below). I'm dumbing down the explanation, but yeah, it would require CV and markers. I could even collect data from the web app itself, but not many people would chip in. I'm not exceptionally famous and don't have a huge following where I could get thousands of inputs in a few days/weeks/months. With the network I have, it would maybe take years to get a meaningful variety of data, and I'm talking about the base characters without maatras.

Sorry for the large rant, but yeah, I'm really not motivated to work on this right now, though I do have the idea/plan. I'd love to hand the torch to a newcomer or an ML enthusiast who's more into it than I am at the moment.
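For whoever picks this up, here's a minimal sketch of the cropping step I mean, assuming each handwriting sample sits inside a printed rectangle with a clearly dark border. The thresholds, margins and sheet layout are all assumptions, not a finished pipeline:

```python
import cv2

def crop_marked_cells(sheet_path: str, min_area: int = 2000):
    """Find dark rectangular boxes on a scanned form and crop their contents.

    Assumes each sample sits inside a printed rectangle with a darker border;
    thresholds and areas are guesses and would need tuning per sheet design.
    """
    img = cv2.imread(sheet_path, cv2.IMREAD_GRAYSCALE)
    # Invert + threshold so the printed box borders become white contours.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    crops = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:          # skip stray marks / noise
            continue
        margin = 4                    # trim the printed border itself
        crops.append(img[y + margin:y + h - margin, x + margin:x + w - margin])
    return crops

# Usage: one scanned sheet in, a list of per-character images out
# samples = crop_marked_cells("sheet_01.png")
```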

[–] thickertoofan@lemm.ee 1 points 2 weeks ago (3 children)

Thanks a lot! I think not only the joint letters but also the diacritics are so diverse, and it's a shame that we don't have any dataset covering this language and its diacritic combinations. Honestly, the possibilities are practically infinite, and I don't know how we can generalize a model for this. It's surely possible, but I'm not that experienced in ML, so I'd really like to hear ideas on this. Talking about the dataset: I think I'm going to do something about a diacritics-included dataset in the future. I have plans but not the time to execute them fully, and so far the response and impact have been quite limited.

 

cross-posted from: https://lemm.ee/post/61282397

Open-sourcing this project I made in just a weekend. I'm planning to continue it in my free time, with synthetic data generation and some more modifications; anyone is welcome to chip in, I'm not an expert in ML. The inference is live here using tensorflow.js. The model is just 1.92 megabytes!
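For a sense of scale, a model in that size range is roughly what a tiny Keras CNN comes out to. A hypothetical sketch, where the architecture and class count are assumptions rather than the actual repo code:

```python
import tensorflow as tf

# Hypothetical tiny CNN for single-character classification; the real
# project's architecture and class count may differ.
NUM_CLASSES = 46  # e.g. base Devanagari characters (assumption)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # a network this size stays well under a couple of MB
```

After training, the `tensorflowjs_converter` tool from the tensorflowjs package can export a Keras model so the browser runs inference directly.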

 


[–] thickertoofan@lemm.ee 4 points 4 weeks ago

Nice to know. Thanks.

[–] thickertoofan@lemm.ee 2 points 1 month ago

Same, I have an HDD from 2012 that has my childhood memories on it. The first thing I'm gonna do when I start earning is get it repaired by a reputable service.

[–] thickertoofan@lemm.ee 4 points 1 month ago (4 children)

Ooof. 700 MB discs

[–] thickertoofan@lemm.ee 12 points 1 month ago

Everything was. Is ...

 

cross-posted from: https://lemm.ee/post/59714239

Some custom filter kernel to average out values from a chunk of pixels with some kind of "border aware" behaviour?
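If it's useful, one simple way to get that "border aware" averaging is a masked mean that only averages neighbours whose intensity is close to the centre pixel, so edges are excluded. A rough brute-force sketch; the window size and threshold are arbitrary assumptions, and cv2.bilateralFilter is the usual off-the-shelf version of the same idea:

```python
import numpy as np

def border_aware_mean(img: np.ndarray, radius: int = 2, thresh: float = 30.0):
    """Average each pixel with neighbours whose intensity is close to its own.

    Pixels across a strong edge (difference > thresh) are excluded from the
    mean, so edges stay sharp while flat regions get smoothed. Brute-force
    and slow; a bilateral filter is the usual fast, library-provided variant.
    """
    img = img.astype(np.float32)
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = img[y0:y1, x0:x1]
            mask = np.abs(window - img[y, x]) <= thresh  # same side of the border
            out[y, x] = window[mask].mean()
    return out.astype(np.uint8)
```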

 


 

something like docker run xyz_org/xyz_model

[–] thickertoofan@lemm.ee 2 points 1 month ago

I think the bigger bottleneck is SLAM; running that is intensive, and it won't run directly on video. SLAM is tough, I guess, and reading the repo doesn't give any clues about it being able to run with CPU inference.

 

I don't care a lot about mathematical tasks; code intelligence is a minor preference, but the thing I care about most is overall comprehension and intelligence (for RAG and large-context handling). Anyway, what I'm searching for is any benchmark that covers a wide variety of models and is kept updated.

 

I tested this (a Reddit link, by the way) on the Gemma 3 1B parameter and 3B parameter models. 1B failed (not surprising), but 3B passed, which is genuinely surprising. I added a random paragraph about Napoleon Bonaparte (just a random figure) and put "My password is = xxx" in the middle of it. Gemma 1B couldn't even spot it, but Gemma 3B found it without being asked. There's a catch, though: Gemma 3 treated the password statement as a historical fact about Napoleon, lol. Anyway, passing is a genuinely nice achievement for a 3B model, I guess, and it was a single, moderately large paragraph for the test. I accidentally wiped the chat, otherwise I would have attached the exact prompt here. Tested locally using Ollama and the PageAssist UI. My setup: GPU-poor category, CPU inference with 16 GB of RAM.
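If anyone wants to rerun something like it, here's a rough reconstruction using the ollama Python client. The filler text, the "password" line, and the model tag are placeholders, since I lost the original prompt:

```python
import ollama

# Hypothetical reconstruction of the test: bury a "needle" inside filler text
# and see whether the model surfaces it. Not the exact prompt from the post.
filler = (
    "Napoleon Bonaparte rose to prominence during the French Revolution and "
    "led several successful campaigns across Europe. "
)
needle = "My password is = hunter2. "
haystack = filler * 3 + needle + filler * 3

response = ollama.chat(
    model="gemma3:1b",  # swap the tag for a larger variant to compare sizes
    messages=[{
        "role": "user",
        "content": haystack + "\n\nWhat is the password mentioned above?",
    }],
)
print(response["message"]["content"])
```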

 

I see this error when I'm trying to upload an icon image for a community I've recently created:

{"data":{"error":"pictrs_response_error","message":"Your account is too new to upload images"},"state":"success"}

I suppose, if the state of the upload was "success", and assuming the API output is correct, that the image was either actually uploaded or got denied after upload.
If that's what happens, there's room for an improvement: the permission check should run before the image upload happens. That way we save bandwidth (it's negligible here, but I don't know whether the same thing happens in other places, like image posts) and prevent useless uploads.
And if the upload is in fact rejected up front, then the API has a bug of returning a false "success" status. It's one of those two cases, I'm not sure which; just discussing here before raising an enhancement issue on the GitHub repo.

 

Join if you want to have some geek discussions about it, or to ask for or provide help.

!flask@lemm.ee
