LMFAO. The audacity of calling the token limit a "rolling context window" like it's a desirable feature and not just a design aspect of every LLM...
Yeah that part tripped me up.
"Rolling context window"? You mean one of the universal properties of LLMs in it's current state? The one that is so big for Google's latest AI endeavors that they are flexing with it?
It's hilarious to say that's a privacy feature. It's like calling amnesia a learning opportunity.
These claims make me think this is worse than the Rabbit R1 or whatever it's called. Although it's very difficult to be worse, considering that CEO turned out to be a full-on crypto scammer.
Check the edit for instructions on how to build your own. It's even called "Friend," so Friend.com is likely a modified version of that (running ChatGPT instead of Llama).
I would certainly feel better about it if I had full control over the encryption endpoints, at a minimum.
I will be waiting for the tech YouTubers and early adopters to render their judgement before I even consider yet another AI wearable, but this aims to be less of a personal assistant and more of a "Tamagotchi."
I think that’s what sets this one apart (and makes it less expensive) from the other devices like this. This thing only needs a mic, an LLM and a Bluetooth radio. It won’t search the whole internet for answers or tell you what you’re looking at, but it will talk shit on that bitch Tonya in accounting with you.
At least tamagotchis had games
Wait, is this the same thing we were ridiculing over on 196? https://lemmy.world/post/18120973
Yep, that's the one. Check my edit, if you want to build your own for ≈$50.
$99? I just ordered the parts for $50 (including shipping and handling, plus a 100-count pack of on/off switches of which I only need one).
Also confused why they say it is "always on" if it has an off switch. A TV can be always on until you turn it off. Once I build it, I'll see what can be swapped around - I'm hoping to get something like the superbooga extension for oobabooga (RAG vectorization of documents) working with the transcripts, roughly like the sketch below.
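Something like this is what I'm picturing for the transcript RAG (just a sketch, assuming sentence-transformers for the embeddings; superbooga does roughly this with a real vector store, and the transcript chunks here are made up):

```python
# Sketch: embed transcript chunks, then pull back the most relevant ones
# to stuff into the LLM prompt. Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Pretend these came out of the Whisper transcripts
chunks = [
    "Talked to Tonya about the Q3 report deadline.",
    "Reminder: pick up the battery and on/off switch order.",
    "Went over the Based Hardware assembly guide with the Lemmy thread.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k transcript chunks most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # cosine similarity, since the vectors are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("what did I order?"))
```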
Was a bit worried about Whisper STT, but I think it's the open-source, on-device version, not the one that runs on OpenAI's servers.
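For anyone wondering what the on-device route looks like, here's a minimal sketch using the open-source whisper package (assuming ffmpeg is installed and you have a recording off the mic; nothing leaves your machine):

```python
# Local transcription with the open-source Whisper model -- no API calls.
# Assumes: pip install openai-whisper, and ffmpeg on the PATH.
import whisper

model = whisper.load_model("base")          # small enough for laptop/phone-class hardware
result = model.transcribe("recording.wav")  # runs entirely on-device
print(result["text"])
```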
You're a champion. Post it up here when you get it rolling.
Can you post your BOM?
I just followed this: https://docs.basedhardware.com/assembly/Buying_Guide/
Added to the post! Great find.
Oh, crap! This is getting confusing. I think this is what happened:
The "Friend.com" AI friend was originally named "Tab". The Basedhardware.com wearable was originally named "Friend", but "AI Friend" was turning up to much stuff, so they added Based hardware to the name.
Then the creator of Tab renamed it "Friend" and bought Friend.com. Both are AI wearables, but the Friend.com one sounds more closed-source than the Based Hardware Friend. So they are technically two different projects.
I mean, taking something open source and turning it closed source isn't unheard of. It wouldn't surprise me if he borrowed ideas from the other project, at any rate.
But yeah, definitely different projects, though it remains to be seen how different they are at their core. I'm not spending $100 to find out; I'll let the whales do that.
I actually can't talk shit about this one. So far it seems to be what you'd want on the data side: all on-device, no subscription. It does come off as weird, and it's still probably a bad idea to take advice from it. But at 99 bucks it's got to be one of the cheapest AI devices from a startup so far. I won't get one, but I don't absolutely hate it.
All-on-device AIs that could run on an iPhone would be terrible. It's sending tokens somewhere, I guaran-fucking-tee it.
The FAQ says that it requires an Internet connection.
It also mentions e2ee, which isn't too reassuring when one of the ends is their servers.
Exactly. There are ways to make the tokens unreadable even with a server-hosted LLM, but I know for a fact that's not what's going to happen here.
And I fully expect all of our engagement data to be used in the 2028 election to target us.
This was my big concern as well. E2EE only matters if you control each end. That's why I'll let the YouTubers and security analysts dissect it first.
Check my edit for instructions on how to build your own. It's even called "Friend," so it's probably the same thing tweaked for a different LLM.
false advertising law: hello?
Anyone can test if it's sending with a firewall. If it's not connected to the internet, it ain't sending. And don't forget that iPhone chips have been the ones carrying Moore's law forward lately.
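If anyone actually wants to run that test, a rough sketch of watching the companion phone's traffic with scapy (assuming the phone's traffic passes through a box you control, like a hotspot or router; the IP address is made up for the example):

```python
# Sketch: log any outbound packets from the phone running the companion app.
# Assumes: pip install scapy, run as root, and that this machine can see the
# phone's traffic (e.g. it's the hotspot). The phone IP below is hypothetical.
from scapy.all import sniff, IP

PHONE_IP = "192.168.4.23"

def log_outbound(pkt):
    if IP in pkt and pkt[IP].src == PHONE_IP:
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  len={len(pkt)}")

# A steady stream of packets while the app claims everything is on-device
# would contradict the all-local claim.
sniff(filter=f"host {PHONE_IP}", prn=log_outbound, store=False)
```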
Do you have even a slight idea how processing-intensive even five-year-old LLMs are?
And iPhones aren't magically immune to thermodynamics.