Signal’s president reveals the cost of running the privacy-preserving platform—not just to drum up donations, but to call out the for-profit surveillance business models it competes against.

The encrypted messaging and calling app Signal has become a one-of-a-kind phenomenon in the tech world: It has grown from the preferred encrypted messenger for the paranoid privacy elite into a legitimately mainstream service with hundreds of millions of installs worldwide. And it has done this entirely as a nonprofit effort, with no venture capital or monetization model, all while holding its own against the best-funded Silicon Valley competitors in the world, like WhatsApp, Facebook Messenger, Gmail, and iMessage.

Today, Signal is revealing something about what it takes to pull that off—and it’s not cheap. For the first time, the Signal Foundation that runs the app has published a full breakdown of Signal’s operating costs: around $40 million this year, projected to hit $50 million by 2025.

Signal’s president, Meredith Whittaker, says her decision to publish the detailed cost numbers in a blog post for the first time—going well beyond the IRS disclosures legally required of nonprofits—was more than just a frank appeal for year-end donations. By revealing the price of operating a modern communications service, she says, she wanted to call attention to how competitors pay these same expenses: either by profiting directly from monetizing users’ data or, she argues, by locking users into networks that very often operate with that same corporate surveillance business model.

“By being honest about these costs ourselves, we believe that helps provide a view of the engine of the tech industry, the surveillance business model, that is not always apparent to people,” Whittaker tells WIRED. Running a service like Signal—or WhatsApp or Gmail or Telegram—is, she says, “surprisingly expensive. You may not know that, and there’s a good reason you don’t know that, and it’s because it’s not something that companies who pay those expenses via surveillance want you to know.”

Signal pays $14 million a year in infrastructure costs, for instance, including the price of servers, bandwidth, and storage. It uses about 20 petabytes per year of bandwidth, or 20 million gigabytes, to enable voice and video calling alone, which comes to $1.7 million a year. The biggest chunk of those infrastructure costs, fully $6 million annually, goes to telecom firms to pay for the SMS text messages Signal uses to send registration codes to verify new Signal accounts’ phone numbers. That cost has gone up, Signal says, as telecom firms charge more for those text messages in an effort to offset the shrinking use of SMS in favor of cheaper services like Signal and WhatsApp worldwide.
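As a quick sanity check on those figures, the back-of-the-envelope arithmetic below works out the rates they imply; the assumption that the $1.7 million line item covers exactly the 20 petabytes of calling bandwidth is an inference from the article, not a figure Signal publishes itself.

```python
# Back-of-the-envelope check of the numbers quoted above. Assumption: the
# $1.7M line item covers exactly the 20 PB of calling bandwidth.
bandwidth_pb_per_year = 20
bandwidth_gb_per_year = bandwidth_pb_per_year * 1_000_000   # "20 million gigabytes"
calling_bandwidth_cost_usd = 1_700_000

print(f"Implied bandwidth rate: ${calling_bandwidth_cost_usd / bandwidth_gb_per_year:.3f} per GB")
# Implied bandwidth rate: $0.085 per GB

# Registration SMS as a share of the $14M infrastructure budget:
print(f"SMS share of infrastructure: {6_000_000 / 14_000_000:.0%}")
# SMS share of infrastructure: 43%
```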

Another $19 million a year or so out of Signal’s budget pays for its staff. Signal now employs about 50 people, a far larger team than a few years ago. In 2016, Signal had just three full-time employees working in a single room in a coworking space in San Francisco. “People didn’t take vacations,” Whittaker says. “People didn’t get on planes because they didn’t want to be offline if there was an outage or something.” While that skeleton-crew era is over—Whittaker says it wasn’t sustainable for those few overworked staffers—she argues that a team of 50 people is still a tiny number compared to services with similar-sized user bases, which often have thousands of employees.

read more: https://www.wired.com/story/signal-operating-costs/

archive link: https://archive.ph/O5rzD

[–] WallEx@feddit.de 0 points 11 months ago (1 children)

Because there are no other possible verifications apart from phone numbers? Do you open a bank account with your phone number, because it's the only way?

[–] tja@sh.itjust.works 0 points 11 months ago (3 children)

What do you think would be an appropriate alternative for easily verifying chat accounts that's cheaper than validating phone numbers?

[–] iopq@lemmy.world 0 points 11 months ago (2 children)

Use a 3D face scan, but only send the hash over the net. It can double for account recovery (when the user has no email or something).

[–] scorpionix@feddit.de 0 points 11 months ago

Where would one get a 3d face scan from? For my part, I don't have a scanning rig set up anywhere.

[–] PlexSheep@feddit.de 0 points 11 months ago (2 children)

That's a joke right?

If not: It does not matter what hash I send, because it's cryptographically impossible to tell what the hashed thing is. That is the whole point of a hash.

Also: sending a hash over the network instead of a password (or whatever the source material is) would be bad practice from a security perspective, if not a directly exploitable vulnerability. It would mean that anyone who knows the hash can pretend to be you, because the hash would be used to authenticate, not whatever the source material is. The hash would become the real password, and the source material nothing more than a mnemonic for the user. Adding to that: the server storing the hash would effectively be storing a plaintext password.

See: https://security.stackexchange.com/questions/8596/https-security-should-password-be-hashed-server-side-or-client-side
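To illustrate the point above, here is a minimal sketch (made-up names, not any real service's scheme) of what happens when the server simply stores and compares a client-computed hash: the hash itself becomes the credential, and anyone who captures it can replay it.

```python
import hashlib

# Hypothetical illustration: the client hashes the password and the server
# stores and compares that hash directly.
STORED = {}

def register(user: str, password: str) -> None:
    client_side_hash = hashlib.sha256(password.encode()).hexdigest()
    STORED[user] = client_side_hash            # server keeps the hash as-is

def login(user: str, client_side_hash: str) -> bool:
    # The server never sees the password, only the hash -- so the hash itself
    # is the credential, and anyone who sniffs it can replay it to log in.
    return STORED.get(user) == client_side_hash

register("alice", "correct horse battery staple")
captured_hash = hashlib.sha256(b"correct horse battery staple").hexdigest()
print(login("alice", captured_hash))           # True: the hash alone is enough
```

The usual mitigation is for the server to salt and hash whatever it receives again on its own side, so that a captured database entry is not directly usable as a login credential.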

[–] uis@lemmy.world 0 points 11 months ago (1 children)

It would mean that anyone who knows the hash can pretend to be you, because the hash would be used to authenticate, not whatever the source material is.

Guess what happens to passwords themselves? Same thing, but the user can't just add a nonce. Replay attacks are super easy to mitigate, and hashing makes it easier.

Not saying that biometric authentication isn't shit for security in itself.

[–] PlexSheep@feddit.de 0 points 11 months ago (1 children)

Honestly, I'm not sure what you are talking about. Could you elaborate more?

Are you implying that sending some hash is better than sending the secret and letting the server deal with it?

[–] uis@lemmy.world 0 points 11 months ago (1 children)

It took me a long time to reply to you, sorry.

When used for login, it prevents a MITM attacker (assuming you are not using an app sent to you by the attacker) from stealing your password (because hash functions are extremely hard to reverse), while when used for both registration and login, your password doesn't even leave your computer. There are even password managers that don't store any passwords, but just generate them by hashing your secret with the server name.
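Deterministic ("stateless") password managers like the comment describes do exist; below is a minimal sketch of the derivation idea, with made-up parameter choices (PBKDF2-SHA256, 200,000 iterations) rather than any particular tool's actual scheme.

```python
import base64
import hashlib

def derive_site_password(master_secret: str, site: str, length: int = 16) -> str:
    """Derive a per-site password from one master secret and the site name.

    Sketch only: a real tool would add a rotation counter, a memory-hard KDF,
    and handling for each site's password policy.
    """
    raw = hashlib.pbkdf2_hmac(
        "sha256",
        master_secret.encode(),
        site.encode(),      # the site name plays the role of the salt
        200_000,            # iteration count, an arbitrary choice here
    )
    return base64.urlsafe_b64encode(raw).decode()[:length]

# Nothing is stored anywhere: the same inputs always give the same password.
print(derive_site_password("my one master secret", "lemmy.world"))
print(derive_site_password("my one master secret", "sh.itjust.works"))
```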

[–] PlexSheep@feddit.de 0 points 11 months ago (1 children)

How does this prevent MITM attacks? The secret you send to the server, be it called hash or password, is what's used to authenticate the user. For the purpose of client/server communication, this "password" that exists only on your host is not relevant, as it's only used to generate the real secret.

A hypothetical MITM attacker would still gain access to that secret, without needing to care how it was generated, be it by hashing something on your host or by coming up with semi random letters yourself.

The secret sent to the server becomes the de facto password.

Now about those password managers: they are a thing, but I don't have experience using them. Though a disadvantage is that if a site gets breached you have to do something weird with your password manager, so that a different password is produced from your secret key and the domain name. This can be done with a counter that needs to be manually adjusted, but that's weird from a usability point of view.

[–] uis@lemmy.world 0 points 11 months ago (1 children)

How does this prevent MITM attacks? The secret you send to the server, be it called hash or password, is what's used to authenticate the user.

Maybe I phrased it incorrectly. It prevents an attacker from getting the password and using it again in the future.

For the purpose of client/server communication, this "password" that exists only on your host is not relevant, as it's only used to generate the real secret.

A salted hash can indeed be replayed by an attacker if it is not implemented with possible MITM attacks in mind. Resisting them is easy and can be done with channel-binding techniques, like using the channel's public key as part of the salt. In that case, if an attacker successfully mounts a MITM attack, the server will just reject the hash, because it is not equal to the expected one.

The secret sent to the server becomes the de facto password.

Passwords are secrets. Secrets aren't passwords.

but that's weird from a usability point of view.

HOTP exists. HOTP is used.
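HOTP (RFC 4226) is indeed a real, widely deployed standard; the sketch below shows the counter-based code it defines, which avoids the manual-counter usability worry raised earlier because both sides simply advance the counter on each use.

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password in the style of RFC 4226."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Client and server each keep a counter; once a code is accepted the server
# moves its counter forward, so replaying an old code fails.
shared_key = b"0123456789abcdef0123"
for counter in range(3):
    print(counter, hotp(shared_key, counter))
```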

[–] PlexSheep@feddit.de 0 points 11 months ago (1 children)

Maybe I phrased it incorrectly. It prevents an attacker from getting the password and using it again in the future.

In what circumstances besides reusing passwords does this matter?

To make this discussion extra long: if you create a hash based on a local password and then share that as the secret with the server, which then treats it with regular password security, this is beneficial for security as far as I can see, as it makes sure that the "password"/secret is strong and pseudorandom.

[–] uis@lemmy.world 0 points 11 months ago* (last edited 11 months ago) (1 children)

In what circumstances besides reusing passwords does this matter?

Happens more often than you imagine.

To make this discussion extra long: if you create a hash based on a local password and then share that as the secret with the server, which then treats it with regular password security, this is beneficial for security as far as I can see, as it makes sure that the "password"/secret is strong and pseudorandom.

Didn't I mention two parts where hashing can be used? Let's take Lemmy as an example. There is a /login endpoint that takes a username and password and returns a token, and there is a /register endpoint that takes lots of arguments, including a username and password. The hashing you are talking about now is replacing the plain-text password with a generated secret. It prevents the server from knowing the password that is used for generating other secrets on other platforms. Now imagine there are also hypothetical /gettmptok and /verify endpoints. The first takes a username and returns a temporary token; the second takes a username, the temporary token, and a hash of the password salted with the (public) key of the channel and the temporary token, and returns... let's say a boolean value, meaning this hash becomes a valid token. If an attacker tries to MITM here, the server will reject the token because it will not match the expected hash, since the salt is wrong. Even without channel binding, the attacker cannot get a secret to log in again in case the user logs out of the session, forcefully closes it from another one, or the token is invalidated for any other reason.

Got it EXTRA long.
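Since the /gettmptok and /verify endpoints above are explicitly hypothetical, the following is only a sketch of the described flow, with an HMAC standing in for the "hash of the password salted with the channel key and temporary token"; it is not how Lemmy or Signal actually authenticate.

```python
import hashlib
import hmac
import os
import time

# Sketch of the hypothetical /gettmptok + /verify flow described above.
# Endpoint names and field choices follow the comment's made-up design.

USERS = {"alice": hashlib.sha256(b"hunter2").digest()}   # stored verifier
PENDING = {}                                             # username -> (token, expiry)

def gettmptok(username: str) -> bytes:
    token = os.urandom(16)
    PENDING[username] = (token, time.time() + 60)        # valid for 60 seconds
    return token

def client_proof(verifier: bytes, channel_pubkey: bytes, token: bytes) -> bytes:
    # The channel's public key is mixed in ("channel binding"), so a proof
    # computed over an attacker's channel won't match the server's expectation.
    return hmac.new(verifier, channel_pubkey + token, hashlib.sha256).digest()

def verify(username: str, channel_pubkey: bytes, proof: bytes) -> bool:
    token, expiry = PENDING.pop(username, (None, 0.0))
    if token is None or time.time() > expiry:
        return False
    expected = client_proof(USERS[username], channel_pubkey, token)
    return hmac.compare_digest(expected, proof)

# Honest client on the genuine channel:
genuine_channel = b"server-public-key"
tok = gettmptok("alice")
proof = client_proof(hashlib.sha256(b"hunter2").digest(), genuine_channel, tok)
print(verify("alice", genuine_channel, proof))   # True

# A MITM relaying a proof computed over its own channel key gets rejected:
tok = gettmptok("alice")
proof = client_proof(hashlib.sha256(b"hunter2").digest(), b"attacker-key", tok)
print(verify("alice", genuine_channel, proof))   # False
```

The part doing the work is that the channel key enters the proof independently on both ends, so a relaying attacker cannot produce a value that matches the key of the channel the server actually sees.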

[–] PlexSheep@feddit.de 0 points 11 months ago

I fail to see how this prevents any MITM attack where the attacker pretends to be the server, but besides that, it just seems overly complicated.

[–] iopq@lemmy.world 0 points 11 months ago (1 children)

The point is to protect your face data; the hash IS the password, but you don't want people to be able to tell what you look like by sending raw images of your face over the net.

[–] PlexSheep@feddit.de 0 points 11 months ago (1 children)

That would do nothing to validate that the user is real; they can just insert any hash and claim it's their face's hash. At that point we could just use regular passwords, but as I said, that won't solve the spam accounts issue.

[–] iopq@lemmy.world 0 points 11 months ago (1 children)

You can make sure that the user used the signed binary to generate the token. Each token has a nonce and a validity period. This binary requires the use of the camera API, but also requires liveness analysis by making you move while authenticating. You can change the way the user is forced to move to make sure it's not the same video feed connected to the camera
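A rough sketch of the nonce-plus-validity-period token described above, assuming the signed app and the server share an HMAC key; the camera and liveness checks themselves are out of scope here and only gestured at in a comment.

```python
import hashlib
import hmac
import json
import os
import time

APP_KEY = b"key-held-by-the-signed-binary"   # assumption for this sketch

def issue_token(validity_seconds: int = 300) -> dict:
    # In the commenter's scheme the signed app would only mint this after the
    # camera/liveness checks pass; that part is omitted here.
    payload = {"nonce": os.urandom(12).hex(), "expires": time.time() + validity_seconds}
    sig = hmac.new(APP_KEY, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

SEEN_NONCES = set()

def accept_token(token: dict) -> bool:
    payload, sig = token["payload"], token["sig"]
    expected = hmac.new(APP_KEY, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False                          # not produced with the app key
    if time.time() > payload["expires"]:
        return False                          # validity period elapsed
    if payload["nonce"] in SEEN_NONCES:
        return False                          # replayed token
    SEEN_NONCES.add(payload["nonce"])
    return True

tok = issue_token()
print(accept_token(tok))   # True the first time
print(accept_token(tok))   # False: the nonce has already been used
```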

[–] PlexSheep@feddit.de 0 points 11 months ago

Could work, but it doesn't stop actual people from creating spam accounts.

If one wants to put real effort into it, the camera/gyro sensors could be malicious or a robotic arm could be built. Maybe it would work with some fake background.

[–] WallEx@feddit.de -1 points 11 months ago (2 children)

Video call, email, other verified factors.

So do you think this is the only option available?

[–] Dark_Arc@social.packetloss.gg 1 points 11 months ago* (last edited 11 months ago) (1 children)

You think a verification via a video call is cheaper than SMS...?

That's not to mention the potential concerns that would arise around the possibility of Signal storing (some portion of) the video...

[–] WallEx@feddit.de -1 points 11 months ago (2 children)

Nope, just saying phone numbers are far from the only option. And if telcos are price gouging, you should look at the alternatives.

[–] Gutless2615@ttrpg.network 1 points 11 months ago (1 children)

No, you've complained and insinuated there are plenty of other solutions that the world-class team at Signal, literally the preeminent experts in their field, chose not to use, and then offered up some truly next-level terrible options.

[–] WallEx@feddit.de 0 points 11 months ago

Complained? I've merely stated a fact. And you think I'm offended? I'm trying to have a discussion you are not interested in, it seems.

How are the other options terrible? Please elaborate. That way you might actually contribute and not just call names.

[–] Dark_Arc@social.packetloss.gg 0 points 11 months ago (1 children)

Nope, just saying phone numbers are far from the only option.

What do you think would be an appropriate alternative for easily verifying chat accounts that's cheaper than validating phone numbers?

It's the cheaper portion that's the issue. There are "other options", but they're not cheaper and/or they have their own issues.

I didn't touch the email case because email addresses can be so rapidly created (even out of thin air via a catch-all style inbox) that there's nothing to it.

[–] WallEx@feddit.de 0 points 11 months ago

But if telcos are inflating the prices, that might change. Otherwise, I think you're right.

[–] PlexSheep@feddit.de 0 points 11 months ago (2 children)

A video call is expensive, and frankly, if I'm gonna sign up for a private service, I'm not going to make a damn video call.

Email is not enough to fight spam. Email addresses are basically an infinite resource.

Other verified factors are nothing concrete. Sure, we could all use hardware security keys, but what are the chances that my mom has one?

[–] WallEx@feddit.de 0 points 11 months ago (2 children)

So you do think that phone numbers are the only way to verify the person? This is just stupid. There are enough alternatives, like IDs or stuff like that. If you don't want that, that's a totally different story.

[–] LemmyIsFantastic@lemmy.world 0 points 11 months ago* (last edited 11 months ago) (3 children)

Jesus Christ you Linux people never learn... It's 👏 about 👏 ease of 👏 use.

If they wanted it to be a pain in the ass that nobody would use, they could put a UI on top of PGP and call it a day.

[–] PlexSheep@feddit.de 0 points 11 months ago

There was no need to generalize Linux people. This discussion has nothing to do with Linux.

[–] WallEx@feddit.de 0 points 11 months ago (1 children)

How does that have anything to do with Linux? It's about phone verification as the supposed only option.

Does Microsoft need your phone to validate your existence?

How does anyone think that there are no alternatives?

[–] LemmyIsFantastic@lemmy.world 0 points 11 months ago (1 children)

Yes. MS heavily uses SMS to validate my account and is pushing toward passwordless, sent-to-mobile auth.

[–] WallEx@feddit.de 0 points 11 months ago (1 children)

Okay. And how are phone numbers validated? Not by using phone numbers. It's not the only option. They also use personalized domains, certificates, IDs, and the like.

[–] LemmyIsFantastic@lemmy.world 0 points 11 months ago (1 children)

Right, folks are definitely going to sign up when it just needs you to copy your identity information, send it in, and wait 4 weeks 🤦‍♂️

Yes, there is a whole bunch of pain-in-the-ass shit you can try to force people to use. They won't, and the service will be worthless for all but 5 neckbeards laughing about how private they are. 🤦‍♂️

[–] WallEx@feddit.de 0 points 11 months ago

Probably. Just saying it's not "the only option". And I'm also pretty sure they could figure out another way to ID people, if they had enough funds to do so. But maybe this still wouldn't be adopted, who knows.

[–] TheBat@lemmy.world 0 points 11 months ago (1 children)

This comment chain is sending me lol

How the hell does this guy not understand how effective phone verification is when it comes to combating spam/bots?

[–] WallEx@feddit.de 0 points 11 months ago (1 children)

I'm not arguing that; I'm arguing against the point that this is the only option. Because it isn't. If you find that funny, be my guest.

[–] PlexSheep@feddit.de 0 points 11 months ago* (last edited 11 months ago) (1 children)

What alternative to phone numbers would you recommend? I'd probably prefer it over giving my phone number away.

[–] WallEx@feddit.de 0 points 11 months ago (1 children)

Something like a verified work email or a password-protected cryptographic certificate confirming your identity, I don't really know ^^ but phone numbers are old and getting more and more expensive, as the article lays out.

[–] PlexSheep@feddit.de 0 points 11 months ago

The infrastructure for none of these exists (in my country at least). Phone numbers suck, but as Signal is an application mostly used on phones, I think they are the lowest common denominator for the user base.

[–] PlexSheep@feddit.de 0 points 11 months ago (1 children)

It's a tough problem, no? Combating "spam" accounts while balancing privacy.

Personally, I don't want to give them any more information than is really necessary.

[–] WallEx@feddit.de 0 points 11 months ago

It's not easy. And yeah, me too.

[–] uis@lemmy.world 0 points 11 months ago (1 children)

Other verified factors are nothing concrete. Sure, we could all use hardware security keys, but what are the chances that my mom has one?

PKI doesn't require hardware keys

[–] PlexSheep@feddit.de 0 points 11 months ago (1 children)

True, but it's not exactly user-friendly either, right? If it is, tell me. I'll be happy.

[–] uis@lemmy.world 0 points 11 months ago* (last edited 11 months ago) (1 children)

If you want user-friendly, there's WebAuthn; Firefox does it for you. If you want PGP/GPG, then just install the PGP/GPG client of your choice.

If you want to encrypt emails, Thunderbird should have built-in encryption support.

[–] PlexSheep@feddit.de 0 points 11 months ago

I'm using all of these, but with my hardware keys. I didn't know you could do it without them. I knew that it was part of the WebAuthn concept, but I have no idea how it works.

[–] devfuuu@lemmy.world -3 points 11 months ago

I'd be okay with a credit card verification or something like that, even if it's still uncomfortable for me, but I hear it reduces a lot of spam.

But then that would confuse people and make them run away, when the app seems to be free and is now asking for credit card validation... it's too strange.

Anyway, I never got a single spam message on Signal in all the years I've used it, so I'm not sure how others view the problem or even if it is a problem.