this post was submitted on 02 Oct 2023
[–] abhibeckert@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

I love the comparison of string length of the same UTF-8 string in four programming languages (only the last one is correct, by the way):

Python 3:

len("πŸ€¦πŸΌβ€β™‚οΈ")

5

JavaScript / Java / C#:

"πŸ€¦πŸΌβ€β™‚οΈ".length

7

Rust:

println!("{}", "πŸ€¦πŸΌβ€β™‚οΈ".len());

17

Swift:

print("πŸ€¦πŸΌβ€β™‚οΈ".count)

1

[–] Walnut356@programming.dev 0 points 1 year ago* (last edited 1 year ago) (2 children)

That depends on your definition of correct lmao. Rust's len() explicitly counts the raw UTF-8 bytes in the string, because a str is just a byte slice. There are many times where that value is more useful than the grapheme count.

[–] Black616Angel@feddit.de 0 points 1 year ago (1 children)

And Rust also has "🤦".chars().count(), which returns 1.

I would rather argue that Rust should not have a simple len function for strings, but since str is just a byte slice, it works that way.

Also also the len function clearly states:

This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.
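For reference, a minimal std-only sketch of that difference, using the modifier-sequence emoji from the top comment:

```rust
fn main() {
    // Facepalm + skin tone + zero-width joiner + male sign + variation selector.
    let s = "🤦🏼‍♂️";
    println!("{}", s.len());           // 17 raw UTF-8 bytes
    println!("{}", s.chars().count()); // 5 Unicode scalar values
}
```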

[–] Knusper@feddit.de 0 points 1 year ago (1 children)

That Rust function returns the number of codepoints, not the number of graphemes, which is rarely useful. You need to use a facepalm emoji with skin color modifiers to see the difference.

The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation

[–] Djehngo@lemmy.world 0 points 1 year ago (1 children)

Makes sense. The code-point split is stable, so it's fine to put in the standard library; the grapheme split changes with every Unicode release, so that volatility is probably better off in a crate.

[–] Knusper@feddit.de 0 points 1 year ago (1 children)

Yeah, although having now seen two commenters with relatively high confidence claiming that counting codepoints ought to be enough...

...and me almost having been the third such commenter, had I not decided to read the article first...

...I'm starting to feel more and more like the stdlib should force you through all kinds of hoops to get anything resembling a size of a string, so that you gladly search for a library.

Like, I've worked with decoding strings quite a bit in the past, so I felt like I had an above-average understanding of Unicode as a result. And I was still only vaguely aware of graphemes.

[–] Turun@feddit.de 0 points 1 year ago (1 children)

For what it's worth, the documentation is very, very clear on what these methods return. It explicitly redirects you to crates.io for splitting into grapheme clusters. It would be much better to have it in std, but I understand the argument that std should only contain stable stuff.

As a systems programming language the .len() method should return the byte count IMO.

[–] Knusper@feddit.de 0 points 1 year ago

The problem is when you think you know stuff, but you don't. I knew that counting bytes doesn't work, but thought the number of codepoints was what I want. And then knowing that Rust uses UTF-8 internally, it's logical that .chars().count() gives the number of codepoints. No need to read documentation if you're so smart. 🙃

It does give you the correct length in quite a lot of cases, too. Even the byte length looks correct for ASCII characters.

So, yeah, this would require a lot more consideration whether it's worth it, but I'm mostly thinking there'd be no .len() on the String type itself, and instead to get the byte count, you'd have to do .as_bytes().len().
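The explicit spelling already exists today and gives the same number, so the change would only remove the shortcut:

```rust
fn main() {
    let s = String::from("🤦🏼‍♂️");
    // .len() on str is defined as the byte length, so these agree.
    assert_eq!(s.len(), s.as_bytes().len());
    println!("{}", s.as_bytes().len()); // 17
}
```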

[–] Knusper@feddit.de 0 points 1 year ago (1 children)

Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it's also kind of mad to put something like this into a stdlib.

Its behaviour will break with each new Unicode standard. And you'd have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.

[–] Treeniks@lemmy.ml 0 points 1 year ago* (last edited 1 year ago) (1 children)

~~The way UTF-8 works is fixed though, isn't it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.~~

~~Plus in Rust, you can instead use .chars().count() as Rust's char type is UTF-8 Unicode encoded, thus strings are as well.~~

turns out one should read the article before commenting

[–] Knusper@feddit.de 0 points 1 year ago (1 children)

No offense, but did you read the article?

You should at least read the section "Wouldn’t UTF-32 be easier for everything?" and the following two sections for the context here.

So, everything you've said is correct, but it's irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.

[–] Treeniks@lemmy.ml 0 points 1 year ago (1 children)

yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

[–] Knusper@feddit.de 0 points 1 year ago

No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

[–] Knusper@feddit.de 0 points 1 year ago

They believed 65,536 characters would be enough for all human languages.

Gotta love these kinds of misjudgements. Obviously, they were pushing against pretty hard size restrictions back then, but at the same time, they had the explicit goal of fitting in all languages, and if you just look at the Asian languages, it should be pretty clear that 65,536 is not a lot at all...

[–] atheken@programming.dev 0 points 1 year ago (1 children)

Unicode is thoroughly underrated.

UTF-8, doubly so. One of the clever things its designers did was to build off ASCII as a subset, taking advantage of the spare high bit to stay backwards compatible. That's a lesson we should all learn when evolving systems with users: your chances of success are much better if you extend than if you rewrite.
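The trick, sketched: every ASCII byte has its high bit clear, and every byte of a multi-byte UTF-8 sequence has it set, so the two can never be confused:

```rust
fn main() {
    // ASCII (0x00..=0x7F) is encoded unchanged in UTF-8.
    assert!("hello".bytes().all(|b| b < 0x80));
    // Leading and continuation bytes of multi-byte sequences are all >= 0x80.
    assert!("🤦".bytes().all(|b| b >= 0x80));
    println!("ok");
}
```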

On the other hand, having dealt with UTF-7 (a very β€œspecial” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.

[–] Jummit@lemmy.one 0 points 1 year ago (1 children)

I've recently come to appreciate the "refactor the code while you write it" and "keep possible future changes in mind" ideas more and more. I think it really increases the probability that the system can live on instead of becoming obsolete.

[–] Pantoffel@feddit.de 0 points 1 year ago

Yes, but once code becomes spaghetti enough that a "refactor while you write it" is too time-intensive and error-prone, it's already too late.

[–] lucas@startrek.website -1 points 1 year ago

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

Who wants to tell the author that not everything was invented in the US? (And computers certainly weren't)