I love the comparison of string length of the same UTF-8 string in four programming languages (only the last one is correct, by the way):

Python 3:

len("🤦🏼‍♂️")

5

JavaScript / Java / C#:

"🤦🏼‍♂️".length

7

Rust:

println!("{}", "🤦🏼‍♂️".len());

17

Swift:

print("🤦🏼‍♂️".count)

1

That depends on your definition of correct lmao. Rust's len() explicitly counts UTF-8 code units, i.e. the raw bytes contained in the string. There are many times where that value is more useful than the grapheme count.

And Rust also has "🤦".chars().count(), which returns 1.

I would rather argue that Rust should not have a simple len function for strings at all, but since str is just a byte slice, it works that way.

Also, the documentation for len clearly states:

This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.
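
For concreteness, here's a minimal sketch of what those two calls actually measure for the facepalm emoji (byte count vs. scalar-value count):

fn main() {
    let s = "🤦🏼‍♂️";
    // len() counts UTF-8 bytes: 4 + 4 + 3 + 3 + 3 = 17
    println!("{}", s.len()); // 17
    // chars() iterates Unicode scalar values (codepoints)
    println!("{}", s.chars().count()); // 5
}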

That chars().count() call returns the number of codepoints, not the number of graphemes, and the codepoint count is rarely useful. You need a facepalm emoji with skin tone modifiers to see the difference.

The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation
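
A short sketch of that approach, assuming the unicode-segmentation crate has been added as a dependency:

use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let s = "🤦🏼‍♂️";
    // graphemes(true) iterates extended grapheme clusters,
    // i.e. what a human would count as one "character"
    println!("{}", s.graphemes(true).count()); // 1
}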

lemmyvore

None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they’re there for legacy reasons.
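
As a rough illustration of what such an explicit API could look like in Rust, here is a hypothetical extension trait (the trait and method names are made up for this sketch; grapheme counting is delegated to the unicode-segmentation crate):

use unicode_segmentation::UnicodeSegmentation;

// Hypothetical trait with explicit names instead of a generic len()
trait ExplicitLength {
    fn byte_count(&self) -> usize;
    fn char_count(&self) -> usize;
    fn grapheme_count(&self) -> usize;
}

impl ExplicitLength for str {
    fn byte_count(&self) -> usize { self.len() }                       // raw UTF-8 bytes
    fn char_count(&self) -> usize { self.chars().count() }             // Unicode scalar values
    fn grapheme_count(&self) -> usize { self.graphemes(true).count() } // extended grapheme clusters
}

fn main() {
    let s = "🤦🏼‍♂️";
    println!("{} {} {}", s.byte_count(), s.char_count(), s.grapheme_count()); // 17 5 1
}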

Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it’s also kind of mad to put something like this into a stdlib.

Its behaviour would change with each new Unicode standard, and you'd have to upgrade the whole stdlib to keep up with the newest Unicode version.

Treeniks


The way UTF-8 works is fixed though, isn’t it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.

Plus, in Rust you can instead use .chars().count(), since a Rust char is a Unicode scalar value and strings are guaranteed to be valid UTF-8.

Edit: turns out one should read the article before commenting

No offense, but did you read the article?

You should at least read the section “Wouldn’t UTF-32 be easier for everything?” and the following two sections for the context here.

So, everything you’ve said is correct, but it’s irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.
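
To make the codepoints-vs-graphemes distinction concrete, one can print the scalar values that make up this single visible glyph (a sketch; the code point descriptions in the comments are approximate):

fn main() {
    // One visible glyph, but five scalar values in a ZWJ sequence:
    // U+1F926 face palm, U+1F3FC skin tone modifier, U+200D zero width joiner,
    // U+2642 male sign, U+FE0F variation selector-16
    for c in "🤦🏼‍♂️".chars() {
        println!("U+{:04X}", c as u32);
    }
}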

Treeniks


yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

ono


It might make more sense to expose a standard library API for Unicode data provided by (and updated with) the operating system. Something like the time zone database.
