Phoenixz

Just give me plain UTF-32 with ~4 billion code points, that really should be enough for any symbol we can come up with. Give everything its own code point, no bullshit with combined glyphs that make text processing a nightmare. I need to be able to do a strlen either on byte length or on the number of characters without the CPU spending a minute to count each individual character.
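
For context, a minimal Rust sketch (not from the thread) of the trade-off being complained about: with a variable-width encoding like UTF-8, the byte length is known up front, but counting code points means scanning the whole string.

fn main() {
    let s = "héllo";
    println!("{}", s.len());           // 6: stored byte length, O(1)
    println!("{}", s.chars().count()); // 5: code points, a full O(n) scan
}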

I think Unicode started as a great idea and then kind of blundered into aimless “everybody kinda does what everyone wants” territory. Unicode is for humans, sure, but we shouldn’t forget that computers actually have to do the work.

I do understand why old Unicode versions re-used “i” and “I” for the Turkish lowercase dotted i and the Turkish uppercase dotless I, but I don’t understand why more recent versions have not introduced two new characters that look exactly the same but don’t require locale-dependent knowledge to do something as basic as “to lowercase”.
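
To make the locale problem concrete, here is a minimal Rust sketch (the standard library, like most, only implements the locale-independent Unicode default case mappings):

fn main() {
    // Unicode default mapping: 'I' lowercases to 'i'.
    assert_eq!('I'.to_lowercase().to_string(), "i");
    // Turkish needs 'I' to lowercase to dotless 'ı' (U+0131), but there is
    // no locale parameter to ask for that; you need an ICU-style library.
    assert_eq!('ı'.to_uppercase().to_string(), "I");
}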

Probably for the same reason Spanish used to consider ch, ll, and rr as a single character.

Kevin Lyda

The mouse pointer background is kinda a dick move. Good article, but the background is annoying for tired old eyes - which I assume are a target demographic for that article.

Wow this is awful on mobile lol

Kevin Lyda

js console: document.querySelector('.pointers').hidden=true

Hazelnoot [she/her]

Thank you for this! You can also get rid of it with custom ad-blocker rules. I added these to uBlock Origin, and it totally kills the pointer thing.

wss://tonsky.me
http://tonsky.me/pointers/
https://tonsky.me/pointers/

heftig

You’re actually seeing the mouse pointers of other people who have the page open. It connects to a websocket endpoint that includes the page URL and your platform (OS), and sends your current mouse position every second.

Kevin Lyda

Just because you can do something…

I’m personally waiting for UTF-64 and for Unicode to go back to a fixed-width encoding, forgetting about merging code points into complex characters. Just keep a zeptillion code points for absolutely everything.

Because strings are such a huge problem nowadays, every single software developer needs to know the internals of them. I can’t even stress it enough, strings are such a burden nowadays that if you don’t know how to encode and decode one, you’re beyond fucked. It’ll make programming so difficult - no, even worse, nigh impossible! Only those who know about Unicode will be able to write any meaningful code.

I’m still sour about text having color. Yeah I know little icons peppered forums. That’s why people liked reddit! It got rid of that shit! Now it’s part of the universal standard? Not just the ability to draw a turd on someone’s monitor, but to have it be colored-in brown? The hell with that. You wanna have animated GIFs next? Let someone put their username in ? Or like right-alignment, make rainbow signatures a free gimmick that text engines have to live with.

Meanwhile the alphabets of upside-down and small-caps letters are still incomplete.

I agree that having some glyphs in color can be bad: for example, when you are typesetting a formula in TeX that contains emoji, the color just looks unprofessional. As a solution, let me introduce you to the Noto Emoji font: https://fonts.google.com/noto/specimen/Noto+Emoji

As a developer, I feel absolute pain for the people who had to convert these. There are quite a few edge cases and sensitive topics to dodge here, and doing something wrong might piss people off. They must’ve had some lengthy meetings about a few emoji.

@Blackmist@feddit.uk

˙ƃuıʎouuɐ ʎʃʃɐǝɹ s,ʇı 'ʇɥƃıɹ ʍouʞ I

JackbyDev

𝖄𝖔𝖚 𝖘𝖔𝖗𝖙 𝖔𝖋 𝖊𝖓𝖉 𝖚𝖕 𝖍𝖆𝖛𝖎𝖓𝖌 𝖋𝖔𝖓𝖙𝖘 𝖊𝖒𝖇𝖊𝖉𝖉𝖊𝖉 𝖎𝖓 𝖙𝖍𝖊 𝖊𝖓𝖈𝖔𝖉𝖎𝖓𝖌. 𝕴 𝖚𝖓𝖉𝖊𝖗𝖘𝖙𝖆𝖓𝖉 𝖜𝖍𝖞 𝖎𝖙 𝖍𝖆𝖕𝖕𝖊𝖓𝖘, 𝕴’𝖒 𝖓𝖔𝖙 𝖘𝖆𝖞𝖎𝖓𝖌 𝖎𝖙 𝖘𝖍𝖔𝖚𝖑𝖉𝖓’𝖙, 𝖇𝖚𝖙 𝖎𝖙’𝖘 𝖘𝖙𝖎𝖑𝖑 𝖆 𝖜𝖊𝖎𝖗𝖉 𝖘𝖎𝖉𝖊 𝖊𝖋𝖋𝖊𝖈𝖙.

As normal letters in case your screen cannot render it.

You sort of end up having fonts embedded in the encoding. I understand why it happens, I’m not saying it shouldn’t, but it’s still a weird side effect.

And it risks sites like this entering an arms race for attention-grabbing bullshit, where every post tries not to look like plain text. This didn’t really happen to reddit because the old guard (hello) were curmudgeons. Happened to Craigslist and eBay, though, where the attention-whore behavior is directly monetized.

LaggyKar

If you go to the page without the trailing slash, the images don’t load

I love the comparison of string length of the same UTF-8 string in four programming languages (only the last one is correct, by the way):

Python 3:

len("🤦🏼‍♂️")
# => 5

JavaScript / Java / C#:

"🤦🏼‍♂️".length
// => 7

Rust:

println!("{}", "🤦🏼‍♂️".len());
// => 17

Swift:

print("🤦🏼‍♂️".count)
// => 1

That depends on your definition of correct lmao. Rust’s len() explicitly counts bytes, because that’s the length of the raw UTF-8 data contained in the string. There are many times where that value is more useful than the grapheme count.

Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it’s also kind of mad to put something like this into a stdlib.

Its behaviour will break with each new Unicode standard. And you’d have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.

ono

It might make more sense to expose a standard library API for unicode data provided by (and updated with) the operating system. Something like the time zone database.
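
Purely as a hypothetical sketch of what that could look like (none of these types exist in any real standard library; this is just the time zone database analogy applied to Unicode tables):

struct SystemUnicode {
    version: String, // whatever Unicode version the OS currently ships
}

impl SystemUnicode {
    fn load() -> Self {
        // A real implementation would read segmentation and case tables
        // from a well-known OS path, as is done for /usr/share/zoneinfo.
        SystemUnicode { version: String::from("15.1.0") }
    }

    fn grapheme_count(&self, s: &str) -> usize {
        // Stubbed out: a real version would run UAX #29 segmentation
        // against the OS-provided tables.
        s.chars().count()
    }
}

fn main() {
    let uni = SystemUnicode::load();
    println!("Unicode {}: {} graphemes", uni.version, uni.grapheme_count("abc"));
}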

Treeniks

The way UTF-8 works is fixed though, isn’t it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the code point count without needing the latest Unicode standard.

Plus in Rust, you can instead use .chars().count(), as Rust’s char type is a Unicode scalar value and strings are guaranteed to be valid UTF-8.

turns out one should read the article before commenting

No offense, but did you read the article?

You should at least read the section “Wouldn’t UTF-32 be easier for everything?” and the following two sections for the context here.

So, everything you’ve said is correct, but it’s irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.

Treeniks

yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

And Rust also has "🤦".chars().count(), which returns 1.

I would rather argue that Rust should not have a simple len function for strings, but since str is just a byte slice, it works that way.

Also, the len function’s documentation clearly states:

This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.

That Rust function returns the number of codepoints, not the number of graphemes, which is rarely useful. You need to use a facepalm emoji with skin color modifiers to see the difference.

The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation
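
For illustration, a small sketch using that crate to show the three different "lengths" of the same string side by side:

use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let s = "🤦🏼‍♂️";
    println!("{}", s.len());                   // 17: UTF-8 bytes
    println!("{}", s.chars().count());         // 5: Unicode scalar values
    println!("{}", s.graphemes(true).count()); // 1: extended grapheme clusters
}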

lemmyvore

None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they’re there for legacy reasons.

@simple@lemm.ee

Now this is UX. Wonderful stuff.

[Screenshot of the page showing me 20 mouse cursors moving across the page]

I love it. People should be having more fun with their own personal sites.

And the site’s dark mode is fantastic…

Virkkunen

This one really got a laugh out of me

snooggums

Best dark mode ever!

Lol, who turned the lights out?

@AeroLemming@lemm.ee

I didn’t realize this was sarcastic and was getting ready to post about how broken it looks for me.

flamingos-cant

Thank god for reader view because this makes me feel physically sick to look at.

Hazelnoot [she/her]

Right?? I normally love it when websites have a fun twist, but this one really needs an off button. The other cursors keep covering the text and it becomes genuinely uncomfortable to read. Fortunately, you can easily block the WS endpoint with any ad blocker.

neo (he/him)

Same.

interolivary

The horror

@Blackmist@feddit.uk

Is that other readers’ mouse pointers?

The article sure mentions 💩 a lot.

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

Who wants to tell the author that not everything was invented in the US? (And computers certainly weren’t)

Deebster

The stupid thing is, all the author had to do was write “kind of tells you who invented ASCII” and he’d have been 100% right in his logic and history.

Where were computers invented, in your mind? You could define “computer” multiple ways, but some of the early things we called computers were indeed invented in the US, at MIT in at least one case.

@lucas@startrek.website

Well, it’s not really clear-cut, which is part of my point, but probably the 2 most significant people I could think of would be Babbage and Turing, both of whom were English. Definitely could make arguments about what is or isn’t considered a ‘computer’, to the point where it’s fuzzy, but regardless of how you look at it, ‘computers were invented in America’ is rather a stretch.

‘computers were invented in America’ is rather a stretch.

Which is why no one said that. I read most of the article and I’m still not sure what you were annoyed about. I didn’t see anything US-centric, or even anglocentric really.

@lucas@startrek.website

To say I’m annoyed would be very much overstating it, just a (very minor) eye-roll at one small line in a generally very good article. Just the bit quoted:

currency symbols other than the $ (kind of tells you who invented computers, doesn’t it?)

So they could also be attributing it to some other country that uses $ for its currency, of which there are a few, but it seems most likely to be suggesting USD.

Deebster

I think the author’s intended implication is absolutely that it’s a dollar because the USA invented the computer. The two problems I have are that:

  1. He’s talking about the American Standard Code for Information Interchange, not computers at that point
  2. Brits or Germans invented the computer (although I can’t deny that most of today’s commercial computers trace back to the US)

It’s just a lazy bit of thinking in an otherwise excellent and internationally-minded article and so it stuck out to me too.

Was actually a great read. I didn’t realize there were so many ways to encode the same character. TIL.

TehPers

The only modern language that gets it right is Swift:

print("🤦🏼‍♂️".count)
// => 1

Minor, but I’m not sure this is as unambiguous as the article claims. It’s true that for someone “that isn’t burdened with computer internals” that this is the most obvious “length” of the string, but programmers are by definition burdened with computer internals. That’s not to say the length shouldn’t be 1 though, it’s more that the “length” field/property has a terrible name, and asking for the length of a string is a very ambiguous question to begin with.

Instead, I think a better solution is to be clear what length you’re actually referring to. For example, with Rust, the .len() method documents itself as the number of bytes in the string and warns that it may not be what you’re interested in. Similarly, .chars() clarifies that it iterates over Unicode Scalar Values, and not grapheme clusters (and that grapheme clusters are unfortunately not handled by the standard library).

For most high level applications, I think you generally do want to work with grapheme clusters, and what Swift does makes sense (assuming you can also iterate over the individual bytes somehow for low level operations). As long as it is clearly documented what your “length” refers to, and assuming the other lengths can be calculated, I think any reasonably useful length is valid.

The article they link in that section does cover a lot of the nuances between them, and is a great read for more discussion around what the length should be.

Edit: I should also add that Korean, for example, adds some additional complexity to it. For example, what’s the string length of 각? Is it 1, because it visually consumes a single “space”? Or is it 3 because it’s 3 letters (ㄱ, ㅏ, ㄱ)? Swift says the length is 1.
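
A quick sketch of that ambiguity in Rust (using the unicode-segmentation crate linked earlier; 각 can be stored either precomposed or as three separate jamo):

use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let precomposed = "\u{AC01}";                 // 각 as a single code point
    let decomposed  = "\u{1100}\u{1161}\u{11A8}"; // ㄱ + ㅏ + ㄱ as separate jamo
    assert_eq!(precomposed.chars().count(), 1);
    assert_eq!(decomposed.chars().count(), 3);
    // Both render the same syllable and both form one grapheme cluster,
    // which is why Swift reports a count of 1 either way.
    assert_eq!(precomposed.graphemes(true).count(), 1);
    assert_eq!(decomposed.graphemes(true).count(), 1);
}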

If we’re being really pedantic, the last part in Korean is counted with different units:

  • 각 as precomposed character: 1자 (unit ja for CJK characters)
  • 각 (ㄱㅏㄱ) as decomposable components: 3자모 (unit jamo for Hangul components)

So we could have separate implementations of length() where we count such cases with different criteria… But I wouldn’t expect non-speakers of Korean to know all of this.

Plus, what about Chinese characters? Are we supposed to count 人 as one, but 仁 as one (character) or two (radicals)? It only gets more complicated.

Unicode is thoroughly underrated.

UTF-8, doubly so. One of the amazing/clever things they did was to build off of ASCII as a subset by taking advantage of the extra bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend than if you rewrite).
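
A minimal sketch of what that backwards compatibility looks like at the byte level: every ASCII byte is already valid UTF-8 (high bit 0), and every byte of a multi-byte sequence has the high bit set, so the two can never be confused.

fn main() {
    for b in "A".bytes() {
        println!("{:08b}", b); // 01000001: plain ASCII, unchanged in UTF-8
    }
    for b in "é".bytes() {
        println!("{:08b}", b); // 11000011 10101001: lead byte, then continuation
    }
}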

On the other hand, having dealt with UTF-7 (a very “special” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.

I’ve recently come to appreciate the “refactor the code while you write it” and “keep possible future changes in mind” ideas more and more. I think it really increases the probability that the system can live on instead of becoming obsolete.

Yes, but once code becomes so spaghettified that a “refactor while you write it” is too time-intensive and error-prone, it’s already too late.

JackbyDev

Unrelated, but what do you think (if anything) the last remaining reserved bit in the IP packet header flags might end up being used for?

https://en.wikipedia.org/wiki/Evil_bit

https://en.wikipedia.org/wiki/Internet_Protocol_version_4#Header
