

I recall someone who built an automated system to measure input latency on gamepads and gathered data for a bunch of them over different interfaces, which is a subset of that. They had some sort of automated test rig that moved the controls with a microcontroller-driven system.

looks

Neither of them are what I’m remembering, but it looks like multiple people have built input latency databases.

https://rpubs.com/misteraddons/inputlatency

https://gamepadla.com/

The second looks close to what you might want. Each controller has a page with a fair amount of information.

EDIT: I don’t think that this is what I was thinking of either, but looks like another microcontroller-based system to measure input latency:

https://github.com/maziac/lagmeter

EDIT2: Also not what I was thinking of, but yet another input latency measurement project:

https://epub.uni-regensburg.de/36811/1/ExtendedAbstractLatencyCHI2018.pdf

EDIT3: Also not what I was thinking of, but another:

https://github.com/finger563/esp-usb-latency-test


https://github.com/matrix-org/matrix-appservice-discord

It looks like there’s software to bridge it to Matrix. I have no idea whether that violates ToS. It does look like it’s getting development, though, so I can’t imagine that Discord has been cutting people off en masse for using it.


I get that.

Honestly, though, I’m still a little puzzled as to why people initially got into Discord; I never did.

I can understand why people wanted to use some systems. Twitter does massive-scale real-time indexing. That was a huge feature, really changed what one could do on the platform.

Reddit provided a good syntax (Markdown), had a low barrier to entry (no email verification at a time when that was common), and offered third-party client access. It solved the spam problem that was killing Usenet and permitted more-reasonable moderation.

There were a whole host of services that aimed to lower the complexity bar to getting a web page and some content online associated with someone’s identity; it was clear that the technical knowledge required to get stuff up was a real limiting factor for many people.

But I just didn’t really get where Discord provided much of a win over stuff like IRC. I mean, I guess maybe it bundled a couple services into one, which maybe lowered the bar to use a bit. IRC really seemed pretty fine to me. Reddit bundling image-hosting seems to have lowered the bar and been something that people wanted. Maybe Discord doing images and file-hosting made it more-accessible.

I have no idea why a number of people who liked Cataclysm: Dark Days Ahead used Discord rather than Reddit; it seemed like a dramatically-worse system if one was aiming to create material for others to look back at and refer to.

kagis

https://old.reddit.com/r/RedditForGrownups/comments/t417q1/can_someone_please_explain_discord_to_me_like_im/

It’s just modern day IRC with video.

Ahaha, thanks. This is indeed an ELI60 response, although it doesn’t really explain how Discord suddenly got so popular. But if I couple this with /u/Healthy-Car-1860’s response, I’m kind of getting the picture.

Got popular because it spread through the entire gamer/twitch community like wildfire due to actually being a more complete package and easier to use than anything prior. Online gamers have been struggling with voip software forever (Roger Wilco, Teamspeak, Ventrilo, Skype, and many others).

Once it was rooted in the people who are on their computers all day every day it was bound to spread because the UX is incredibly easy compared to previous options for both chat and voip.

Maybe that’s it. I never had a lot of interest in VoIP, especially group VoIP. When I was playing online games much, people used keyboards to communicate, not mics. There was definitely a period where people needed the ability to collaborate in games and games didn’t always provide that functionality. I remember people complaining about Teamspeak and Ventrilo. I briefly poked at Mumble – nice to have an open-source option – but I just had no reason to want to do VoIP with groups of people.

But I suppose for a video game clan or something, that might be important functionality. And if it’s also a one-stop shop for some other things that you might want to do anyway, it maybe makes sense to just use that rather than multiple services.


If it makes economic sense to break them down for parts and locking doesn’t stop it, I suppose that it might make sense to introduce security holes in phone cases, with a chain that links to a belt or similar.

Bonus – if you’re going to have a chain anyway, can maybe run a cable along it and have attached battery or other phone peripherals elsewhere on you that don’t add to phone weight.


If I need to do an emergency boot from a USB stick to repair something that can’t boot, which it sounds like is what you’re after, pretty much any Linux distro will do. I’d probably rather have a single, mainstream bootable OS than a handful.

I’d use Debian, just because that’s what I use normally, so I’m most familiar with it. But it really doesn’t matter all that much.

And honestly, while having an emergency bootable medium with a functioning system can simplify things, if you’re familiar with the boot process, you very rarely actually need emergency boot media on a Linux system. You have a pretty flexible bootloader in grub, and the Linux kernel can run and be usable enough to fix things on a pretty broken system if you pass something like init=/bin/sh to the kernel (maybe busybox instead for a really broken system), can remount root read-write (mount -o rw,remount /), and know how to force syncs (echo s > /proc/sysrq-trigger) and reboots (echo b > /proc/sysrq-trigger).
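
As a rough sketch of what that looks like in practice – the exact steps vary by distro, and this assumes grub and a shell on the root filesystem are still intact:

    # At the grub menu, edit the kernel command line and append:
    #   init=/bin/sh          (or point init= at busybox for a really broken system)

    # Once you get the shell, mount /proc if it isn't there, then make root writable:
    mount -t proc proc /proc
    mount -o rw,remount /

    # ...do the repairs...

    # Flush everything to disk, then force a reboot via sysrq:
    sync
    echo s > /proc/sysrq-trigger
    echo b > /proc/sysrq-trigger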

I’ve killed ld.so and libc before and brought back systems without alternate boot media. The only time I think you’d likely really get into trouble truly requiring alternate boot media is (a) installing a new kernel that doesn’t work for some reason and removing all the old, working kernels before checking to see that your new one works, or (b) killing grub. Maybe if you hork up your partition table or root filesystem enough that grub can’t bring the kernel up, but in most of those cases, I’m not sure that you’re likely gonna be bringing things back up with rescue tools – you’re probably gonna need to reinstall your OS anyway.

EDIT: Well, okay, if you wipe the partition table, I guess that you might be able to find the beginning of a filesystem partition based on magic strings or something and either manually reconstruct the partition table or at least extract a copy of the filesystem to somewhere else.
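
For what it’s worth, there are tools that automate exactly that kind of search. testdisk, for example, scans a disk for lost partitions by their filesystem signatures and can write a reconstructed partition table (the device name below is just an example; it’s an interactive tool that walks you through the scan):

    testdisk /dev/sda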


Everyone left out there that ever thought they might give gaming a shot did so during the lockdown, and they either stuck with it, or they realized it wasn’t a forever hobby for them

looks dubious

That seems like an overly-strong statement.

There will be a point where the whole world has access to video games, and we’re getting closer to that time. There are certainly limits on growth approaching. But I don’t think that we’re at those limits yet.

For mobile phones in sub-Saharan Africa:

https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-economy/sub-saharan-africa/

unique mobile subscribers in 2023, indicating a 43% penetration rate

That’s not even smartphones. And even smartphones can only run certain types of video games. There’s a lot of the world that still is constrained by limited development.


Internet Archive creates digital copies of print books and posts those copies on its website where users may access them in full, for free, in a service it calls the “Free Digital Library.” Other than a period in 2020, Internet Archive has maintained a one-to-one owned-to-loaned ratio for its digital books: Initially, it allowed only as many concurrent “checkouts” of a digital book as it has physical copies in its possession. Subsequently, Internet Archive expanded its Free Digital Library to include other libraries, thereby counting the number of physical copies of a book possessed by those libraries toward the total number of digital copies it makes available at any given time.

This appeal presents the following question: Is it “fair use” for a nonprofit organization to scan copyright-protected print books in their entirety, and distribute those digital copies online, in full, for free, subject to a one-to-one owned-to-loaned ratio between its print copies and the digital copies it makes available at any given time, all without authorization from the copyright-holding publishers or authors? Applying the relevant provisions of the Copyright Act as well as binding Supreme Court and Second Circuit precedent, we conclude the answer is no. We therefore AFFIRM.

Basically, there isn’t an intrinsic right under US fair use doctrine to take a print book, scan it, and then lend digital copies of the print book.

My impression, from what little I’ve read in the past on this, is that this was probably going to be the expected outcome.

And while I haven’t closely-monitored the case, and there are probably precedent issues that are interesting for various parties, my gut reaction is that I kind of wish that archive.org weren’t doing these fights. The problem I have is that they’re basically an indispensable, one-of-a-kind resource for recording the state of webpages at some point in time via their Wayback Machine service. They are pretty widely used as the way to cite a page on the Web.

What I worry about is that they’re going to get into some huge fight over copyright on some not-directly-related issue, like print books or something, and then someone is going to sue them and get a ton of damages and it’s going to wipe out that other, critical aspect of their operations…like, some random publisher will get ownership of archive.org and all of their data and logs and services and whatnot.


considers

You could probably do these automatically, given an automated loom – one of our first forms of programmable industrial hardware – and a chip layout description.

kagis

Here’s an inexpensive computer-controlled loom for $10k-$15k:

https://www.camillavalleyfarm.com/weave/weavebird.htm

I assume that the same design could be scaled up with larger motors and parts, worst case, so that probably puts a ceiling on about what it’d cost to do this automatically.


Once again, humans were taking robot jobs. Robots won’t stand for this sort of thing!


released for public testing

I mean, it’s not publicly-available either; it’s just available to a select group of testers.

I haven’t been following the game’s development. But my guess is that the devs are going to prioritize targeting the machines that they’re using to do development of the thing. They won’t be using a Deck to develop the thing. This probably won’t be the only tradeoff made, either – I’d guess that performance optimizations aimed at the Deck or other lower-end machines might be something that would be further down on the list. I’d guess that any kind of tutorial or whatever probably won’t go in until late in the development – not that that isn’t important for bringing new users up to speed, but it’s just not something that the devs need in order to work on the thing. Probably not an issue for this game, which looks like it’s multiplayer, but I’d guess that breaking save or progress compatibility is something that they’d be fine with. That’s frustrating for a player, but it can make development a lot easier.

Doesn’t mean that those don’t matter, just that they won’t be top of the priority list to get working. What they’re gonna prioritize is stuff that unblocks other things that they need.

I worked on a product in the past that had a more “customer-friendly” interface and a command line interface. When a feature gets implemented, the first thing that a dev puts in is the CLI support – it’s low-effort, and it’s all that the dev needs to get the internal feature into a testable state for the internal people. The more-customer-friendly stuff, documentation, etc. all happens later in the development cycle. Doesn’t mean that we didn’t care about getting that out, just that we didn’t need it to unblock other parts of the development process. Sometimes we’d give access to development builds to customers who specifically urgently needed a feature early-on and were willing to accept the drawbacks of using stuff that just isn’t done, but they’re inevitably gonna be getting something that’s only half-baked.

I mean, if it bugs you, I’d just wait. Like, they aren’t gonna be trying to provide an ideal customer experience at this point in the development cycle. They’re just gonna want to be using it as a testbed to see what works. It’s gonna inevitably be a subpar experience in various ways for users. The folks who are using the thing at this point are volunteering to do unpaid testing work in exchange for getting to play the thing very early and maybe doing so at a point where they can still alter the gameplay substantially. There are some people who really enjoy that, but depends on the person. It’s not really my cup of tea. I dunno about you, but I’ve got a Steam games backlog that goes on forever; it’s not like I’ve got a lack of finished games to get through.


released

I mean, it’s not released.

https://store.steampowered.com/app/1422450/Deadlock/

About This Game

EARLY DEVELOPMENT BUILD

Deadlock is still in early development stages with lots of temporary art and experimental gameplay.

LIMITED ACCESS

Access to Deadlock is currently limited to friend invites via our playtesters.

It’s not even Early Access.

Like, if you want to play it at this point, you’re gonna get something that isn’t done. It’s hopefully playable, but…shrugs


I haven’t played it, but it sounds like the situation may be in flux:

https://www.oneesports.gg/gaming/does-deadlock-have-controller-support/

At the time of writing, the action game is in closed beta, and it doesn’t offer native controller support. However, it does have an option that players can use to play the game with a controller.

With that in mind, the game is likely to feature controller support when it releases on PC, as it is expected to be Steam Deck compatible.

However, you must keep in mind that since the game is still in early development, it doesn’t offer any key binding or customization feature.

Additionally, even with a controller on default settings, some key actions in the game may not be mapped, so you might encounter limitations during gameplay.

In the near term, if a keyboard can do what you want, if you can dig up macro software for your platform that can look for specific gamepad combinations and send keystrokes as a result, I imagine that you could make it work that way.


CIFS supports leases. That is, hosts will try to ask for exclusive access to a file, so that they can assume that it hasn’t changed.

IIRC sshfs just doesn’t care much about cache coherency across hosts; it just kind of assumes that things haven’t changed underfoot and uses a timer to expire the cache.

considers

Honestly, with inotify, it’d probably be possible to make a newer sshfs that does support leases.

I suspect that the Unixy thing to do is to use NFSv4 which also does cache coherency correctly.

It is easy to deploy sshfs, though, so I do appreciate why people use it; I do so myself.

kagis to see if anyone has benchmarks

https://blog.ja-ke.tech/2019/08/27/nas-performance-sshfs-nfs-smb.html

Here are some 2019 benchmarks that show NFSv4 to generally be the most-performant.

The really obnoxious thing about NFSv4, IMHO, is that ssh is pretty trivial to set up, and sshfs just requires a working ssh connection and sshfs software installed, whereas if you want secure NFSv4, you need to set up Kerberos. Setting up Kerberos is a pain. It’s great for large organizations, but for “I have three computers that I want to make talk together”, it’s just overkill.
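
Just to illustrate the difference in setup effort – host names and paths here are made up – the client side looks something like this:

    # sshfs: if "ssh user@fileserver" already works, this is basically the whole job:
    sshfs user@fileserver:/srv/data /mnt/data

    # Kerberized NFSv4: the mount itself is also one line...
    mount -t nfs4 -o sec=krb5p fileserver:/srv/data /mnt/data
    # ...but it presupposes a KDC, host/service principals and keytabs on both ends,
    # an exports entry on the server, and so on.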


I don’t know how hd-idle stores its data, but sysstat and some other utilities will log I/O on a per-device basis, and there are utilities that can graph it.
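
For example – assuming sysstat is installed and its periodic data collection is enabled – something like this will get you per-device numbers:

    # Live extended per-device I/O statistics, refreshed every 5 seconds:
    iostat -dx 5

    # Historical per-device activity from sysstat's collected data:
    sar -d

kSar and similar tools can then turn the sar data into graphs.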


The company with the poorly-made batteries in question, DCS, appears to be “Deep Cycle Systems”.


I don’t use any myself, but my first search turns up a couple options:

Or if you know better place to ask (other than /r/…), I’d be glad.

This is on Reddit, but it has people talking about just buying new mouse switches for arbitrary mice and resoldering them.

https://old.reddit.com/r/MouseReview/comments/11ay2sq/silent_mouse_switches/

I’m not familiar with this, but it sounds like it’s practical to just take a mouse, buy new mouse switches, swap them out, as long as you can solder. So if you’ve got another mouse that does what you want – and it sounds like you like the Logitech G305 – you can probably just modify it, if you’re willing to put in the effort. I’d read up on this further before going that route.

This post specifically deals with replacing the switches on a Logitech G305 to make it silent:

https://old.reddit.com/r/MouseReview/comments/vi1z4p/how_to_make_a_g305_silent_this_works_for_other/

Additionally, it looks like silent mice are a thing, so you’ve probably got a number of options out there if you want off-the-shelf.


Could barely sleep, literally heard it in my dreams.

I do think that there’s an argument that maybe apartment buildings should be required to list some kind of sound isolation metrics.


This is not social media.

I hate to break it to you, but Reddit and similar fall under the category of social media.

https://en.wikipedia.org/wiki/Lemmy_(social_network)

Lemmy is made up of a network of individual installations of the Lemmy software that can intercommunicate. This departs from the centralized, monolithic structure of other social media platforms.[9] It has been described as a federated alternative to Reddit.[10]


but all social media is like this.

I’m not a bot, and we’re talking on social media.


Not having mandatory security is a legit issue, but there isn’t a drop-in replacement that has it, not in 2024. You’re gonna need widespread support, support for file transfer, federated operation, resistance to abuse, client software on many platforms, etc.

And email security is way down the list of things that I’d be concerned about. At least with email, you’ve got PGP-based security. If you’re worried about other people’s mail providers attacking mail you send them, that’s getting into “do you trust certificate authorities to grant certificates” territory, because most secure protocols are dependent upon trusting that.

Like, XMPP with OTR is maybe a real option for messaging, but that’s not email.

EDIT: Not to mention that XMPP doesn’t mandate security either.


No PGP support

Why would the mail provider need to support it? I mean, if they provide some sort of webmail client, maybe it doesn’t do PGP, but I sure wouldn’t be giving them my PGP keys anyway.

I haven’t used any of them, but I don’t think that you can go too far wrong here, since you have your own domain. Pick one, try it for non-critical stuff for a month or two, and if you don’t like it, switch. As long as you own the domain, you’re not locked in. If you do like it, then just start migrating.

The main differentiating factors I can think of are (a) service reliability, (b) the risk that someone breaks in and dumps client mail – though it’s hard for me to evaluate that risk at a given place – and (c) how likely it is that other parties spam-block mail from them.

I’d look for TLS support for SMTP and IMAP; that may be the norm these days. The TLS situation for mail is a little unusual compared to most protocols: on a new connection, some servers initially speak the non-encrypted version of the protocol and then upgrade via STARTTLS.
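
If you want to check what a given provider actually does – the host name here is a placeholder – openssl’s client can poke at both styles:

    # Submission port 587: starts in plaintext, upgrades via STARTTLS
    openssl s_client -starttls smtp -connect mail.example.com:587

    # IMAP port 143: the STARTTLS variant for IMAP
    openssl s_client -starttls imap -connect mail.example.com:143

    # IMAPS port 993: TLS from the first byte
    openssl s_client -connect mail.example.com:993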

If you intend to leave your mail on their server rather than just using it as a temporary holding point until you fetch it, you might look into how much storage they provide.

I’d also see what the maximum size of any individual email that they permit is.


I would advise against this.

I am all about running things yourself, run most stuff myself, but email is just a nightmare these days with all the anti-spam stuff out there.

Go ask at !selfhosted@lemmy.world. They’ll tell you the same thing. Lots of hassle, lots of potential pitfalls.


I once worked on a product that you really did not want to have fail to come back up. I was on it several years after the original engineers had designed an early model. Said engineers had not tested what happened when the CMOS battery died and triggered a reset of BIOS settings, bringing it back to the hardware platform’s default state. When it did, the thing entered a non-bootable state. You could, with a serial port, access the BIOS and fiddle the settings back for one good boot…but the CMOS battery was non-removable, soldered to the motherboard. Our manufacturing process had not involved changing the default BIOS settings, just what was stored in CMOS. Oops.

IIRC our customer care guys just sent out new models for free to affected customers – the original hardware model wasn’t sold in large volume, and the cost of the actual hardware components wasn’t especially large relative to the cost of the product.

I had one sitting around on my desk, as it was sometimes handy to have a physically-accessible device to do work on. I rolled down to Radio Shack – yes, this was a few years back – got a removable CMOS battery case, stripped the non-removable battery out, soldered the battery case to the motherboard, and had the only instance of the device out there that could take a fresh CMOS battery.


there don’t seem to be that many on Steam that catch my interest.

I don’t know the situation on consoles, but on the PC…

I am not a pinball expert, though I do enjoy video pinball, but none of these are what I’d call the major PC pinball engines with reasonably-realistic physics, things that do a lot of tables. Look at these:

  • Visual Pinball. I was not able to get this working on Linux the few times I’ve tried, or to successfully get access to the forums that distribute tables (some kind of broken registration system). This is, as I understand it, what a typical person uses if they just want to make and distribute a free table. It also has many bootleg implementations of commercial tables. Source-available, though it only runs natively on Windows.

  • Pinball Arcade. IIRC, these guys used to have a license for some major physical table distributors, like Williams, and had it expire. I have this, and the engine hasn’t been updated in some time. I run a high-refresh-rate monitor, and IIRC it has a limit of 60Hz, probably because the physics engine also runs at that rate. I don’t think that it’s getting a lot of updates, and I had some trouble running it last time I tried. This would not be my recommended engine unless it’s the only place to get a table that you specifically want.

  • Zaccaria Pinball. Good if you want elderly pinball, pre-solid-state-electronics era, electromechanical pinball tables. They have some tables that they developed, not copies of real-world tables, that I personally like more than their real-world tables. They don’t have implementations of real-world tables for some major popular US manufacturers.

  • Pinball FX3 (less old than Pinball Arcade). Not bad, but replaced by the below Pinball FX.

  • Pinball FX (despite the name, newer). This is the only one off the top of my head that can do high-refresh-rate, and it’s also being kept current. It has a lot of stuff that I’d call fluff and would rather not have, like toys that animate more than on the real-world tables and sometimes obstruct your view, animations to wait through, and such. Also has some kind of online-DRM system that takes a sec at startup. Some of this can be turned off. Places a lot of emphasis on this virtual pinball basement full of virtual trophies. Has occasional very brief stutters for me. Many of the non-real-life boards are wide, designed around a present-day landscape-orientation computer monitor, which feels weird but is more friendly to, say, a laptop with a fixed orientation monitor, though maybe not what you want if you’re going to set up a dedicated pinball computer with a portrait-orientation monitor. Lots and lots of non-real-world licensed tables associated with movies and the like that I’m not really enthusiastic about; I would recommend trying those tables before buying them. This is probably what I’d look at if I were aiming to get one today, as the engine’s the newest.

I think that all of these let you download the engine and try out some basic play (IIRC Zaccaria has time-limited plays on tables that you don’t own, and Pinball FX has a rotating collection that you can try for free), so you can just install them and see what you like, but if you’re looking for a starting point with something reasonably modern and with a bunch of tables, these are probably where you want to look.

If you don’t have a strong preference as to tables and are also just feeling around for something to try, I personally like some classic real-life Williams tables, Medieval Madness and Tales of the Arabian Nights. Neither is too rough in terms of draining down the side channels, in my humble opinion. The Addams Family is also a popular table.

Note that if you haven’t touched video pinball for a long time (like, I played a few games in the late 1990s and then was away from it for a while), these engines also simulate nudging the machine, and doing so is expected during play.

EDIT: If you’re willing to hit Reddit for information, /r/videopinball and /r/pinball exist; they were where I got some information back when. If not, there’s !pinball@lemmy.world – not a lot of life yet, but, hey, each additional person adds to it!

EDIT2: My understanding from past reading of said forums is that Visual Pinball is considered to have the best physics, but is fiddly to get working and get tables working on (and I don’t think that this was said from the standpoint of someone trying to run anything on Linux, just Windows).

EDIT3: I would also recommend not purchasing a great many tables unless you’re sure that you’re actually going to play them. Yes, you can buy the equivalent of multiple arcades full of virtual machines at one swoop thanks to modern technology, but…I have tables on all of the commercial engines here and personally find that I play a very small percentage of the tables that I have. Pinball, I think, benefits from becoming familiar with particular tables.


The reason that robots.txt generally worked was because nobody was really trying to leverage it against bot operators. I’m not sure that this won’t just kill robots.txt. Historically, search engines wanted to index stuff and websites wanted to be indexed. Their interests were aligned, so the convention worked. This no longer holds if things like the Google-Reddit partnership become common.

Reddit can also try to detect and block crawlers; robots.txt isn’t the only tool in their toolbox.

Microsoft, unlike most companies, does actually have a technical counter that Reddit probably cannot stop, if it comes to that and Microsoft wants to do a “hostile index” of Reddit.

Microsoft’s browser, Edge, is used by a bunch of people, and Microsoft can probably rig it up to send content of Reddit pages requested by their browser’s users sufficient to build their index. Reddit can’t stop that without blocking Edge users. I expect that that’d probably be exploring a lot of unexplored legal territory under the laws of many countries. It also wouldn’t be as good as Google’s (I assume real-time) access to the comments, but they’d get to them.

Browsers do report the referrer (the Referer header), which would permit Reddit to detect that a given user has arrived from Bing and block them:

https://en.wikipedia.org/wiki/HTTP_referer

In HTTP, “Referer” (a misspelling of “Referrer”[1]) is an optional HTTP header field that identifies the address of the web page (i.e., the URI or IRI), from which the resource has been requested. By checking the referrer, the server providing the new web page can see where the request originated.

In the most common situation, this means that when a user clicks a hyperlink in a web browser, causing the browser to send a request to the server holding the destination web page, the request may include the Referer field, which indicates the last page the user was on (the one where they clicked the link).

Web sites and web servers log the content of the received Referer field to identify the web page from which the user followed a link, for promotional or statistical purposes.[2] This entails a loss of privacy for the user and may introduce a security risk.[3] To mitigate security risks, browsers have been steadily reducing the amount of information sent in Referer. As of March 2021, by default Chrome,[4] Chromium-based Edge, Firefox,[5] Safari[6] default to sending only the origin in cross-origin requests, stripping out everything but the domain name.

Reddit could block browsers arriving with a Referer off bing.com, killing the ability of Bing to link to them. I don’t know if there’s a way for a linking site to ask a browser to not send, or to forge, the referrer. For Edge users – not all Bing users – Microsoft could modify the browser to do so, forcing Reddit to decide whether to block all Edge users or not.
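
As a rough illustration of the mechanism – the URL and header value here are just examples – the referrer is simply a header the browser volunteers, which is why it’s both easy for Reddit to filter on and easy for a browser vendor to stop sending:

    # Roughly what a click from a Bing results page looks like to Reddit's servers:
    curl -H "Referer: https://www.bing.com/" \
         -o /dev/null -w "%{http_code}\n" \
         https://old.reddit.com/r/AskReddit/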


It has an engine that permits recording and “rewinding” gameplay, with a lot of interesting quirks, like elements that don’t rewind. Puzzle platformer based on that.

It was a fascinating thing technically, and the creator did a lot with that capability. But IMHO it’s not otherwise exceptional, like graphically or such.


I guessed in a previous comment that, given their new partnership, Reddit is probably feeding their comment database to Google directly, which reduces load for both of them and permits Google to have real-time updates of the whole kit and caboodle rather than polling individual pages. Both Google and Reddit are better off doing that, and for Google it’d make sense for any site that’s large enough and valuable enough to warrant putting forth special-case effort for that site.

I know that Reddit built functionality for that before; they used it for pushshift.io and, I believe, bots.

I doubt that Google is actually using Googlebot on Reddit at all today.

I would bet against either Google violating robots.txt or Reddit serving different robots.txt files to different clients (why? It’s just unnecessary complication).


I haven’t used Piper, but I do want to let you know that it may be a lot easier than you think. I have used TortoiseTTS, and there, you can just feed in a handful (like, four or so) short clips (maybe six seconds, arbitrary speech), and that’s adequate to let it do a reasonable facsimile of the voice in the recordings. Like, it doesn’t involve long recording sessions speaking back pre-recorded speech, and you can even combine samples from different people to “mix” their voices. I grabbed a few clean short recordings from video of someone speaking, and that was sufficient. TortoiseTTS doesn’t even retain the model; it rebuilds it from scratch from the samples you provided every time it renders voice (which is a testament to how little data it pulls in). It’s not on par with, say, the tremendous amount of work involved in creating a voice for Festival or similar. The “Option B” for Piper on the page I linked to has:

I have built usable voice models with as few as 50 samples of the target voice.

…which is more than the tiny handful that I was using on TortoiseTTS, but might open up a lot of options and provide control over what you’re hearing, especially if you have a voice that you really like.

But, okay. Say you decide that you want to go the post-text-to-speech transform route. Do you have any idea how you want to process them? The most-obvious things I can think of are:

  • Pitch-shifting, like if you want the voice to sound more feminine or masculine.

  • Tempo-shifting, like if you want the voice to speak more-quickly or more-slowly, but without altering the pitch.

Those are straightforward transforms that people do do on voice recordings; if you want a command-line tool that can do this in a pipeline, sox is a good choice that I’ve used in the past.

I can imagine that maybe you just want to apply some kind of effect to it (sounding like a robot in an echoey cave? Someone talking over an old radio? Shifting the perceived 3D position of the audio source?). There’s a Linux – I’m assuming, given your preference for a CLI, and the community, that this is a Linux environment – audio plugin system called LADSPA and a successor system called LV2. Most Linux audio software, including sox, can run these on audio streams.

You can maybe do automated removal of silent bits, if there are excessive pauses…sox has silence-removal functionality.
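
For instance – the file names and amounts here are just placeholders, and I’d double-check the sox man page rather than trust my memory of the exact arguments:

    # Shift pitch up ~3 semitones (300 cents) without changing speed:
    sox in.wav out_pitched.wav pitch 300

    # Speak ~25% faster without changing pitch:
    sox in.wav out_faster.wav tempo 1.25

    # Trim long silences throughout the recording:
    sox in.wav out_trimmed.wav silence 1 0.1 1% -1 0.5 1%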

But most other things that I can think of that one might want to do to a voice, more-sophisticated stuff, like making it sound happy, say, or giving it a different accent or something…I think that it’s going to be a lot harder to do that after the text-to-speech phase rather than before.


Do you guys have any recommendation for a voice changer to process these audio files?

I’m not totally sure what you’re going for.

If you want to transform spoken audio to a different sort of voice, then that’s one problem.

But this Piper thing appears to be a text-to-speech software package, and I’d think that it’d be easier and provide a more-capable system to just obtain a different voice and re-generate the audio from the text, rather than generating the audio and then transforming it, unless I’m not getting what you’re going for.

Like, here’s a project – which I have not used – to generate Piper voices from audio samples of speech.


I haven’t hit that, but one thing that might help if you don’t like that – you might be able to set it up such that they only operate in your environment when chorded – like, when you hit multiple buttons at the same time. Like, only have “left click plus back” send “back” and “left click plus forward” send “forward”, or something akin to that.

These days, I use sway on Linux, which provides for a tiled desktop environment – the computer sets the size of windows, which are mostly fullscreen, and I don’t drag windows. But when I did, and before mice had the convention of using “back” and “forward” on Button 4 and Button 5, I really liked having the single-button-to-drag-anywhere functionality, though I never really found a use for the fifth button. If I were still using a non-tiled environment, I’d probably look into doing chording or something so that I could still do my “drag anywhere on the window” thing.


I don’t personally go down the wireless mouse route – in fact, in general, I’d rather not use wireless and especially Bluetooth devices, due to reliability, latency, security, needing-to-worry-about-battery-charge, and privacy (due to broadcasting a unique ID that any nearby cell phone will relay the position of to Google, Apple, or similar). But I’d say that aside from that, most of those are advantageous, and a lot of people out there don’t care (or don’t know about) wireless drawbacks, so for them, even those are a win.

The main complexity item I can think of is the buttons. Maybe back in the day, few set up Mouse Button #5 to be “drag window” in their window manager, as I did, so I could drag windows anywhere rather than on their titlebar. However, the browser “back” and “forward” functionality that I believe is the default in all desktop environments these days seems pretty easily-approachable.


I’m not planning to throw that watch away ever. So why would I be throwing my mouse or my keyboard away if it’s a fantastic-quality, well-designed, software-enabled mouse?

Because watch technology is mature and isn’t changing. Nobody’s making a better watch every few years.

That generally isn’t true of computer hardware.

In the 1980s, you had maybe a one- or two-button mouse with opto-mechanical encoder wheels turned by a ball that gummed up and would stick.

After that:

  • A third mouse button showed up

  • A scrollwheel showed up

  • Optical sensors showed up.

  • Better optical sensors showed up, with the ability to function on arbitrary surfaces and dejittering.

  • Polling rate improved

  • Mice got the ability to go to sleep if not being used.

  • More buttons showed up, with mice often having five or more buttons.

  • Tilt scrollwheels showed up

  • Wireless mice showed up

  • Better wireless protocols showed up

  • Optical sensor resolutions drastically increased

  • Weight decreased

  • Foot pads used less-friction-inducing material.

  • Several updates happened to track changing ports (on PC, serial, PS/2, USB-A, and probably soon USB-C).

  • The transparent mouse bodies that were initially-used on many optical mice (to show off the LED and that they were optical) went away as companies figured out that people did not want to have flashing red mice. (I was particularly annoyed by this, modded a trackball that used a translucent ball to use a near-infrared LED back in the day).

If wristwatches had improved like that over the past 40 years, you likely wouldn’t be keeping an older one either.

If you think that there isn’t going to be any more change in mice, okay, maybe you can try selling people on the same mouse for a long time. I’m skeptical.


Well, they give the rationale there too – that most webpages out there are, well, useless.

I think that the heuristic is mis-firing in this case. But…okay, let’s consider context.

I think that the first substantial forum system I used was probably Usenet. I used that at a period of time where there was considerably less stuff around on the Internet, and I had a fair amount of free time. Usenet was one of several canonical systems that “intro to the Internet” material would familiarize you with. You had, oh, let’s see. Gopher and Veronica. FTP and Archie. Finger. Telnet. VAX/VMS’s Phone, an interactive chat program that could span VMS hosts (there was probably some kind of Unix implementation too, dunno). IRC. Usenet. The Web (which was damned rudimentary at that point in time). I’d had prior familiarity with BBSes, so I knew that forums were a thing from that. There were maybe a few proprietary protocols in there too – I used Hotline, which was a Mac over-the-Internet forum-and-file-hosting system.

But there just weren’t all that many systems around back then. Usenet was one of the big ones, and it was very normal for people to learn how to use it, because it was one of a limited set of options.

So the reason I initially looked at and became accustomed to a forum system was because it was one of a very limited number of available systems back in the day.

Okay, what about today? When I go see a new forum system, I immediately say to myself “Ah hah! This is a forum system!” I immediately know what it is, roughly how it probably works, what one might do with it, its limitations and strengths and how to use it. Even though I have maybe never used that forum website before a second in my life, I have a ton of experience that provides me with a lot of context.

Let’s say that you don’t have a history of forum use. Never before in your life have you used an electronic forum. Someone says “you should check out this Reddit thing”. You look at it. To you, this thing doesn’t immediately “say” anything. You’ve got no context. It says “it’s the front page of the Internet”. What…does it do? What would one use it for? There’s no executive summary that you see. You don’t have a history of reading useful information on forums, so it’s not immediately obvious that this might have useful information.

Now, I’m not saying that you can’t still assess the thing as useful and figure it out. Lots of people have. But I’m saying that having it fail that initial test becomes a whole lot more-reasonable when you consider that missing context of what an electronic forum is coupled with the extremely short period of time that people give to a webpage and why. You’d figure that there would be some significant number of people who would glance at it, say “whatever”, and move on.

Facebook was really successful in growing its userbase (though I’ve never had an account and don’t want to). Why? Because, I think, it is immediately clear to someone why they’d use it. It’s because they have family and friends on the thing, and staying in touch with them is something that they want to do. The application is immediately clear. With Reddit or similar, it’s a bunch of pseudononymous users. People don’t use Reddit to keep in touch with family and friends, but to discuss interests. But if you’ve never had the experience of using a system that does that, it’s not immediately obvious what the problems are that the system solves for you.

I was talking with some French guy on here, few months back. He was talking about how American food is bad. He offered as an example how he went to an American section of a grocery store in France and got a box of Pop-tarts after hearing about how good they were. He and his girlfriend got a box and tried them. They were horrible, he said. He said that he threw them in the garbage, said “they should be banned”. I asked him whether he’d toasted them before eating them.

Now, is the guy stupid? No. I’m sure that he functions just fine in life. If you look at a box of Pop-tarts, it doesn’t tell you anywhere on the thing to toast them. The only clue that you might have that you should do so is in the bottom left corner, where the thing says “toaster pastries”, but God only knows what that means, if you even read it. Maybe it means that they toasted them at the factory. We don’t have that problem, because we have cultural context going in – we had our parents toast them for us as a kid, and so the idea that someone wouldn’t know to toast one is very strange to us. The company doesn’t bother to put instructions on the box, because it’s so widespread in American culture that everyone knows how to prepare one. My point is just that a lot of times, there’s context required to understand something, and if someone has that context available to them, it can be really easy to forget that someone else might not and for the same thing to not make sense to them.


I had a family member remark that they had tried to use Reddit, and it was “too busy-looking” and hard to understand, and they are in their 40s.

So, I remember reading something on website UI back when, where someone said that some high percentage of users basically will only allocate a relatively-low number of seconds to understanding a website, and if it doesn’t make sense to them in that period of time, they won’t use it. It’s a big reason why you want to make the bar to initial use as low as possible.

kagis

This isn’t what I was thinking of, but same idea:

https://www.nngroup.com/articles/how-long-do-users-stay-on-web-pages/

It’s clear from the chart that the first 10 seconds of the page visit are critical for users’ decision to stay or leave. The probability of leaving is very high during these first few seconds because users are extremely skeptical, having suffered countless poorly designed web pages in the past. People know that most web pages are useless, and they behave accordingly to avoid wasting more time than absolutely necessary on bad pages.

If the web page survives this first — extremely harsh — 10-second judgment, users will look around a bit. However, they’re still highly likely to leave during the subsequent 20 seconds of their visit. Only after people have stayed on a page for about 30 seconds does the curve become relatively flat. People continue to leave every second, but at a much slower rate than during the first 30 seconds.

So, if you can convince users to stay on your page for half a minute, there’s a fair chance that they’ll stay much longer — often 2 minutes or more, which is an eternity on the web.

So, roughly speaking, there are two cases here:

  • bad pages, which get the chop in a few seconds; and
  • good pages, which might be allocated a few minutes.

I’ve also seen both Lemmy and Mastodon criticized for the “select an initial home instance” decision, because it significantly increases that bar to initial use. Maybe it’d be better to at least provide some kind of sane default, like randomly selecting among the non-special-interest top-N home instances geographically near the user.

Reddit (at least historically, don’t know if it’s different now) was somewhat-unusual in that they didn’t require someone to plonk in an email address to start using the thing. That’d presumably be part of the “get to bar to initial use low” bit.


We can also learn about alternative networks by perusing Gopher, Gemini, and other networks.

I think that Gopher is neat and have some nostalgia going on there, but it’s not what I would recommend to random people as a first stab if they just want a less-centralized experience.


So far, this thing doesn’t seem like a very impressive tablet. But Daylight is more a display company than a tablet company — and the display is pretty great.

Hmm.

kagis

It looks like you can get this RLCD stuff as a computer monitor, too.

https://www.sunvisiondisplay.com/product/The-rE-Monitor-Featuring-32-Color-RLCD-Technology

Full color RLCD computer monitor – a vacation for your eyes

With SVD’s groundbreaking reflective LCD technology, you get all the benefits of LCD monitors without all the headaches — zero blue light emissions, no backlight-related flickering, and brightness perfectly matched to your surroundings.

If laptop screens could be disconnected so that one could swap a display of different type in, I imagine that it’d be nice to swap one of these in if you wanted to work outside.


Steve Burke is saying that they’re just about to start their AMD Zen vs Intel benchmarks, and unless Intel releases some kind of explanation, as things stand – since they don’t have a known, definitely non-problematic configuration for the Intel CPUs – they’re going to just have to go with a “we do not currently recommend buying Intel processors”.

Honestly, what I’d do is just compare against Intel 12th-gen chips. Like, yeah, that sucks, but that’s the last definitely-known-good chip that Intel’s put out. You can still buy them, they’ll work on current motherboards, and they’re LGA1700. Then caveat it with the “if Intel can confirm that some Raptor and Meteor Lake chips are not affected, we can revise this”.

On Passmark, a 14900 is about 46% faster than a 12900 on the multithreaded benchmark, and about 14% faster on the singlethreaded benchmark. The 12900 is slower, but it’s also stable and available.


Thanks. It says that there are already browser plugins that use their database, so looks like there’s already a way on both the scraper and user ends to programmatically avoid link rot here.


I’d assume that Google’s value – as with other link-shortening companies – came from being able to add information tracking whenever someone clicked on that link.

If you mean customer value, it might be formats where people had limited space to include links, like traditional Twitter (which was originally 140 characters in a post, whereas URLs have no specification-mandated length limit).


It might be interesting to have a search engine or someone else who has built a massive list of links visible online generate unshortened forms now before Google shuts down the service.


What game genre would you like to see more entrants in?
This was something I started wondering about when I was reading a thread about *Star Citizen*, and about how space combat flight games were much less-common than they had been at one point, how fans of the genre were hungry for new entrants.

Looking at this list:

https://en.wikipedia.org/wiki/List_of_space_flight_simulation_games#Space_combat_games

...there really were far more games in the genre being released in the late 1990s and early 2000s than there have been recently.

A similar sort of phenomenon occurred for World War II first-person shooters.

https://en.wikipedia.org/wiki/List_of_World_War_II_video_games

Back around the same time period, there was a glut of games in the genre, and they really have fallen off quite a bit.

Whether it's a genre like these two, that hasn't seen many new entrants recently, or a genre that just never grew as much as you'd like, what genre would you like to see more of?