One foot planted in “Yeehaw!”, the other in “yuppie”.


Even on Windows, Proton Drive is hot garbage. It never syncs my files correctly and has a tendency to leave half-encrypted uploads just lying around, eating up disk space.

Don’t even get me started on how long it takes to upload anything. Got a 1 GB file? Good luck!

And that’s before getting into the fact that it’s Proton’s third product. It was announced in 2019; five years later, and they still don’t have Proton Drive as a working product.

Another gripe I have is that the Linux VPN client still doesn’t support WireGuard. Sure, you can download WireGuard configuration files, and they work just fine. But changing servers is a pain in the ass because of it.

It’s made me seriously consider dropping my Visionary plan and moving to a more competent provider.

That being said, Proton Mail has been fantastic, and I have a ton of domains on it, so it would be a pain to move. I guess I’m just in a stalemate.


Well, seeing that Insurgency: Sandstorm was on sale, I just picked it up for him (and myself). It seems to have a lot going on in the map-making scene, and that’s a really important factor for him.

It also helps that the prior Insurgency game has the most hours on his profile, by far. Gave me a good hint that he should enjoy this one.

Thanks so much!

EDIT: My dad just got back to me, and loves the gift. Apparently that’s where most of his online buddies went and still are. Nailed it!


My dad just got back online after years of boondocking it - he likes tactical shooters with a rich map-making scene - what’s out there?
As in the title, my father is an American nomad, and he just recently got a spot with good internet signal for a few months. He hasn't really played in years, and the last games he really enjoyed were Warface and Novalogic's Joint Operations: Combined Arms.

There is a bit of a twist, though: his vision certainly isn't what it used to be, so whatever game I suggest needs accessibility options galore. I found a really good "singleplayer only" experience in Ravenfield, and the style lends itself very well to my father's limited vision. Is there something like Ravenfield but with a well-supported online component? Perhaps BattleBit Remastered is pretty close?

EDIT: I suppose the genre is better described as "mil-sim" than "tactical shooter".

UPDATE: Someone recommended the latest Insurgency game. After realizing my father had over 1K hours in the previous Insurgency game, I realized that this was the game to get. Turns out it was a good choice! That's where most of my father's online buddies ended up! Thanks all! Feel free to keep recommending things, but we already seem to have a winner!

Well that’s pretty compelling!

Ever since the failure of Windows Mixed Reality, there haven’t been many non-Meta HMDs worth buying - at least with inside-out tracking.

Maybe this will finally pressure Valve to lower the price on the venerable Index? Probably not. But one can hope!


It’s just my client. Boost for Reddit doesn’t appear to show it correctly. It does the opposite, though: it just shows the text, spoiling it for me.


yeah…

They asked for easy, or newbie friendly - and didn’t particularly mention privacy concerns.

Other than that, if they don’t have port 80/443 ingress from their ISP, there are scarce simple solutions that don’t require another server that also needs management, either by them or a corporate entity.

Back when I was on a DOCSIS modem, I noticed concurrent downloads would disrupt uploads and vice versa. I think this may depend on the type of connection OP has.

I used to work at a cable company; that was a problem people with low SNR had, either from external factors (a tree branch on a cable line) or in-home ones (a bad splitter). A modem will ramp up its gain to offset this (to a point), and in doing so create a lot more interference between channels.

OR they were hitting their ingress rate limit (which is quite aggressive on residential plans, because of DDoSes). It’s surprisingly easy to hit your ingress rate limit with modern HTTP/HTTPS webservers hosting complex web apps: lots of concurrent connections open up to download all the resources when you visit any website in a modern browser, and while it’s not a TON of data, the short burst easily hits the PPS/BPS rate limits that ISPs employ.

But yeah, it all depends on the ISP.


I’d argue that the cloudflared daemon is even easier to use than a static WireGuard or OpenVPN tunnel. It’s basically set and forget. The downside is that you must use Cloudflare, which may or may not be a big deal depending on OP’s needs.

I moved from a place with symmetrical gigabit to “gigabit cable” with 30 Mbps upload, and it definitely wasn’t good enough for my small family. Photos are quite large these days - not to mention videos. Though it likely has more to do with my ISP’s bandwidth shaping than the 30 Mbps rate itself.

Also agreed that it’s not perfect, but it’s very likely the most newbie-friendly solution at the moment - especially as a single deployment versus going piecemeal.


The best “bang for the buck” in your use-case is to use Nextcloud - Nextcloud Talk is your Jitsi replacement, and the files feature can be extended with the Nextcloud Photos plugin (https://github.com/nextcloud/photos).

As for your domain question:

  1. You should use any computer you’d like that meets the Nextcloud recommendations; the key, of course, is isolating this machine on your home network so any “funny business” stays on the server. You can do this with VLANs or an entirely separate LAN connected to a different WAN (ISP).

  2. Many places! I like porkbun.com for cheap, real custom domains, but for your use case you might be able to use a Dynamic DNS provider for free. It just likely won’t be an easy-to-remember URL (or at least, not as easy as a root domain). If you have a newer ASUS or Netgear router/modem, both have Dynamic DNS built in, and you can select from a few different providers with free and paid tiers. ALSO, it might be better to use Google Domains (now Squarespace Domains) since, IIRC, many routers’ DynDNS configs support Google Domains too. Cloudflare can also be a decent registrar, and I’d recommend using them if you use any other Cloudflare services (see below). There’s a rough sketch of the underlying update call just after this list.

  3. Other things to consider: your ISP may block port 80, which means lots of issues. If this is the case, you might want to use a tunnel of some sort; Cloudflare has a great solution here. Even if they don’t block port 80, they may aggressively throttle and shape your incoming traffic, causing issues - again, the tunnel is a good solution. And of course, your upload bandwidth matters a lot: you’ll need something around 100 Mbps upload for a decent experience when accessing your stuff over the internet, and the 30 Mbps that’s typical of DOCSIS modems won’t cut it. Outside of these concerns, it’s all about isolating your server from your “home stuff” to keep things secure.
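If you end up scripting the Dynamic DNS update yourself rather than relying on the router, most providers accept the same dyndns2-style update call. Here’s a minimal Python sketch; the endpoint, hostname, and credentials are placeholders, not any specific provider’s real values:

import requests  # pip install requests

# Placeholder endpoint - check your provider's docs for the real URL.
UPDATE_URL = "https://dyndns.example.com/nic/update"

resp = requests.get(
    UPDATE_URL,
    params={"hostname": "home.example.com", "myip": "203.0.113.7"},
    auth=("ddns-username", "ddns-token"),  # placeholder credentials
    timeout=10,
)

# dyndns2-style endpoints answer with "good <ip>" or "nochg <ip>" on success.
print(resp.status_code, resp.text)

Routers with built-in Dynamic DNS are essentially making this same call for you on every WAN IP change.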


With today’s announcement, I’m super happy you did this 4 days ago. Time to make a few clones myself.


I’ve tried it before; it’s fine, but it had issues running on Wayland last I tried. Did they fix those? Looking at the issue tracker, it seems like there are still a few open Wayland issues.

kitty, by contrast, has had Wayland support for about as long as I’ve used it.


He did this thing where he unified his shell history across thousands of hosts - it was super handy given our extensive use of Ansible playbooks and database management commands. He could then use a couple of hotkeys to query this history within a newly opened document. Super handy for writing out shell command steps or wrapping things in a bash script you’re working on. Unfortunately, I don’t really have a link to HOW to do this; I just remember thinking, “Oh my god, that would save me SO much time”.

Nowadays, I just have this giant document with hundreds of our runbook commands and enable GitHub Copilot, which makes it SUPER easy to do the same thing without establishing an SSH session in the backend.


Eeeehhhh, I was kinda jealous of one of my coworkers’ Doom Emacs setup. He had automated like 80% of his own job with it. Still haven’t bothered to try to learn it myself. One of these days…


No kidding. One of the YouTubers I followed was really shilling Zed editor. He didn’t seem to mention that it was Mac only.

Well, I guess it’s back to neovim in the kitty terminal for me.

Sometimes I swear Mac-based developers think the world revolves around them.


Lucky! I wish I had symmetrical fiber with all the ports available.

I totally have a server capable of hosting a LOT of things but lack the upload to make use of it. I’m considering transferring it to a rack mount and sending it to be colocated at a datacenter within driving distance.


You missed one:

ISP - Internet Service Provider


Eh, but then he won’t learn anything. I’ve never found that response acceptable. It just perpetuates the problem. To each their own though!


On a technical level, your own user count matters less than the user and comment counts of the instances you subscribe to. Too many subscriptions can overwhelm smaller instances and saturate a network in terms of packets per second, your ISP’s routing capacity, and your router. Additionally, most ISPs block traffic going to your house on port 80, so you’d likely need to put it behind a Cloudflare tunnel for anything resembling reliability. Your ISP may be different, and it’s always worth asking what restrictions they have on self-hosted services (non-business use cases specifically); otherwise, going with your ISP’s business plan is likely a must. Outside of that, yes, you’ll need a beefy router or switch (or multiple) to handle the constant packets coming into your network.

Then there’s the security aspect: what happens if your site is breached in a way that an attacker gains remote execution? Did you make sure to isolate this network from the rest of your devices? If not, you’re in for a world of hurt.

These are all issues that are mitigated and easier to navigate on a VPS or cloud provider.

As for the non-technical issues:

There’s also the problem of moderation. What I mean is that, as a server owner, you WILL end up needing to quarantine, report, and submit illegal images to the authorities - even if you use a whitelist of only the most respectable instances. It might not happen soon, but it’s only a matter of time before your instance happens to be subscribed to a popular external community when it gets hit with a nasty attack, leaving you to deal with a stressful cleanup.

When you run this in a homelab on consumer hardware, it’s easier for certain government entities to claim that you were not performing your due diligence and may even be complicit in the content’s proliferation. Of course, proving such a thing is always the crux, but in my view I’d rather have my site running on infrastructure that looks as official as possible. The closer it resembles what an actual business might do, the better I think I’d fare under a more targeted attack - from a legal/compliance standpoint.


“Your application” - the customers, you mean. Our DB definitely does its own rate limiting, and it emits rate-limit warnings and errors as well. I didn’t say we advertised infinite IOPS; that would be silly. We are totally aware of the scaling factors there, and to date IOPS-based scaling is rarely a Sev1 because of it. (Oh no, p99 breached 8ms. Time to talk to Mr. Customer about scaling up soon.)

The problem is that the resulting cluster is so performant that you could load in 100x the amount of data and not notice until the disk fills up. And since these are NVMe drives on cloud infrastructure, they are $$$.

So usually what happens is that the customer fills up the disk arrays so fast that we can’t scale the volumes/cluster fast enough to avoid stop-writes let alone get feedback from the customer in time. And now that’s like the primary reason to get paged these days.

We generally catch gradual disk space increases from normal customer app usage. Those give us hours to respond and our alerts are well tuned. It’s the “Mr. Customer launched a new app and didn’t tell us, and now they’ve filled up the disks in 1 hour flat.” that I’m complaining about.


It is definitely an under-provisioning problem, but that under-provisioning problem is caused by the customers usually being very, very stingy about what they are willing to spend. Also, to be clear, it isn’t buckling. It is doing exactly the thing it was designed to do, which is to stop writes to the DB once there is no disk space left. And before that point, it’s constantly throwing warnings to the end user. These customers tend to ignore those warnings until they reach the stop-writes state.

In fact, we just had to give an RCA to the C-suite detailing why we had not scaled a customer when we should have, but we have a paper trail of them refusing the pricing and refusing to engage.

We get the same errors, and we usually reach out via email to each of these customers to help project where their data is going and scale appropriately. More frequently, though, they are adding data at such a fast clip that not responding for 2 hours would lead them directly into stop-writes status.

This has led us to guessing where our customers are going to end up, oftentimes being completely wrong and needing to scale multiple times.

Workload spikes are the entire reason our database technology exists. That’s the main thing we market ourselves as being able to handle (provided you give the DB enough disk and the workload isn’t sustained long enough to fill the disks).

There is definitely an automation problem. Unfortunately, this particular line of our managed services won’t be able to be automated. We work with special customers with special requirements - usually Fortune 100 companies with extensive change-control processes, custom security implementations, and sometimes even no access to their environment unless they flip a switch.

To me, it just seems to all go back to management/C-suite trying to sell a fantasy version of our product and setting us up for failure.


That is exactly what we do. The problem is that, as a managed service offering, it is on us to scale in response to these alerts.

I think people are misunderstanding my original post. When I say that a customer cluster will go into stop-writes, that does not mean it is not functional. It is an entirely intended function of the database, so that no important data is lost or overwritten.

The problem is more organizational: we have a 5-minute SLA to respond to these types of events, and they can happen on any random customer impulse.

I don’t have a problem with customers that can correctly project their load and let us know in advance. Those are my favorite customers. But they’re not most of our customers.

As for automation: as I exhaustively detailed in another response, we do have another product that does this a lot better, and it’s the one we are mass-marketing a lot more. The one where I’m feeling all the pain is actually our enterprise-level managed service offering, which goes to customers that have “special requirements” - which usually means they will never get automation as robust as the other product line’s.


Our database is actually pretty graceful. It just goes into stop-writes status; you can still read any data, and resolving the situation is as easy as scaling the cluster or removing old records. By no means is the database down or inoperable.

Essentially, our database is working as designed. If we rate-limited it further, we’d have less of a product to sell. The main features we sell in our database technology are its IOPS and resiliency.

Further, this is just for a specific customer; it has no impact on any other customers or any sort of central orchestration. Generally speaking, stop-writes status only ever impacts a single customer and their associated applications.

Also, customers can be very stingy with the clusters they are willing to buy. We are actually on poor terms with a couple of our customers who just refuse to scale and expect us to magic their cluster into accepting more data than it’s sized for.


Probably not feasible in our case. We sell our DB tech based on the sheer IOPS it’s capable of. It already alerts the user if the write-cache is full or the replication cache is backing up too.

The problem is that, at full tilt, a 9-node cluster can take on over 1 GB/s in new data. This is fine if the customer is writing over old records and doesn’t require any new space. It’s just more common that Mr. Customer added a new microservice and didn’t think through how much data it requires, causing a rapid increase in DB disk space or IOPS that the cluster wasn’t sized for.

We do have another product line in the works (we call it DBaaS) that can autoscale, because it’s based on clearly defined service levels and cluster specifications. I don’t think that product will have this problem.

It’s just that these super mega special (read: big, important, Fortune 100) companies have requirements that mean they need something more hand-crafted. Otherwise we’d have automated the toil by now.


As an SRE, what do I do about alerts caused almost entirely by poor customer communication or misuse of a product?
A bit more context, since you might wonder why customers can cause Sev1s: I work for a database technology company, and we provide a managed service offering. This offering has SLAs that essentially enforce a 5-minute response time for any "urgent" issue. A common urgent issue is that a customer suddenly wants to load in a bunch of new data without informing us, which causes the cluster to stop accepting write loads.

It's to the point where most, if not all, urgent pages result in some form of scaling of the cluster. Since this is customer-driven behavior, there is no real ability to plan for it - and since these particular customers have special requirements (and thus less ability to automate scaling operations), I'm unsure if there is any recourse here. It doesn't even feel like an SRE team anymore - we should just be called "on-demand scaling agents", since we're constantly trying to scale ahead of our customers.

All in all, I'm starting to feel like this is a management/sales-level issue that I cannot possibly address. If we're selling this managed service offering as essentially "magic" that can be scaled whenever they need it, then it seems like we're being set up for failure at the organizational level - not to mention not being smart about the costs behind scaling and factoring them into these contracts.

So, fellow SREs, have you had to have this conversation with a larger org? What works for something like this? What doesn't? Should I just seek greener pastures at this point?

P.S. - Posted to c/Programming due to lack of a c/SRE

I agree. I think 1440p + HDR is probably the way to go for now. HDR is FAR more impactful than 4K resolution, and 1440p should provide a stable ~45 FPS in Cyberpunk 2077 completely maxed out on an RTX 3080 Ti (DLSS Performance).

And the same applies to the CPU. 16 cores are for the Gentoo-using, source-compiling folks like me; 8 cores on a well-binned CPU from the last 3 generations is plenty fast for gaming. CPU bottlenecks only really show up at 144+ FPS in most games anyway.


Agreed - most mainstream distros have it all handled for the most part, and it normally “just works”.

Now, me on Gentoo testing, on the other hand… Sometimes I shoot myself in the foot and forget to rebuild my kernel modules, and wind up needing to chroot to fix things - all because I have an Nvidia card.


I’m rolling with a Halfling Rogue - it’s always the class I try to play first in D&D games, whatever the edition. I’m also surprised at the lack of Halflings in the stats!

My wife is playing a Githyanki, the only race that’s less popular. Lol!


I forget - do the Pathfinder games use v1 or v2? V1 was basically D&D 3.5 in most ways.

Either way, if you haven’t played it, you should put Pathfinder: Kingmaker on deck for after BG3.


Forever GMs unite! (But like, once we get the schedule finalized /s)

Yeah, this is probably the best D&D game in existence now. It definitely has some pretty fun mechanics and a lot of depth that other D&D vidya games just lack.



Midair 2 is inching towards release.

https://store.steampowered.com/app/1231210/Midair_2/

I’ve got alpha access and can assure you it does a great job of bringing back the good ol’ days of Tribes 2. Though there’s still a ways to go.


I dunno what this GM is doing, but I find that ChatGPT (GPT-4 particularly) does wonderfully as long as you clearly define what you are doing up front and remember that context can “fall off” in longer threads.

Anyways, here’s a paraphrasing of my typical prompt template:

I am running a Table Top RPG game in the {{SYSTEM}} system, in the {{WORLD SETTING}} universe. Particularly set before|after|during {{WORLD SETTING DETAILED}}.

The players are a motley crew that include:

{{ LIST OF PLAYERS AND SHORT DESCRIPTIONS }}

The party is currently at {{ PLACE }} - {{ PLACE DETAILS }}

At present the party is/has {{ GAME CONTEXT / LAST GAMES SUMMARY }}

I need help with:

{{ DETAILED DESCRIPTION OF TASK FOR CHAT GPT }}

It can get pretty long, but it seems to do the trick for the first prompt - responses can be more conversational until it forgets details, which takes a while on GPT-4.


Thank you for the measured take on this.

You are correct, I don’t intend to pressure or cause harm! But I certainly see the results, and it is indeed pressure. As another commenter pointed out, there are many instance admins who work a bit closer to the team on the Matrix chatrooms and that’s their preferred method of communication. Now that I know this, I’ll let things cool down and join myself. I definitely intend to contribute where I can in the codebase, and I wouldn’t dream of escalating to public pressure for smaller concerns.

However, I have a slight, and perhaps pedantic, disagreement about making changes. In this case, the request was for not making a change. If it weren’t for the fact that the feature was already ripped out, it would be as simple as not removing it (or, in this case, re-working it a bit). I understand that that isn’t the current reality and that it required work to revert - and if not for a ton of spambots, I think it would’ve been easier to adapt.

Ultimately, it will take time to discuss workarounds and help others implement them, and the deadline is the arrival of the version that drops the older captcha (or was, in this case - it’s getting merged back in as we speak; it might even be done by now). With that reality, I had a sense that this could be an existential problem for the early Threadiverse.

I definitely didn’t intend to suggest that the devs were in any way at fault here. I read the GitHub issues enough to come away with the takeaway that the feedback they were receiving seemed to be “Admins and devs alike are okay moving forward, and opinions to the contrary are minimal; let’s move forward.” My post was definitely intended to be a way to communicate using raw numbers (but not harassment). I’d like to think I’m fairly pragmatic: if it IS working for folks, then that is a contrary opinion, and it was missing.

Where I definitely failed was my overly emotional messaging. It’s certainly not an excuse, but my recent autism diagnosis does at least help explain why I have an extremely strong sense of justice and can sometimes react in ways that are less than productive.

As for the licensing, I agree! I’m talking to some good friends of mine because I want to take my instance WAY further than most others - the goal is a non-profit that answers to Tucsonans and residents of greater Pima County rather than someone not in the community. There are just a lot of features this concept would need, and it might diverge so much from the Lemmy vision that it needs to be something new - and hopefully a template for hyper-local social networks that can take on Nextdoor.


Interesting - I definitely see mine. I’m wayyyyyy at the bottom of the popular section (likely due to the 9 bots that added themselves before I banned the accounts).

I wonder if one of the settings in your firewall is blocking that particular bot?

I don’t recall when I would’ve done the same, but I do recall not being on join-lemmy until - well - now actually.


Oh! I just remembered something. Isn’t there a site that recommends a Lemmy instance? Might it make sense that multiple users found your website because it changes the recommendation to distribute new users to smaller instances (hourly, perhaps)? Does that sort of pattern hold in this case?


5, huh? That’s actually notable. So far I haven’t seen a real human user take longer than a couple of hours to validate. Human registrations on my instance seem to have a 30% attrition rate; that is, of 10 real human users, I can reasonably expect that 3 won’t complete the flow. It seems like your case might be nearing 40-50%, which isn’t unheard of, but couple this with how quickly these accounts were created and I think you are looking at bots.

The kicker is, though, if one of them IS a real user, it’s going to be almost impossible to find out.

This is indeed getting more sophisticated.

I wish I could see this time period on a Cloudflare security dashboard; I’m sure there would be a few more indicators there.


Huh, that is interesting - yeah, that pattern is very anomalous. If you have DB access, you can run this query to return all unverified local users and see whether the email activations are being completed:

SELECT p.id, p.name, l.email
FROM person AS p
LEFT JOIN local_user AS l ON p.id = l.person_id
WHERE p.local = true
  AND p.banned = false
  AND l.email_verified = 'f';


Not so sure on the LLM front; GPT-4 + Wolfram + Bing plugins seems to be a doozy of a combo. If anything, there should perhaps be a couple of interactable elements on the screen that need to be interacted with in a dynamic order that’s newly generated for each signup. Like “Select the bubble closest to the bottom of the page before clicking submit” on one signup and “Check the box that’s the furthest to the right before clicking submit” on another.

Just spitballin it there.
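For what it’s worth, the spitballed idea might look something like this rough sketch (all names hypothetical, not Lemmy’s actual signup code):

import random
import secrets

# Each signup gets one randomly chosen "do this before submitting" task.
CHALLENGES = {
    "bubble-bottom": "Select the bubble closest to the bottom of the page before clicking submit",
    "box-rightmost": "Check the box that's the furthest to the right before clicking submit",
}

def new_signup_challenge() -> dict:
    key = random.choice(list(CHALLENGES))
    # The server keeps the expected key associated with the signup token.
    return {
        "token": secrets.token_urlsafe(16),
        "expected": key,
        "instruction": CHALLENGES[key],
    }

def verify(challenge: dict, submitted_key: str) -> bool:
    # Constant-time comparison, as for any secret check.
    return secrets.compare_digest(challenge["expected"], submitted_key)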

As for the category trick in email addresses - I’m certainly not suggesting they remove support for it, buuuuutttt if we’re all about making sure 1 user = 1 email address, then perhaps we should make the duplication check a bit more robust to account for these types of emails. After all, someuser+lemmy@somedomain.com is the same as someuser@somedomain.com, but the validation doesn’t see that. Maybe it should?
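A normalization pass before the duplicate check would catch it. A minimal sketch of the idea (illustrative only, not Lemmy’s actual validation code; the Gmail dot rule is a bonus):

def normalize_email(address: str) -> str:
    """Collapse plus-tagged variants down to the base address."""
    local, _, domain = address.strip().lower().rpartition("@")
    local = local.split("+", 1)[0]  # someuser+lemmy -> someuser
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")  # Gmail ignores dots in the local part
    return f"{local}@{domain}"

assert normalize_email("someuser+lemmy@somedomain.com") == "someuser@somedomain.com"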


> The language of your post was quite hostile and painted (and continues to paint) the developers as being out of touch with instance admins. The instance admins are already “loud, clear and coordinated”, and are working in full communication with the maintainers.

Right now, the instance admins I’m working with are largely independent, with only a couple of outliers. The newer instances that have just joined the fediverse didn’t really echo back their concerns. So while your statement might be true (I dunno, I don’t see any coordination, and it’s not always clear which admin concerns are important), the rapid growth has brought even more stakeholders and admins to the fediverse, some far less technical than others. I’m going to need more proof of deeper coordination, because as it stands many admins say “the devs are tankies” and refuse to federate with the maintainers’ instance, let alone contribute code or money.

> The majority of PRs coming into the project are coming from instance admins seeking to solve their personal pain points. Both the issue and the PR you’re referring to were created by ruud…

This is a new phenomenon; the total lines of code written by the primary devs are still much larger than any other combination of PRs. I don’t envy the position of having to sort through thousands upon thousands of PRs that may or may not fit the project’s vision or code-quality standards. Rolling back to a known prior state is almost always lower effort than minting a fresh new implementation.

Also, ruud did not create the PR I’m referring to; that honor goes to TKillFree. Heck, why do you think I’m attacking the author here rather than trying to bring more weight to his GitHub issue? It’s because of ruud that I even know what’s going on - and the instance admins I know were pretty clueless about the pending change.

I’ll grant you that my tone and signalling need work, but I do think that an attempt to rally more folks did indeed influence the solutions the maintainers were willing to accept: from “new, better implementation only - remove the existing flawed one now” to “okay, we can keep the flawed method, but we need an enhanced version, and soon”.

At this point it’s hard to tell, because we don’t live in a universe where I didn’t make that post, to compare. Maybe you’re right and this would’ve all shaken out eventually.


Hmmm, I’d check the following:

  1. Do the emails follow a pattern? (randouser####@commondomain.com)
  2. Did the emails actually validate, or do you just not see bouncebacks? There is a DB field for this that admins can query (I’ll dig it up after I make this high-level post).
  3. Did the surge come from the same IP? Multiple? Did it use something that doesn’t look like a browser?
  4. Did the surge traffic hit /signup, or did it hit /api/v3/register exclusively? (A rough log-checking sketch follows this list.)
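If you have webserver logs in front of Lemmy, something like this can answer 3 and 4 - assuming nginx’s combined log format and a hypothetical log path:

import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path

# remote_addr - user [time] "METHOD /path HTTP/x" status bytes "referer" "agent"
LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'\d+ \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

hits = Counter()
with open(LOG_PATH) as log:
    for raw in log:
        m = LINE.match(raw)
        if m and m["path"].startswith(("/signup", "/api/v3/register")):
            hits[(m["ip"], m["path"], m["agent"])] += 1

# Top offenders by IP, path, and user agent.
for (ip, path, agent), count in hits.most_common(20):
    print(f"{count:5d}  {ip}  {path}  {agent}")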

With those answers, I should be able to tell if it’s the same or a similar attacker getting more sophisticated.

Some patterns I noticed in the attacks I’ve received:

  1. It’s exactly 9 attempts every 30 minutes from the user agent “python/requests”.
  2. The users that did not get an email bounceback were still not authenticated hours later (maybe the attacker lucked out with a real email that didn’t bounce back?). From what I could determine, there was no effort to verify.

Some vulnerabilities I know that can be exploited and would expect to see next:

  1. ChatGPT sounds human enough for the registration forms. I’ve got no idea why folks think this is the end-all solution when it can be faked just as easily.
  2. Duplicate-email conflicts can be bypassed by using a “+category” in your email, i.e. someuser+lemmy@somedomain.com. This would allow someone to associate potentially hundreds of spam accounts with a single email.

I’m confused - that’s almost exactly what I said, albeit in a very condensed form.

Once you take a discretionary bonus and then make it into an incentive (i.e., “this year the Christmas bonus must be earned by doing X, Y, Z”), adding stipulations tied to worker output turns it into a non-discretionary bonus.

Promissory estoppel is the basis for why non-discretionary bonuses are a category: there is a perceived promise of a bonus that people work for but are then denied, which can cause knock-on effects for the people to whom that bonus is owed. A bonus is discretionary up until the point it’s used to get people to work longer or perform better.

Sure, the general term is promissory estoppel, but that’s a much weaker regulatory framework than the pay and labor laws around non-discretionary bonuses.

If there is something else I’m not understanding here please enlighten me further. If it’s not “accurate” I invite you to help me be more accurate.


Eh, this situation seems more like the “admins”/power users of the software saying “How can you not need us?” - and for them, that’s more of a point. These are the people who submit bug reports, code features or plugins on a weekend, and generally turn your one product into a rich ecosystem of interconnected experiences. One can argue that the project doesn’t technically require their participation, but they do enhance the project in many ways.

Open-source entitlement is a thing, but I’m not sure this is the same thing. I, for one, would be happy to submit changes (and even have a couple brewing for my own use on my instance). Just don’t make the spam problem worse in the meantime by pushing out a version that’s missing a crucial (if imperfect) feature.


You won’t see me making call to action posts for undelivered features or other small-fry items. I’m a dev, I get it.

But there are always times when vulnerabilities come up and a dev might not otherwise know that they’re being exploited. It’s one thing to have a feature to fix that vulnerability and get to it as part of your own priority list. It’s another when that vulnerability is actively impacting the people using the software - that’s when getting vocal about an issue is appropriate, to help me alter my priorities, IMO.


Admins, we’re about to have a really bad SPAM problem when Lemmy removes captcha support in v0.18 - you ALL have a responsibility to communicate back to the Lemmy devs to try to stop it.
Look, we can debate the proper and private way to do captchas all day, but if we remove the existing implementation we will be plunged into a world of hurt.

I run tucson.social - a tiny instance with barely any users - and I find myself really ticked off at other admins' abdication of duty when it comes to engaging with the developers.

For all the Fediverse discussion on this, where are the GitHub issue comments? Where is our attempt to convince the devs on this? No, seriously, WHERE ARE THEY? Oh, you think that the mere existence of an "issue" to bring back captchas is the best you can do? NO, it is not the best we can do. We need to be applying some pressure to the developers here, and that requires EVERYONE to do their part. The devs can't make Lemmy an awesome place for us if we admins refuse to meaningfully engage with the project and provide feedback on crucial things like this.

So are you an admin? If so, we need more comments here: https://github.com/LemmyNet/lemmy/issues/3200

We need to make it VERY clear that captcha is required before v0.18's release. Not after, when we'll all be scrambling...

EDIT: To be clear, I'm talking to all instance admins, not just Beehaw's.

UPDATE: Our voices were heard! https://github.com/LemmyNet/lemmy/issues/3200#issuecomment-1600505757 - the important part is that the decision was to re-implement the old (if imperfect) solution in time for the upcoming release. mCaptcha and better techs are indeed the better solution, but at least we won't make ourselves more vulnerable at this critical juncture.

Google sunsets Domains business and shovels it off to Squarespace
Damn it! Now I have to move all my domains.