Someone interested in many things.

  • 5 Posts
  • 27 Comments
Joined 1Y ago
Cake day: Jun 15, 2023


Patching a newer version of the Youtube app resolved the issues with playback I was having.


I forgot: are Lemmy’s active and hot sorts chronological? They’re pretty decent, but I do find stale content does get stuck on one that isn’t there on the other.


Tbh, I haven’t really had this issue in a few weeks. I’m tempted to think it’s usage-related, and could possibly indicate that my memory allocation for the DB is still too high.


Like I said, I’m aware of extant measures to try and steer models, but people often assume a level of craftsmanship in censoring models that simply does not exist. Jailbreakchat.com is an endless stream of examples of this very fact; it’s very hard, especially with the limited context lengths of current models, to effectively give them any hard directives.

And back to foundational models, which are essentially free of censorship, they will still exhibit a similar level of political bias unless prompted otherwise. All this to say that, discounting OpenAI’s attempts to control their models, the model itself will inherently learn from and mirror the real-world biases of the text it was trained on. Those biases happen to fall along lines that often ignore subtlety in debates regarding illegality and morality.


It’s hard to say what LLMs are “programmed” to do, as they’re largely untamed beasts of text prediction. In fact, I would suspect its built-in biases are less the result of pre-prompting or post-foundational-model training and really just what a lot of people tend to think online. In a way, it’s more like people in general often equate illegality with immorality.

You can see similar biases in many of the open-source LLMs that are floating around. Even though they’re basically built outside of large corporate cultures and large-scale monetary incentive, they still retain a lot of political bias that tends to favor governmental measures heavily.


ChatGPT: Your argument is invalid because it doesn’t change the legal reality of things.

Me: The legal reality needs to change.


You can if you want. Reply here with the link if you do (or mention me if that’s a thing on Lemmy).


Yeah, mine have technically happened after reboots, although things typically take a few days at least for the problem to creep up. This past time, I basically had a whole week in before things went to crap.


I did that a while ago, and unfortunately, it didn’t really help. I don’t think it’s an issue of RAM, but rather a daemon or something periodically going nuclear with resource utilization. A configuration issue, perhaps?
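If it is some daemon periodically going nuclear, one way to catch it in the act is to log a snapshot of the top memory consumers on a schedule and grep the log after the next spike. A minimal Python sketch, assuming a Linux host with GNU `ps` (the filename and row count are arbitrary choices, not anything Lemmy-specific):

```python
import subprocess


def snapshot(n: int = 5) -> str:
    """Return the ps header plus the top-n processes by memory (GNU ps, Linux)."""
    out = subprocess.run(
        ["ps", "aux", "--sort=-%mem"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = out.splitlines()
    return "\n".join(lines[: n + 1])  # header row + n process rows
```

You could call this in a loop every 30 seconds (or from cron) and append the result with a timestamp to a file like `/tmp/spike_watch.log`; whichever process tops the list during the spike is your suspect.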


The problem is that an update inherently involves a restart of everything, which tends to solve the problem anyway. Whether the update actually fixed things or the restart just temporarily papered over them is only something you can find out after a few days.


I’ll save this to look at later, but I did use PGTune to set my total RAM allocation for PostgreSQL to be 1.5GB instead of 2. I thought this solved the problem initially, but the problem is back and my config is still at 1.5GB (set in MB to something like 1536 MB, to avoid confusion).
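For anyone tuning the same thing: PGTune's output ends up as a handful of lines in `postgresql.conf`. A sketch of the kind of values it emits for a budget around 1.5 GB (the exact numbers here are illustrative assumptions — generate your own with PGTune for your hardware):

```ini
# postgresql.conf — illustrative values only; use PGTune's output for your setup
max_connections = 100
shared_buffers = 384MB          # roughly 25% of the 1.5GB budget
effective_cache_size = 1152MB   # roughly 75% of the budget
maintenance_work_mem = 96MB
work_mem = 4MB
```

Note that `shared_buffers` is the main hard allocation; `effective_cache_size` is only a planner hint and doesn't reserve memory by itself.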


This issue occurred a few weeks ago as well, even when we had very little traffic. Our traffic is still peanuts compared with other instances.


Oh, and for completeness:

  • We’ve deleted the vast majority of the spam bots that spammed our instance, are currently on closed registration with applications, and have had no anomalous activity since.

  • Our server is essentially always at 50% memory (1GB/2GB), 10% CPU (2 vCPUs), and 30% disk (15-20GB/60GB) until a spike. Disk utilization does not change during a spike.

  • Our instance is relatively quiet, and we probably have no more than ten truly active users at this point. We have a potential uptick in membership, but this is still relatively slow and negligible.

  • This issue has happened before, but I assumed it was fixed when I changed the PostgreSQL configuration to utilize less RAM. This is still the longest lead-up time before the spikes started.

  • When the spike resolves itself, the instance works as expected. The service interruptions seem to stem from a drastic increase in resource utilization, which could be caused by some software component that I’m not aware of. I used the Ansible install for Lemmy, and have only modified certain configuration files as required. For the most part, I’ve only added a higher max_client_body_size in the nginx configs for larger images, and have added settings for an SMTP relay to the main config.hjson file. The spikes occurred before these changes, which leads me to believe that they are caused by something I have not yet explored.

  • These issues occurred on both 0.17.4 and 0.18.0, which seems to indicate it’s not a new issue stemming from a recent source code change.


VPS Hosting Lemmy Started Spiking in Usage Today
I should add that this isn't the first time this has happened, but it is the first time since I reduced the allocation of RAM for PostgreSQL in the configuration file. I swore that that was the problem, but I guess not. It's been almost a full week without any usage spikes or service interruptions of this kind, but all of a sudden, my RAM and CPU are maxing out again at regular intervals. When this occurs, the instance is unreachable until the issue resolves itself, which seemingly takes 5-10 minutes.

![](https://normalcity.life/pictrs/image/f93d40d8-e684-40f5-9364-7f0ea4471c8a.png)

*The usage spikes only started today out of a seven-day graph; they are far above my idle usage.*

I thought the issue was something to do with Lemmy periodically fetching some sort of remote data and slamming the database, which is why I reduced the RAM allocation for PostgreSQL to 1.5 GB instead of the full 2 GB. As you can see in the above graph, my idle resource utilization is really low. Since it's probably cut off from the image, I'll add that my disk utilization is currently 25-30%.

Everything seemed to be in order for basically an entire week, but this problem showed up again. Does anyone know what is causing this? Clearly, something is happening that is loading the server more than usual.

The Ansible install does make things a lot simpler, but it’s still pretty involved if you’re new to self-hosting in general. For example, you might need to set up an SMTP relay if you can’t port forward a workable port, and you’ll probably also want to change your Nginx configs to allow uploading images larger than a single megabyte.
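For reference, the upload ceiling mentioned above is nginx's `client_max_body_size` directive, which defaults to 1m. A hedged sketch of the change (the 25m figure and the bare `server` block are just illustrations — in the Ansible setup the directive goes into the generated nginx site config for your instance):

```nginx
server {
    # ... existing Lemmy proxy configuration ...
    client_max_body_size 25m;  # default is 1m; raise to allow larger image uploads
}
```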


Lemmy is pretty fun to host. Doubly so if you host a private instance with low latency; you’d basically be defederation-proof.


Perhaps I’m mistaken, but is value inherently necessary to perpetuate PoW or verification steps in a Blockchain? In other words, do you need to create value for it to work? I didn’t think that was a necessary first step, but I suppose it could be if it’s all driven by miners or some other random PoW mechanism with a monetary incentive.
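To be concrete about what PoW itself requires: mechanically it's just a brute-force search for a nonce whose hash meets a difficulty target, and nothing in that loop depends on the token having monetary value — value only matters as an incentive for miners to keep running it. A toy sketch (hex-zero difficulty, nothing like a real chain's target arithmetic):

```python
import hashlib


def mine(data: bytes, difficulty: int = 2) -> int:
    """Brute-force a nonce so sha256(data || nonce) starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1
```

Each added hex zero multiplies the expected search time by 16, which is the whole trick: the work is easy to verify and expensive to produce, regardless of whether anyone is paying for it.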


Not too pedantic at all; those are indeed two distinct ways of creating similar applications. In my opinion, federated alternatives are more appealing than those based on blockchain technologies. Federated networks are proving to provide a more palatable experience through a hybrid of centralization and decentralization.


Odysee is built on LBRY, which I believe is the closest thing. I think there’s something else called PeerTube, but I’m not sure what it is exactly (haven’t looked into it).


I was thinking more along the lines of “copyright isn’t ok,” rather than debating whether the actions that happen outside of it are or aren’t.


I wish more people would understand the value of letting people make their own life choices, even if you disagree with them.


Hosting Lemmy: Weird Usage Spikes Started (Not Correlated with Bandwidth)
Hello,

I started seeing weird spikes in memory usage that result in the instance being unavailable via the web interface or via Jerboa. They seem to happen on a regular interval, although they seemingly only started today again. They don't seem to be correlated with bandwidth, so I'm not really sure what could be causing this. Perhaps someone here has more insight into this.

I believe something like this was happening last week, which led me to bump the server specs in the hopes that it would resolve the issue. Now our idle/typical usage isn't anything to be concerned about, but these weird spikes are starting to cause timeouts and outages.

![](https://normalcity.life/pictrs/image/3a5a7919-de92-4782-b0c0-9e47f9a9bbbb.png)

This community is probably more true to the original intent of r/piracy than r/piracy. Same head mod, more freedom to do whatever, etc.


Weird Issue with User Count
Hello,

I noticed that my user count started going up much quicker than it should have. We probably have no more than 20-30 people on my instance at most, but the user count is now into the thousands.

![](https://normalcity.life/pictrs/image/48b6ebc9-8ade-47dc-ac01-f26c7527396b.png)

*Screenshot taken last night*

![](https://normalcity.life/pictrs/image/84ca834d-f8a1-44d6-a698-035a68b47967.png)

*Screenshot taken a few minutes ago*

~~I'm not really sure what could be causing this, but it seems like some sort of database issue. I recently upgraded the server plan, since it's a VPS. Perhaps sending the shutdown signal and not manually stopping the Docker container caused PostgreSQL to shit itself. (Yeah, this was probably a bad idea.) While I'm a bit rusty, I did have a semester class on SQL that might come in handy. Any ideas on what I should do?~~

~~I suppose it could also be account spammers, so I did try and enable captchas. Unfortunately, email verification is still not an option for me to enable at this point. Assuming this was the issue, is there a way to remove the spam accounts?~~

The captcha did seem to stop the endless tick of the user count, but I'm not sure how we can get rid of the spam accounts.

Considering copyright shouldn’t exist, then obviously not. These people did nothing but copy data that happened to be censored by “property laws” that are based on a flawed definition of what people can actually own.



I was saying that: A) ChatGPT is already fluent in good-enough pirate speak, and B) It would be possible to have ChatGPT convert modern English speech into said good-enough pirate speak using a userscript. Even if it didn’t affect their data collection or AI models trained on the text, it would still be distracting and annoying for users, which might push some people away from Reddit.


Well, me matey, ChatGPT be speakin’ like a seadog already. All ye must do is ask of it to be speakin’ this way, and it will. Arrgh. Ye could write a fancy userscript to interface wit ChatGPT and be speakin’ like a seadog without an ounce of effort!



Bootstrap Themes with Ansible Install
I just used the tool [linked here in the documentation](https://join-lemmy.org/docs/en/administration/theming.html) to create a Bootstrap theme, but I can't find the folder they're referencing. I've used Ansible to install Lemmy, which is working fine, but I'm not really sure how to handle themes as a result. Do I place them somewhere in my Ansible setup, or is that directory somewhere on the server? I found the Docker /volumes/ folder, but the directory names were random strings and not labeled like they supposedly are at the link above.

Question on Lemmy SMTP
I used the Ansible playbook instructions and got my instance up and running, which is where I'm sending this from now. Still, I was not able to get the SMTP side of things working. Does this whole setup self-host SMTP on the Lemmy instance, or is it something I'll have to sort out externally? I've heard some people have had issues with Digital Ocean on certain ports, which is the VPS provider I'm hosting on, but even other ports I've tried have not worked.
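For what it's worth, Lemmy expects an external SMTP server rather than hosting one itself; the setup just wires your relay's details into config.hjson. A sketch of the relevant block (field names are from memory of the Lemmy docs and may differ between versions — treat every value here as a placeholder assumption):

```hjson
{
  email: {
    smtp_server: "smtp.example.com:587"    # an external relay, not self-hosted
    smtp_login: "postmaster@example.com"
    smtp_password: "changeme"
    smtp_from_address: "noreply@example.com"
    tls_type: "starttls"                   # e.g. "none", "tls", or "starttls"
  }
}
```

If Digital Ocean is blocking port 25 (they commonly do for new accounts), a relay on 587 with STARTTLS is the usual workaround.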

Infinity is still pretty good, and my understanding is that it could easily be updated to support the free individual API keys Reddit is supposedly going to still support.