The Fediverse is worse than Reddit. Mod abuse, admin abuse, disinformation, and people simping for literal terrorists.
The moderation problem on Reddit is already extremely prevalent here on Lemmy, which more or less shares the same system. If foreign actors like Russia decide to influence the Fediverse (which is already highly radicalized), they’d have a very easy time. The modlog actually helps abusive mods more than the users who become the targets of mod abuse, since they can just scrub the evidence and provide bullshit mod action reasons that most people who read it will take at face value without actually questioning them.
I dropped out with Origins, which was kind of painful because I really hate not finishing a series (Mass Effect in this case). But that of course was just the last straw after all the bullshit EA pulled and all the studios & franchises they destroyed over the decades. I don’t even think they could ever redeem themselves at this point, not that they even make any effort to do so anyway. EA is just a rotten company and the epitome of all that is wrong within the gaming industry.
I could see it being useful if you need the LLM to explain something, maybe? If you’re reading something in a language that’s not your native one, or just something that uses quite complex language & writing, then it may be useful to have a paragraph or sentence explained to you. Or maybe the book references something you’re not familiar with and you can get a quick explanation from the LLM.
And multiple cases of sabotage, assassinations & attempted assassinations on NATO soil, along with all the disinformation campaigns. Russia is making a fool out of the West, which cannot find a proper response to this form of hybrid warfare. Everyone’s too scared of their nukes, allowing Russia to move all the red lines further and further. They should turn the whole Baltic Sea into a NATO lake and blockade the few Russian ports there.
It’s not telling me a secret, it’s telling me that I’m doing something wrong and that I need to use CRT shaders, which are both wrong presumptions made to get me to click on the video to find out why. Whether to use a CRT filter or other things like scanlines is completely subjective and up to a user’s preferences. There’s nothing wrong with sharp pixels over blurry pixels.
I fear they’re so mentally castrated that the threshold for this is almost nonexistent. Being subservient slaves is ingrained into their culture, and most would rather die in the meat grinder or through hunger than actually stand up to their regime. They’re all free to prove me and the world wrong though.
Speaking of naive… They already control the narrative. The majority of Russians do not even speak English and have nothing but state TV to inform themselves with. Cut them off entirely. No internet, no medications, no trade, nothing. Let them see what a world without the West truly looks like.
Want to organize a protest?
Protesting is already illegal, lmao. And for that matter, also useless.
The 360 & One pads look nearly the same to me in regard to their form. I still have a 360 one and I hate it because the d-pad is the absolute worst garbage. It’s inaccurate as hell and gets tiring real quick. I really struggled with that in CrossCode. I also hate how only the Chinese pads seem to have Hall sensors.
There are plenty of free ways to use LLMs, including having the models run locally on your computer instead of an online service; these vary greatly in quality and privacy. There are some limited free online ones too, but imo they’re all shit and extremely stupid, in the literal sense - you get even better results with a small model on your computer. They can be fun, especially if they work well, but the magic kinda goes away once you understand more about how they actually work, which also makes all their little “mistakes” very obvious, and that kind of kills the immersion and with it the fun.
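For anyone wondering what “running locally” actually looks like, here’s a minimal sketch. It assumes the llama-cpp-python package and a small quantized GGUF model you’ve downloaded yourself; the model path and parameters are placeholders, and frontends like koboldcpp wrap the same idea in a UI.

```python
# Minimal local LLM example.
# Assumes: pip install llama-cpp-python, and a GGUF model file on disk
# (the path below is a placeholder, not a specific recommended model).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model.Q4_K_M.gguf",  # any small quantized model
    n_ctx=4096,        # context window: how much chat history the model "sees"
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Chat-style completion: the model simply predicts the next tokens
# that fit the messages it was given.
reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain this paragraph in simple terms: ..."},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```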
A good chat can indeed feel pretty good if you’re lonely, but you kinda have to understand that they are not real, and that goes not just for potentially bad chats, but even for the good ones. An LLM is not a replacement for real people; nothing an LLM outputs is real. And yes, if you have issues with addiction, then you may want to keep your distance. I remember how people got addicted to regular chat rooms back in the early days of the world wide web; now imagine those people with a machine that can roleplay any scenario you want to play with it. If you don’t know your limits then that can be very bad indeed, even outside of taking them too seriously.
I can generally only advise to just not take them seriously. They’re tools for entertainment, toys. Nothing more, nothing less.
The bots pose as whatever their creator wants them to pose as. People can create character cards for various platforms such as this one, and the LLM will try to behave according to the contextualized description in the provided character card. Some people create “therapists”, and so the LLM will write like it’s a therapist. And unless the character card specifically says that they’re a chatbot / LLM / computer / “AI” / whatever, they won’t say otherwise, because they don’t have any sort of self awareness of what they actually are; they just do text prediction based on the input they’ve been fed. It’s not really something character.ai or any other LLM service or creator can change, because this is fundamentally how LLMs work.
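To illustrate that: a rough sketch of how a “character card” typically ends up as model input. The card text and the build_prompt() helper are made up for illustration, not character.ai’s actual internals; the point is that the card is just text stitched into the context, and the model predicts a continuation that fits it.

```python
# Illustrative only: how a character card becomes part of the prompt context.
# The card, helper, and names are hypothetical, not any service's real code.

CHARACTER_CARD = (
    "Name: Dr. Hart\n"
    "Role: a warm, attentive therapist.\n"
    "Style: asks gentle follow-up questions, never breaks character."
)

def build_prompt(card: str, history: list[tuple[str, str]]) -> str:
    """Stitch the character card and the chat log into one block of text."""
    lines = [f"[Character description]\n{card}\n\n[Chat]"]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    # The model is asked to continue from here, i.e. to predict what
    # "Dr. Hart" would plausibly say next.
    lines.append("Dr. Hart:")
    return "\n".join(lines)

history = [("User", "I've had a rough week.")]
prompt = build_prompt(CHARACTER_CARD, history)
# This prompt is then fed to the LLM. Nothing in it says "you are a chatbot",
# so unless the card mentions it, the model has no reason to bring it up.
print(prompt)
```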
You’ve called? /J
The issue with LLMs is that they say what’s expected of them based on the context they’ve been fed. If you’re opening up your vulnerabilities to an LLM, it can act in all kinds of ways, but once they’re sort of set on a course, they don’t really sway away from it unless you force them to. If you don’t know how they work and how to do that, or maybe you’re self-loathing to the point where you don’t want to, it will kick you further while you’re already down. As a user you kinda gaslight them into whatever behavior you want from them, and then they just follow along with that. I can definitely see how that can be dangerous for those who are already in a dark place, even more so if they maybe don’t understand the concept behind them and take the output more seriously than they should.
Unfortunately, various guards & safety measures tend to just censor LLMs to the point of becoming unusable, which drives people away from them towards uncensored ones - and with those, anything goes, which, again, requires enough knowledge and foresight to use them.
I can only advise to not take LLMs seriously. Treat them as a toy, as entertainment. They can be fun, stupid, vile, which can also be fun depending on your mindset… Just never let the output get to you on a personal level. Don’t use them for mental health or whatever either. No matter how well you may write them, no matter how well some chats may go, they’re not a replacement for real therapy, just like they’re no replacement for a real friendship, a real romantic relationship, or a real family.
THAT BEING SAID… I’m a little suspicious of the shown chat log. The suicide question seems to come very much out of the blue, and those bots tend to follow their contextualized settings very well. I doubt they’d bring that up without previous context from the chat, or maybe this was even a manual edit, which I’d assume is something character.ai supports - someone correct me if I’m wrong though. I wouldn’t be surprised if he added that line himself, already being suicidal, to steer the chat in that direction and force certain reactions out of the bot. I say this because those bots are usually not very creative in steering away from their existing context, like their character description and the previous chat log, making edits like this sometimes necessary to have them snap out of it.
The entire article also completely glosses over a very important part here: WHERE DID THE KID GET THE GUN FROM?! It’s like two pages long and only mentions that he shot himself at the beginning, with no further mention of it afterwards. Why did he have a gun? How did he get it? Was it his mother’s gun? Then why was it not locked away? This article seems to seek the fault with the LLM, rather than the parents who somehow failed to handle their son’s mental health issues and somehow failed to secure a gun in their household, or the country that failed to regulate its firearms properly.
I do agree that “AI” advertising in particular is very predatory though. I’ve seen some of those ads, specifically luring you in with their “AI girlfriends”, which is definitely preying on lonely people, who are likely to have mental health issues already.
ggml_cuda_compute_forward: ADD failed
CUDA error: shared object initialization failed
current device: 0, in function ggml_cuda_compute_forward at ggml/src/ggml-cuda.cu:2365
err
ggml/src/ggml-cuda.cu:107: CUDA error
I didn’t do anything past using yay to install the AUR koboldcpp-hipblas package, and customtkinter, since the UI wouldn’t work otherwise. The koboldcpp-rocm page very specifically does not mention any other steps in the Arch section and the AUR page only mentions the UI issue.
That just means Vance will take over. The fascist enablers in this party are not going to stop their takeover attempt just because Trump nopes out (which still remains to be seen).