Doing the Lord’s work in the Devil’s basement

  • 0 Posts
  • 14 Comments
Joined 6M ago
Cake day: May 08, 2024


I’ve only had issues with FitGirl repacks. I think there’s an optimisation they use for low-RAM machines that doesn’t play well with Proton.


brushing your teeth will not fix your cavities so it is delusional to do it

Alright, buddy


If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It should not affect task performance outside of this factor.


Yeah, I did some looking up in the meantime and indeed you’re gonna have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s the size of its attention window.

There are some models fine-tuned with an 8K-token context window, some even with 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it, using one of the quantized versions (Q4 or Q5 should be fine). Summarizing should still be reasonably good.

If 16K isn’t enough for you, then that’s probably not something you can do locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as I know this approach should still be compatible with Open WebUI.
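As a rough sanity check on whether a quantized model fits in 8 GB of VRAM, here’s a back-of-the-envelope sketch. The 7B parameter count and the per-weight bit sizes are illustrative assumptions (Q4/Q5 effective sizes vary by quantization scheme), and a real runtime adds KV-cache and framework overhead on top:

```python
# Rough VRAM estimate for a quantized local model (illustrative numbers).
# Assumes a 7B-parameter model; Q4 ~ 4.5 bits/weight, Q5 ~ 5.5 bits/weight
# are typical effective sizes, not exact figures.

def model_vram_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB (ignores KV cache and overhead)."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for name, bits in [("Q4", 4.5), ("Q5", 5.5), ("FP16", 16.0)]:
    print(f"7B {name}: ~{model_vram_gb(7, bits):.1f} GB")
# 7B Q4:   ~3.9 GB
# 7B Q5:   ~4.8 GB
# 7B FP16: ~14.0 GB
```

So a Q4 or Q5 quant of a 7B model leaves a few gigabytes of headroom on an 8 GB card for the context itself, while the unquantized FP16 weights alone wouldn’t fit.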


There are not that many use cases where fine tuning a local model will yield significantly better task performance.

My advice would be to choose a model with a large context window and just include the whole text you want summarized in the prompt (which is basically what a RAG pipeline would do anyway).
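A minimal sketch of that “whole text in the prompt” approach, with a naive character-based guard so the input doesn’t overflow the context window. The ~4 characters-per-token ratio is a rule-of-thumb assumption that varies by tokenizer, and the constants are placeholders:

```python
# Stuff the full document into a single summarization prompt,
# truncating if it would exceed the model's context window.

CONTEXT_TOKENS = 16_000   # context window of the chosen model (assumed)
CHARS_PER_TOKEN = 4       # rough heuristic, varies by tokenizer
RESERVED_TOKENS = 1_000   # room for instructions + the generated summary

def build_summary_prompt(text: str) -> str:
    budget = (CONTEXT_TOKENS - RESERVED_TOKENS) * CHARS_PER_TOKEN
    if len(text) > budget:
        # Naive truncation; a real pipeline would chunk and merge summaries.
        text = text[:budget]
    return f"Summarize the following text in a few paragraphs:\n\n{text}"

prompt = build_summary_prompt("Some long document... " * 5000)
```

The resulting `prompt` string can then be sent to whatever local or cloud inference endpoint you’re using.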


I love that on Lemmy, people will trip over themselves to misinterpret simple, unambiguous comments such as yours.


If you like to write, I find that storyboarding with Stable Diffusion is definitely an improvement. The quality of the images is what it is, but they can help you map out scenes and locations, and spot visual details and cues to include in your writing.



To clarify: we’re talking about differences in the codebase here. They are still exactly the same game, with some very minor disparities in certain mechanics.

The technical differences tend to disappear over time because they rely more and more on the datapack format, which is shared between the two codebases.


But we are talking about freelancers, not about SEO or content marketing; more like content filling.

Most SEO is done by freelancers (at least in my industry). When I talk about content marketing, I mean anybody who writes blog posts and LinkedIn posts for companies. It was already shit long before AI arrived.


Yeah, I’m not bashing anybody; my wife did that for a couple of years, so I know how it is. There was a kind of golden period when it would even pay enough to let you do some quality stuff, but when the VC money stopped raining the market slumped almost immediately.


I think the bitter lesson here is that there’s a bunch of jobs where quality has zero importance.

If you take, for example, content marketing, SEO, and ad copywriting… it’s a lot of bullshit, and it’s been filling the web with GPT-grade slop for 20 years now. If you can do the same for cheap, I don’t see a reason not to.


Even without seeders, you can sometimes get lucky and resurrect old torrents that have been kept in cache by providers such as Real-Debrid.


I’ve been getting back into anarchy Minecraft as an old buddy of mine is kinda resurrecting a base I used to be active at.

The scene is mostly dead; on our main server it’s 2 to 4 players on average, which is crazy to me. It used to be 50 to 100 most evenings.

Now I’ve got a 2-million-block trip to make, and even auto-walking on the nether roof, that’s gonna take some time. But it’s also an occasion to revisit some historic milestones along the way! I was able to get my hands on a signed book a friend had given me some time before passing away, so it’s also kind of an emotional journey.