• 0 Posts
  • 75 Comments
Joined 1Y ago
Cake day: Jun 12, 2023


Maybe paywalled subreddits are intended to compete with Patreon and OnlyFans rather than with present-day subreddits? A lot of Patreons offer Discord access as a perk, and a paywalled subreddit could potentially fill that role instead. I don’t think it’s a good idea, and I don’t think it’ll become more than a gimmick.


Nice, I’ll check it out. I’ve been meaning to customize the desktop a bit more but it works well enough for the moment.


There are probably better alternatives, but I have a Raspberry Pi plugged into my TV and use KDE Connect to remote-control the mouse and keyboard from my phone. If I want to watch YouTube, I navigate to youtube.com and click on a video.


I think it’s quite bad if Microsoft puts people’s family photos on its servers without the user realizing it. That’s not a niche privacy-nerd sentiment; I think a lot of people would find it creepy. Having the option easily available can be really good for a lot of non-techy people, but it should be very clear what stays on your computer and what doesn’t, and how to keep something private if you want to. I’m not sure that it is if Microsoft quietly backs up Documents, Pictures, etc.


I think he was still on the board after he closed his account; him leaving the board might be much more recent.



I do love me a good video game video essay, but I think a more traditional journalistic format has a lot of strengths when it comes to covering small games. It’s probably true that YouTube has replaced a lot of traditional journalism, but I think this is overall bad for the video game ecosystem.


One thing that I think is missing from the equation is good video game journalism that covers indie games. Video game journalism has never been in great shape, but it’s practically dead now.

Tying discovery to the same platform you consume things on is really bad, because it always gives that distributor way too much power. It’s a similar story with Spotify, but journalism about underground music is at least in a slightly better place.


The problem is that when everyone uses their right to deny access to their works in order to make people pay, and there is only so much money you can reasonably spend on entertainment and so on per month, people end up abstaining from a lot of things they could otherwise have taken part in at no extra cost.

I think the things we pirate have value: music, movies, and games have value because they are cultural products and culture is important; software like Photoshop has value because it is a useful tool. Putting up barriers to accessing these things means destroying that value. A system where the main way to make money off of, e.g., music is to paywall it has the “destruction” of a lot of value as its outcome. In some ways streaming platforms like Spotify are better in this regard, but then that means giving the platform a lot of power over music discovery, for example. Spotify doesn’t really do a good job of paying its artists either, which is its supposed ethical advantage over piracy.


I think a system where we should abstain from things that are basically free to reproduce (i.e. things you can pirate) is dumb. There are many movies that I probably wouldn’t pay money for but that I’ve pirated. The companies that own the rights to the movie don’t lose any sale they would otherwise have made, but I get whatever enjoyment I get from watching the movie, so it’s a net win.

When I pay my bills at the end of the month, I also put some money towards paying for things that I’ve pirated and like, usually with a focus on smaller creators. It doesn’t really feel meaningful to pay for a Marvel movie, for example. It’s not a perfect system, but neither is artificially limiting access to digital media.


I have a copy that I got from https://github.com/yuzu-mirror/yuzu. Looking at its master branch, it has dc94882c9062ab88d3d5de35dcb8731111baaea2 from the main repo, followed by 4 commits related to translation (likely the same as OP’s), followed by a couple of commits that only change GitHub URLs from yuzu-emu to yuzu-mirror.


Even a small amount of change to an LLM, it turns out, radically alters the output it returns for huge numbers of seemingly unrelated topics.

Do you mean that small changes radically change the phrasing of answers, but that the model has largely the same “knowledge” of the world? Or do you mean that small changes also radically alter what an LLM thinks is true or not? If you think the former, then these models should still be the same with regards to what they think is true or not. If you think the latter, then LLMs’ perception of the world is basically arbitrary, and in that case we shouldn’t trust them to tell us what’s true at all.


Well, if we have a reliable oracle available for a type of question (e.g. Wolfram Alpha), why use an LLM at all instead of just asking the oracle directly?


The problem isn’t just that LLMs can’t say “I don’t know”; it’s also that they don’t know whether they know something or not. Confidence intervals can help prevent some low-hanging-fruit hallucinations, but you can’t eliminate hallucinations entirely, since they will also hallucinate about how correct they are on a given topic.


Sometimes I’ll solve a computer problem for someone in an area that I know nothing about just by googling it. After telling them that all I had to do was google the problem and follow the instructions, they’ll respond that they wouldn’t have known what to google.

Just being experienced at searching the web and having the basic vocabulary to express your problems can get you far in many situations, and a fair number of people don’t have that.


Multiple cursors are a lot better than :s for your standard search and replace, unless you have a really big file, at which point Helix gets too slow (which isn’t that common). But there’s a lot of other stuff you can do with ex commands.

I use :make pretty often; Vim ships with the ability to parse a lot of compiler/linter outputs out of the box, so if you tell it which one with :compiler, you get build errors in the quickfix list. I also use :grep a lot. You can do <space>/ to grep in Helix, but I often find that I want to add command-line options to only search in specific directories or for specific file types (we have a large codebase at work). Being able to filter results with :Cfilter and go back to old quickfix results with :colder is also really nice. Finally, you can use :cdo to apply ex commands to everything you’ve matched in the quickfix list.

As an example, if you get a build error because you’ve renamed a variable in one file but not in the places it gets referenced in other files, you can :make to get the build errors in your quickfix list, :Cfilter to narrow it down to only that specific class of error if needed, and then do :cdo s/oldName/newName/g to rename the variable in all the places that cause errors. You can then go back to the list of all errors with :colder and handle other errors another way if needed.

I’ll have to admit that I don’t do this that often, so honestly I wouldn’t lose out on that much by switching to Helix (once it gets proper plugin support and someone makes a decent replacement for the fugitive git plugin), but I would feel less powerful not knowing I have those tools up my sleeve lol.


I don’t think Helix will ever catch up to a lot of Vim’s lesser-known features, of which there are a lot. I think that’s by design as well; Helix wants to have a smaller surface area than Vim, and for a lot of people that will be the right choice. I personally use ex commands and the quickfix list fairly often, for example, so I have a hard time imagining Helix not feeling like a step down power-wise (as nice as multiple cursors are).


Download a popular movie and keep your computer on for a while 🤷‍♂️

Although, seeding stuff that isn’t popular is also important. I don’t know what you’re seeding but if no one is leeching maybe there aren’t a whole lot of other people seeding either. When someone does leech, they might be very happy that you’re there keeping that one torrent alive.


I can recommend fd to everyone frustrated with find; it has a much more intuitive interface imo, and it’s also significantly faster.


Pair coding with Vim is a skill in itself (for the Vim user). You can make things a bit easier to follow by making liberal use of visual mode, for example. I have a CoworkerMode command that turns on smooth scrolling via vim-smoothie and cursorline, and I’ve also added some stuff to the Neovim right-click menu so that I can explicitly right-click go to definition, for example. It can be worth switching editors sometimes, but it’s not always worth it if you’re in the middle of something.


You can have qBittorrent running in mixed mode, which doesn’t give you the privacy of I2P but does give you even more leechers than just using a normal IP, and it helps grow the I2P network. Everyone should get I2P and use mixed mode or I2P-only, imo.


Dark UX is more like features that are intentionally misleading; enshittification is making your product worse in order to be able to make money off it.


One problem that’s particular to the Node ecosystem is that you can’t unpublish packages from npm if another package depends on them. As the article says, that means no one can unpublish their packages, including the everyone package itself, since someone apparently depends on that.


Bluesky has the most Twitter-like user base of all the Twitter clones I’ve tried, and it’s up to you whether that’s a good or a bad thing. It isn’t all segments of Twitter, though: there isn’t really any right-wing Twitter or crypto Twitter, for example (a lot of furries, on the other hand), which is quite nice actually. It isn’t really active or important enough to get a lot of the big drama or main-character moments, and there aren’t really any celebs, journalists, or politicians posting there. So it’s a bit like Twitter without many of the lows, but also without many of the highs.


Dorsey got bullied off Bluesky by its user base, so there’s that at least.


I don’t think I would actually use this, because I don’t see how an AI could capture the performance. I’m a sub-over-dub guy anyway, but at least someone making a dub has a sporting chance of giving an interesting performance.


Peak dishwasher is a great concept and I think it highlights something important in the way we think of technology. There’s often this underlying assumption of technological progress, but if we look at a particular area (e.g. dishwashers) we can see that after a burst of initial innovation the progress has basically halted. Many things are like this and I would in fact wager that a large portion of technologies that we use haven’t actually meaningfully developed since the 80s. Computers are obviously a massive exception to this - and there are several more - but I think that we tend to overstate the inevitability of technological progress. One day we might even exhaust the well of smaller and faster computers each year and I wonder how we will continue to view technological progress after that.


Algebraic data types are a huge part of typed functional programming for me; you should read up on them!
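
If the name is unfamiliar, here’s a rough sketch of the idea. I’m illustrating it in Rust since its enums are algebraic data types too; Shape and area are just made-up example names, not from any real library:

```rust
// An algebraic data type (a "sum type"): a Shape is exactly one of
// these variants, each carrying its own data.
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

// Pattern matching forces you to handle every variant, which is a big
// part of why these types are so pleasant to work with.
fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rectangle { width, height } => width * height,
    }
}

fn main() {
    println!("{}", area(&Shape::Circle { radius: 1.0 }));
    println!("{}", area(&Shape::Rectangle { width: 2.0, height: 3.0 }));
}
```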



Personally, the language that’s taught me the most has been Haskell. It has a lot of very interesting ideas and a learning curve that plateaus later than most other languages’. Several ideas have trickled down from Haskell to other parts of the programming world, and learning about them in the context of Haskell is, in my opinion, better, because you’ll see them in a context where they fit in with the rest of the language instead of being late additions that offer an alternate way of doing things.

Coming from Java and JS, Haskell has a very different approach to a lot of things, so you’ll have to re-learn a lot before you get productive in it. This can be frustrating for some, but on the other hand you’ll learn more if you get over that hump.

Haskell doesn’t see very much industry use and arguably isn’t very well suited to industrial applications (I haven’t used it professionally, so I don’t know personally), so it might not directly help you land a new job, but in my opinion it’s a very good way to develop as a programmer.


The article mentions AI. 16 GB feels like far too little to run an LLM of respectable size, so I wonder what exactly this means. It feels like no one is going to be happy with a 16-gig LLM (high RAM usage and bad AI features).


I might be suffering from Stockholm syndrome here, but my preferred ways of working with git are the CLI and the fugitive Vim plugin, which is a fairly thin wrapper around the CLI. It takes a middle-ground approach between hiding the magic and forcing you to learn the magic, which I suppose can be confusing for beginners when you’re working collaboratively and something happens that forces you to go beyond pull/add/commit/push.


I think you know what I mean when I contrast Rust with GC’d languages; we can call it opt-in garbage collection if we’re being pedantic.


If you just Rc everything (which I’d count as “abusing Rc”), Rust is significantly worse than a language with a good GC. The good thing about Rust is that it forces you to acknowledge and consider the lifetimes of objects. By default things are allocated on the stack, but if you make something global or dynamically managed (e.g. through Rc), you have to do so explicitly. The Rust compiler has more compile-time information about when things can be freed, which means you need less runtime overhead to check things, and if you want to minimize the number of potentially long-lived objects, you can more easily see how long objects might live by reading the code, as well as get help from the compiler to determine whether a lifetime-based refactoring is sound.
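
To make that contrast concrete, a minimal sketch (the names are just illustrative):

```rust
use std::rc::Rc;

// Plain value: lives on the stack and is freed deterministically at the
// end of the scope. The compiler knows its lifetime exactly, so there is
// no runtime bookkeeping at all.
fn stack_allocated() {
    let point = (1.0_f64, 2.0_f64);
    println!("{point:?}");
} // `point` is gone here

// Opting in to shared, dynamically managed ownership is explicit and
// visible in the types. The reference count is the runtime cost you pay
// for not knowing the lifetime statically.
fn shared_allocated() -> Rc<Vec<i32>> {
    let data = Rc::new(vec![1, 2, 3]);
    let another_owner = Rc::clone(&data); // count: 2
    drop(another_owner);                  // count: 1
    data // freed whenever the last Rc is dropped, which may be much later
}

fn main() {
    stack_allocated();
    println!("{:?}", shared_allocated());
}
```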


Haskell. I think more people being familiar with Haskell concepts would be good for programming culture, and it would increase the odds of me being able to write Haskell professionally, which is something I enjoy a lot when writing hobby code at least. Having access to more tooling and a bigger ecosystem would be nice as well.

I’m not 100% sure about my answer, though. For one, I might grow to resent Haskell if I had to use it at work, and there’s also a risk that it would be harder to do cool, innovative stuff with the language once more big companies depend on it.


You are absolutely correct that Rust’s safety features don’t extend to memory leaks, but it’s still better than most garbage-collected languages unless you abuse Rc or something, and it does give you quite fine-grained control over lifetimes, copying, and heap allocations, which in practice means that Rust is fairly good about memory leaks compared to most languages.
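
For what it’s worth, the classic way to leak in safe Rust is an Rc reference cycle; a minimal sketch (Node is a hypothetical type, not from any real crate):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    // Close the cycle: a -> b -> a.
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    println!("{} {}", Rc::strong_count(&a), Rc::strong_count(&b)); // prints: 2 2
    // When `a` and `b` go out of scope, each count only drops to 1, never 0,
    // so neither Node is ever freed: a leak with no unsafe code anywhere.
    // Breaking the cycle with std::rc::Weak (via Rc::downgrade) is the usual fix.
}
```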


I don’t think Boolean logic is a necessary prerequisite for coding; if you don’t know it yet, it makes more sense to learn about it when you come across a programming problem where you want to use it, imo.


Many “AI-generated” images are actually very close to individual images from their training data, so in some cases at least it’s debatable how much difference there is between looking at a generated image and just looking at an image from the training data.


It’s not that much of a strain since it only handles DNS traffic.

When you go to e.g. programming.dev, your computer needs to know the actual IP and not just the domain name, so it asks a DNS server and receives an answer like 172.67.137.159, for example. The Pi-hole just forwards the query to a real DNS server if it’s a normal website, or gives an “unknown IP” kind of answer if it’s a blacklisted domain. Actually transmitting the website, which is the bulk of the traffic, is handled without the Pi-hole’s involvement.
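
If you want to watch just that lookup step in isolation, here’s a tiny sketch in Rust using only the standard library; it asks whatever resolver your system is configured with (the Pi-hole, in this setup) and prints the addresses it gets back:

```rust
use std::net::ToSocketAddrs;

fn main() -> std::io::Result<()> {
    // Only the DNS part happens here: "what IP does programming.dev map to?"
    // The port is required by the API, but no connection is actually made.
    for addr in "programming.dev:443".to_socket_addrs()? {
        println!("{addr}");
    }
    // For a blocked domain the Pi-hole typically answers with 0.0.0.0 or
    // nothing at all, so the heavy traffic (the website itself) never starts.
    Ok(())
}
```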


I can see how PowerShell might be better for writing actual programs in, but the wordiness really gets in the way when you’re just trying to write something on the command line, so it feels poorly optimized for CLI usage. Bash, on the other hand, is very poorly optimized for writing programs.