Developer, 11 year reddit refugee

Zetaphor

  • 4 Posts
  • 71 Comments
Joined 1Y ago
Cake day: Jul 19, 2023


Yes I do! It’s a pretty great overview that isn’t extremely math heavy

The book is “Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD”

https://www.amazon.com/gp/product/1492045527


I have a book on learning PyTorch; this XKCD is in the first chapter, and implementing it is the first coding exercise. It’s amazing how things progress.


I’m really enjoying Otterwiki. Everything is saved as markdown, attachments are next to the markdown files in a folder, and version control is integrated with a git repo. Everything lives in a directory and the application runs from a docker container.

It’s the perfect amount of simplicity and is really just a UI on top of fully portable standard tech.


I completely gave up torrents for Usenet, also using the -arrs to get content for Plex. I completely saturate my bandwidth with Usenet downloads, I’ve never once received an ISP letter, and I’ve done it entirely without a VPN.


As someone who completely gave up torrenting for usenet, what made you decide against usenet?


To elaborate further on the other comment, it’s a person running a copy of the Lemmy software on their own server. I, for example, am running mine (and seeing this thread) from https://zemmy.cc. Thanks to federation, all of our different servers are able to talk to each other, so we can have a shared experience rather than everyone being on one centralized instance managed by one set of administrators (like reddit is).

This provides resilience to the network. If reddit goes down, reddit is down. If lemmy.world goes down, you can still access the content of every community that isn’t on lemmy.world, and if other servers were subscribed to the content on a community from lemmy.world you could still see the content from before the server went offline (and it will resync once it’s back up).

If we put all of our eggs into a single basket, we have a single point of failure. If all of the major communities go to lemmy.world then lemmy.world is that single point of failure. Doing that is effectively just recreating the same issues we had with reddit but with extra steps. By spreading larger communities across servers we ensure that the outage (or permanent closure) of a single instance doesn’t take down half the active communities with it.



Putting all of the large communities on a single instance is just reddit with more steps. It’s good that one of the larger Lemmy communities is not also on the largest Lemmy instance. Lemmy.world suffers a lot of outages (in part because it’s so centralized), meanwhile this community remains available.



I think it’s a bit silly to have megathreads just because some users can’t scroll past posts that don’t interest them.

The problem is there are so goddamn many, to the extent that I’m working on a userscript that lets me entirely hide posts that contain keywords. Checking my frontpage using Subscribed/Active, 5 of the first 20 posts are about this “news”, and that’s a full day after it happened; yesterday was far worse.

Edit: The userscript is ready!


Static pages with hyperlinks have evolved into a certain horror we all know.

Why couldn’t this just be a webring of sites following a specific design philosophy?

This is a neat idea, but the requirement of installing a whole new piece of software just to decide if it’s worth exploring is already a non-starter.


Sure you could make the argument that HTML has too much going on, but you don’t have to use all of that. It is still at its core just as capable of rendering plaintext and hyperlinks as it was the day it was originally conceived.

Why couldn’t this just be a webring of sites that follow a specific design philosophy? I don’t understand the requirement of an entirely new language, protocol, and client. You’re not accomplishing anything that isn’t already possible, and you’re cutting yourself off from the vast majority of people by requiring them to install a whole new piece of software just to see if this idea is worth exploring.


How is this website so wrong?

I don’t have a static IP. I’m not in the United States. I’m not even in North America…

I’m literally on another continent which can be very easily verified using nothing more than a geoIP lookup, but they somehow place me somewhere 3,000+ miles away. And no, I’m not using a VPN.


This is neat, but it is decidedly a niche product with very limited application. I’m an old hat and I can’t see the inherent value proposition in this: why is this better than static pages with hyperlinks? That doesn’t, and frankly shouldn’t, require a whole new protocol and client. That’s what HTTP and HTML were originally built for.



They keep updating the list every week even if you’re not listening. Also I’ve used their service for years so they have me pretty well figured out.


I’ve honestly never been able to discern a difference in quality between 96kbps and 320kbps, even while wearing my ATH-M50s; maybe my hearing just sucks. 🤷‍♂️

Azuracast lets me stream up to 320kbps, but I choose 96kbps to save bandwidth.


Unfortunately no, but any client that supports the Subsonic API will work.



It’s good enough for my purposes, which mostly involve streaming it over 96kbps for playback on wireless headphones. It’s a small price to pay for the convenience of the automation.


Navidrome natively supports scrobbling. I also scrobble from Clementine on my desktop.

I’m downloading individual tracks much more than I’m downloading entire albums.


That was my mistake, and I’ll edit the post. I just verified that you are correct by checking a random subset of the MP3’s I have. I clearly got wrong information from somewhere.



Spotify pretty much has them down from my years of use. Even if you’re not coming back and listening regularly it will still update that playlist every week.

LastFM is getting my actual up to date listening habits as I use their scrobbling service with my music clients, including Navidrome.



You may have seen [my previous post over here](https://lemmy.dbzer0.com/post/770169), after I had just gotten everything set up initially. I've now expanded this with an additional script, [a github repo](https://github.com/Zetaphor/personal-auto-radio), and proper documentation. Here's a cleaner explanation:

I've taken on the challenge of self-hosting more of the services I regularly depend on. The latest target is Spotify. This meant I needed a simple and convenient way to listen to my music from anywhere, get new music into my collection, and still receive recommendations based on my interests and listening habits.

I now have what I think is a pretty ideal setup. Here's what it includes:

* A 24/7 radio station that plays my entire catalog ([link here if you're interested](radio.zetaphor.com/)). This is powered by [Azuracast](https://www.azuracast.com/) along with the scripts in the repo. The station link is using the Public Pages feature in Azuracast with a bunch of custom CSS.
* A Spotify-like experience that also supports mobile and offline playback. This is powered by [Navidrome](https://www.navidrome.org/) for web/desktop and [Substreamer](https://substreamerapp.com/) for mobile. Substreamer connects to Navidrome using the Subsonic API.
* A couple of scripts that let me easily download tracks/albums/playlists from Spotify and Youtube. I used these to bootstrap the collection and export my existing playlists from each service.
* A couple of scripts that automatically grab my latest recommendations from Spotify and LastFM, add them to Navidrome, and give me a nearly fully automated way to pick out the tracks I want to keep permanently.

That last point is the most interesting part in my opinion. Both scripts run on a weekly cron job that downloads my Discover Weekly playlist from Spotify and my current recommendations from LastFM. It then creates a playlist for each source for that week's collection and moves it into Navidrome.

I then browse that week's playlists at my leisure, using the "star" feature in Navidrome to decide what to keep. Once I'm done, I run another script manually that takes all of the starred tracks from those two playlists and moves them into my catalog, then deletes the remaining tracks and the playlists.

This means I just need to go through and listen to the recommendations and click a button on what to keep, and the rest is discarded automatically. It really doesn't get any simpler than this!

What remains is then available for on-demand playback through Navidrome and is also added to the full catalog that powers the 24/7 radio station.

**FAQs from the last thread**

**What is being used to download from X?** - `spotdl` is being used for Spotify. `pytube` is being used for LastFM and Youtube. spotdl is also just downloading tracks from Youtube under the hood.

**What is the audio quality of the downloaded tracks?** - Since these are coming from Youtube, everything is 128kbps VBR Opus. It's certainly not FLAC, but it's good enough for my enjoyment.
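For anyone curious what the weekly job might look like, here's a minimal Python sketch. This is not the code from the repo: the library path, playlist URL, M3U layout, and spotdl flags are all placeholder assumptions, so adjust them to your own setup.

```python
#!/usr/bin/env python3
"""Hypothetical weekly job: pull a Spotify playlist with spotdl and write an
M3U file so the week shows up as a playlist in Navidrome. Paths, the playlist
URL, and spotdl flags are illustrative only."""

import datetime
import subprocess
from pathlib import Path

MUSIC_ROOT = Path("/srv/music")                          # hypothetical library root
PLAYLIST_URL = "https://open.spotify.com/playlist/..."   # your Discover Weekly URL

week = datetime.date.today().isoformat()
week_dir = MUSIC_ROOT / "discover-weekly" / week
week_dir.mkdir(parents=True, exist_ok=True)

# Download every track in the playlist into this week's folder.
# Exact flags vary between spotdl versions; adjust to match yours.
subprocess.run(["spotdl", PLAYLIST_URL, "--output", str(week_dir)], check=True)

# Write a simple M3U listing the downloaded files.
tracks = sorted(week_dir.glob("*.mp3")) + sorted(week_dir.glob("*.opus"))
m3u = MUSIC_ROOT / "playlists" / f"discover-weekly-{week}.m3u"
m3u.parent.mkdir(parents=True, exist_ok=True)
m3u.write_text("\n".join(str(t) for t in tracks) + "\n")
print(f"Downloaded {len(tracks)} tracks into {week_dir}")
```

If Navidrome's playlist auto-import is enabled, an M3U like this should appear as a browsable playlist after the next library scan.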


No for-profit is nice, but they are the lesser shit of the two choices we have. Remember that the Mozilla Corporation is a for-profit; the Mozilla Foundation is the non-profit. There is a clear conflict of interest between those two entities.

I do and will continue to use their browser because it’s the only choice I have if I want to stand by my principle of supporting a free and open web.


What are the issues I have with Mozilla? They’re floundering with little direction and seemingly incompetent management.

They laid off a bunch of their key engineers while continuing to increase the CEO’s compensation. They keep making half-baked decisions about features and marketing that don’t seem conducive to their core offering, like the Pocket integration. They completely killed PWA integration, which now only works with an extension and third-party software. They retired BrowserID. They orphaned Thunderbird. There’s probably more I’m forgetting.


This is where ChatGPT and Codium.ai have been a godsend for me. Something that would have taken me anywhere from a few hours to more than a day to iterate on is now reduced to minutes or an hour. I don’t even always see it all the way through to completion, but just knowing that I can iterate on some version of it so quickly is often motivation enough to get started.

If you’re paying for the Plus subscription, GPT-4 with Code Interpreter is absolutely OP. Did you know you can hand it a zip file as a way of giving it multiple files at once?
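As a trivial illustration of that tip, bundling a project into a single zip is a one-liner with Python’s standard library (the paths here are placeholders):

```python
import shutil

# Pack everything under ./my_project into project_bundle.zip (hypothetical paths),
# ready to be handed to Code Interpreter as a single upload.
shutil.make_archive("project_bundle", "zip", root_dir="my_project")
```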


But now you have the opportunity to build it in Rust or Typescript! /s


We’re the minority; if this gets implemented, it’s endgame. Try convincing the billions of people who already don’t care enough to use Firefox to protect their privacy to now stop using Chrome because it’s killing the open web. Then tell them to stop using services they care about because DRM is bad.

At this point our only real hope is the EU decides to forcibly stop this, but I’m not holding my breath.


You’re saying this like Firefox is adding the shitty standard because they want to, and not because Google used their monopoly to force adoption of it, leaving Firefox to follow suit if they don’t want their users to have a broken experience.

If Google introduces a shitty standard to YouTube and Firefox doesn’t adopt it, do you honestly think users are going to care or understand and blame Google? No, they’ll get pissed because they think Firefox broke YouTube and they’ll move to Chrome.

This exact situation played out with the shadow DOM: Google implemented it in YouTube while it was still a draft standard, so all non-Chrome browsers ran worse because they had to use a polyfill.

That is why we’re telling people to stop using Chromium. If they didn’t have this monopoly, none of this would be possible. Mozilla has some issues as an organization, but do you honestly think the better choice is letting an advertising company decide how the web works?


That entirely depends on the employer, but in my anecdotal experience that has been the case. Especially in more recent years versus the start of my career (nearly 20 years ago).

The reality is that Computer Science is useful for building strong engineers over the long-term, but it doesn’t at all prepare you for the reality of working in a team environment and contributing code to a living project. They don’t even teach you git as far as I’m aware.

Contributing to open source demonstrates a lot of the real-world skills that are required in a workplace, beyond just having the comprehension and skill in the language/tool of choice you’re interviewing for.


Looks like it wasn’t just you, a bunch of large instances just had an outage


Just a heads up, you replied multiple times to this. If the client you’re using doesn’t submit immediately, that just means it’s not doing error handling properly and not disabling submit buttons while the request is in flight. You’ve actually submitted once for each time you pressed the button


Build an open source portfolio. Being able to show employers what I was capable of was a massive benefit both then and now. You can say you know all of these things, but when you’re looking at hundreds of applications one of the first things they do to reduce the pile is filter out people who don’t have some kind of online presence like Github. This allows them to see that you’re actively engaged with the field and if they want to interview you, to look at your code quality and experience.

A personal website that highlights your best work is also a good idea, as it helps distill down the things you’re ultimately going to end up talking about in an interview. It doesn’t need to be anything fancy, just something that shows you’re competent. I wouldn’t expect the person interviewing you to actually hit view source and criticize your choice of frontend framework.


I was interviewed with complex logic problems and a rigorous testing of my domain knowledge.

Most of what I do is updating copy and images.


This is also just the reality of the job market, especially in this industry. Dev positions get hundreds if not thousands of applications which all vary widely in quality.

I have 20 years of experience and a six-figure salary; the last time I went looking for work and was putting out applications, I easily sent out over 100 applications and only had 4 interviews. I’ve found it’s best to form a relationship with a competent recruiter and work with them any time you’re back on the market. They’re incentivized to find you a decent position so that they can make their commission. Of course, finding one who is decent is almost as hard as sending out applications, but once you do, it’s a relationship worth maintaining.


I’ve never been to college and my job title today is Software Architect, I’ve been doing this for nearly 20 years.

It was extremely hard at first to get a job because everyone wanted a BA, but that was also 20 years ago. Once I had some experience and could clearly demonstrate my capabilities they were more open to hiring me. The thing a degree shows is that you have some level of experience and commitment, but the reality is a BA in CompSci doesn’t actually prepare you for the reality of 99% of software development.

I think most companies these days have come to realize this. Unless you’re trying to apply to one of the FANG corps (or whatever the acronym is now) you’ll be just fine if you have a decent portfolio and can demonstrate an understanding of the fundamentals.


I certainly experienced this at the start of my career. Everyone wanted me to have at least a bachelor’s degree despite the fact that I was able to run circles around fresh college graduates. It wasn’t until someone gave me a chance and I had real-world experience that people stopped asking about my college education. In fact, later in my career, when they learn about the level of experience I have and that I’m entirely self-taught, it’s often seen as something positive. It’s a shitty catch-22.



I’ve just created my perfect automated music setup, including getting new recommendations
I recently decided to start taking on the challenge of selfhosting and curating my music collection. I originally started looking at Lidarr as I am already a big fan of Radarr and Sonarr, but it wasn't really what I was looking for. I'm not often seeking out full albums, and am more often finding my music by listening to single tracks from Spotify's Discover Weekly playlist. I needed a solution that would let me replicate this experience while hosting my own MP3s, and ideally be entirely automated.

I currently have the following setup running on a VPS:

* Azuracast - This provides me a streaming radio station that cycles through my entire library 24/7
* Navidrome - This fills the gap of the Spotify-like interface where I can play specific tracks, albums, or playlists

I bootstrapped my library with a Python script that parsed a list of Spotify URLs and downloaded all of the tracks with the spotdl library. This allowed me to grab my liked tracks, the playlists I had created, as well as a large number of albums I wanted.

I then used ChatGPT to write two Python scripts:

* The first script runs via cron every Monday and uses spotdl to grab the contents of my Discover Weekly playlist from Spotify. It puts all of the files into a folder with that week's date and also creates a playlist file. This way I can easily browse that week's playlist in Navidrome and decide what to keep. It also sends me an email on completion/error.
* The second script is a bit more complex. It accomplishes the same end result, but for all of my LastFM recommendations. This is done by spinning up a headless Chrome browser with Selenium in a docker container. It then logs into my LastFM account, parses each recommendation, and uses pytube to download the video links, since LastFM just links directly to Youtube videos. This list should change as I continue scrobbling via Navidrome and other sources, but I still need to determine how often the cron job should run.

My next step is figuring out how to connect to Azuracast/Navidrome using the many Subsonic-compatible clients so I can have mobile playback and things like offline playback. I'm currently looking at Substreamer for Android.

I'd also like to look into a more seamless way of picking out the tracks I want to keep and discard from the playlists in Navidrome. I'm considering writing something to check its SQL database for liked tracks in each playlist and automatically move those into the main folder/playlist that Azuracast is playing from.

This whole setup took me only a couple of days to create, and largely relied on ChatGPT to write the scripts and dockerfiles. I'm a capable programmer, but GPT-4 is absolutely OP if you know what you're trying to accomplish and how to debug its mistakes. That Selenium script only took me an hour from idea to completion, and I never modified the code by hand, only prompted it for corrections/additions.

If anyone is interested, [I've uploaded all the scripts to a gist](https://gist.github.com/Zetaphor/82cd8fff2d18da7b6e8fae3a074a7f8e); you just need to go through and update them with your credentials/URLs.
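To make the second script's flow more concrete, here is a rough Python sketch of the Selenium-plus-pytube approach described above. This is not the code from the gist: the Last.fm URL and CSS selector are guesses, login handling is omitted, and the output path is a placeholder.

```python
"""Rough sketch: drive headless Chrome with Selenium, collect the YouTube links
Last.fm exposes, then pull the audio with pytube. Selectors, URLs, and paths
are placeholders; Last.fm's real markup and the login step will differ."""

from pathlib import Path

from pytube import YouTube
from selenium import webdriver
from selenium.webdriver.common.by import By

OUT_DIR = Path("/srv/music/lastfm-recs")  # hypothetical destination folder
OUT_DIR.mkdir(parents=True, exist_ok=True)

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # run Chrome without a display
driver = webdriver.Chrome(options=options)

try:
    # Assumes the session is already authenticated (e.g. via a persisted
    # profile); the real script logs in first, which is omitted here.
    driver.get("https://www.last.fm/home/tracks")

    # Placeholder selector: collect anchors on the page that point at YouTube.
    links = [
        a.get_attribute("href")
        for a in driver.find_elements(By.CSS_SELECTOR, "a[href*='youtube.com/watch']")
    ]
finally:
    driver.quit()

for url in links:
    # Grab the audio-only stream for each recommended track.
    stream = YouTube(url).streams.filter(only_audio=True).first()
    if stream:
        stream.download(output_path=str(OUT_DIR))
        print(f"Downloaded {url}")
```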

cross-posted from: https://zemmy.cc/post/79525

> I currently have 4 different Android clients for Lemmy installed on my phone and none of them are what I'm looking for. Additionally, I've tried 3 different PWAs and they're still not what I want out of a browsing experience.
>
> So I've decided that if nobody else is going to make what I'm looking for, I'll have to do it myself. This is an early preview of the current unnamed client I'm working on.
>
> It will be a PWA supporting Android and iOS, though I don't own any Apple products, so support will extend only as far as they don't do dumb stuff to break PWA standards. [It's open source](https://github.com/Zetaphor/lemmy-loops/) and will be free to use.
>
> Currently the dev environment is hardcoded to my personal instance, as CORS support is restricted in the Lemmy server until a future release. This means all PWAs are actually proxying your requests through their server in order to rewrite the origin header. I don't intend to release this until CORS support is fully resolved, which should be soon.
>
> ~~I need help with a name! I was considering Infinity since I'm using that for the loading symbol, but there's already a reddit client with that name and I don't want to poach it if they decided to transition to Lemmy.~~ I've decided on Loops for the name!
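For anyone wondering what that proxy workaround looks like in practice, below is a minimal, hypothetical sketch (in Python with Flask and requests, not the client's actual code) of forwarding API calls server-side and attaching permissive CORS headers. The instance URL, route, and port are placeholders, and a real proxy would also need to handle preflight OPTIONS requests and auth headers.

```python
"""Hypothetical CORS proxy: forward API calls to a Lemmy instance server-side
so the browser's Origin header never reaches Lemmy, then add permissive CORS
headers so a PWA on another origin can read the response."""

import requests
from flask import Flask, Response, request

LEMMY_INSTANCE = "https://zemmy.cc"  # placeholder upstream instance
app = Flask(__name__)


@app.route("/api/<path:path>", methods=["GET", "POST", "PUT"])
def proxy(path):
    # Forward the request from the server, not the browser.
    upstream = requests.request(
        method=request.method,
        url=f"{LEMMY_INSTANCE}/api/{path}",
        params=request.args,
        json=request.get_json(silent=True),
        timeout=30,
    )
    resp = Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )
    # Permissive CORS so the PWA origin can read the response.
    resp.headers["Access-Control-Allow-Origin"] = "*"
    return resp


if __name__ == "__main__":
    app.run(port=8080)
```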