• 4 Posts
  • 108 Comments
Joined 1Y ago
Cake day: Jun 16, 2023


Attackers need to access the system kernel to exploit the Sinkclose vulnerability, so the system would have to already be compromised. The hack itself is a sophisticated vector that is usually only used by state-sponsored hackers, so most casual users should take that into account.

So it’s a vulnerability that requires you to already have been compromised. Hardly seems like news.

I can understand AMD only patching server chips, which by definition will be under greater threat. On the other hand, the bad publicity of not fixing more probably isn’t worth the savings.


I moved from an FX8350 to an R5 5600G a few years ago, having run the FX8350 for about 9 years. Initially I didn’t think I’d notice much difference, but frankly it’s an entirely different ballgame.


At this point if you use Chrome I think there is something wrong with you.


“Already stable enough”

  1. no it isn’t.
  2. it fucking should be, it’s been around 15 years!

It started with Emby and pihole. I’m now up to about 30 different services, including Vault, email, 3CX, Home Assistant, Firefox, Podgrab, etc.


I just set up netboot.xyz this evening as an experiment. It’s pretty cool.


Yes, you can do that - I do with OPNsense. The username and password are not obvious though; they’re probably not what you use to log in to the ISP portal with.

Most ISPs will have a brief FAQ on how to use third-party equipment, with the basics of what settings are important for your connection. You just need to enter them into pfSense correctly. Also, sometimes searching for “<ISP_name> pfsense” can turn up useful blogs and articles.


It’d be nice if email clients automatically checked for public keys for any address you enter in the To field, with a nice prompt that keys have been found to encrypt the message with. It doesn’t sound too difficult, and it could lead to much wider adoption of secure email.

Unfortunately most people get their email for free because companies like reading it, and stopping that means it might become a paid-for service. That’s something I’m happy to pay for, but many wouldn’t be.


You can download the public key from the web interface. I then imported it into gpg with `gpg --import public.asc` and used the commands above to generate the WKD structure.


No worries, I thought it was pretty interesting and I’d never heard of it before so thought I’d share.

The most difficult part for me was configuring nginx to properly serve the files. The gpg part was actually the easy bit.
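For anyone stuck on the same step, this is roughly the shape of the server block I mean - hostnames and paths here are examples, not my actual config. The WKD draft wants the keys reachable cross-origin, hence the CORS header:

```nginx
server {
    listen 443 ssl;
    server_name openpgpkey.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;  # example paths
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # the gpg-generated tree lives under /var/www/wkd/.well-known/openpgpkey/...
    location /.well-known/openpgpkey/ {
        root /var/www/wkd;
        default_type application/octet-stream;
        add_header Access-Control-Allow-Origin * always;
    }
}
```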


There are two methods: one uses a subdomain and one doesn’t (the one without is called ‘direct’). No special DNS entries are really required - I have a wildcard subdomain entry, which works for me. The key just needs to be available over HTTPS using one of the methods.


PGP key discovery for Email - WKD
I've run my own email server for a few years now without too many troubles. I also pay for a ProtonMail account that's been very good. But I've always struggled with PGP keys for encrypting messages to non-Proton users - basically everyone. The PGP key distribution setup just seemed half-baked and a bit broken, relying on central key servers. Then I noticed that emails I sent from my personal email to my company-provided email were being encrypted even though I wasn't doing anything to achieve this. This got me curious as to why that was happening, which led me to WKD (Web Key Directory).

It's such a simple idea for providing discoverable downloads of public keys, and it works really well now that I've set it up for my own emails. It's basically a way of discovering the public key for an email address by making it available over HTTPS at an address that can be calculated from the email address itself. So if your email is `name@example.com`, then the public key can be hosted at (in this case) `https://openpgpkey.example.com/.well-known/openpgpkey/example.com/hu/pmw31ijkbwshwfgsfaihtp5r4p55dzmc?l=name`. This is derived using a command like `gpg-wks-client --print-wkd-url name@example.com`. You just need an email client that can do this and find the key for you automatically.

When setting up your own server, you generate the content using the keys in your gpg keyring with `env GNUPGHOME=$(mktemp -d) gpg --locate-keys --auto-key-locate clear,wkd,nodefault name@example.com`. Move the generated folder structure to your webserver and you're basically good to go.

I have this working with Thunderbird, which now prompts me to do the discoverability step when I enter an email address that doesn't have an associated key. On Android, I've found OpenKeyChain can also do a search based just on the email address, which apps like K9 Mail (to be Thunderbird mail) can then use.
Anyway, I thought this was pretty cool and was excited to see such an improvement in seamless encryption integration. It'd be nicer if, on Thunderbird and K9, it all happened as soon as you enter an email address, rather than having a few extra steps to jump through to perform the search and confirm the keys. But it's a major improvement. Does your email provider have WKD set up, and do you use it already?
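The address derivation itself is simple: lowercase the local part, SHA-1 it, and z-base32 encode the digest. A minimal sketch in Python (the function names `wkd_hash` and `wkd_url` are my own for illustration, not from any library):

```python
import hashlib

# z-base32 alphabet as used by the WKD draft spec
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def wkd_hash(local_part: str) -> str:
    """SHA-1 of the lowercased local part, z-base32 encoded (160 bits -> 32 chars)."""
    digest = hashlib.sha1(local_part.lower().encode()).digest()
    bits = int.from_bytes(digest, "big")
    # take the 160 bits in 5-bit groups, most significant first
    return "".join(ZB32[(bits >> (155 - 5 * i)) & 31] for i in range(32))

def wkd_url(email: str) -> str:
    """Advanced-method URL, i.e. the openpgpkey.<domain> form shown above."""
    local, domain = email.split("@")
    return (f"https://openpgpkey.{domain}/.well-known/openpgpkey/"
            f"{domain}/hu/{wkd_hash(local)}?l={local}")

print(wkd_url("name@example.com"))
```

which should print the same URL that `gpg-wks-client --print-wkd-url` produces for that address.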

I’ve been using it for a few years. Really handy way of avoiding corporate firewall rules.


How’d you set that up with OPNsense failover? I have an OPNsense VM with input straight from the ISP’s FTTP box to the NIC on my server, so I can’t fail over to my second Proxmox box without swapping the cable over.


Run your own DNS server on your network, such as Unbound or pihole. Set up overrides so that domain.example.lan resolves to a local IP. Set your upstream DNS to something like 1.1.1.1 to resolve everything else. Set your DHCP server to give out the IP of the DNS server so clients will use it.
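As a sketch, the Unbound side of those steps might look like this (zone names and IPs are examples):

```
server:
  # local overrides: answer for the internal zone ourselves
  local-zone: "example.lan." static
  local-data: "service1.example.lan. IN A 192.168.1.10"
  local-data: "service2.example.lan. IN A 192.168.1.10"

# everything else goes upstream
forward-zone:
  name: "."
  forward-addr: 1.1.1.1
```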

You don’t need to add block lists if you don’t want.

You can also run a reverse proxy on your LAN and configure your DNS so that service1.example.lan and service2.example.lan both point to the same IP. The reverse proxy then routes each request based on the requested domain name, whether that’s to a separate server or to the same server on a different port.
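The reverse-proxy part is just name-based virtual hosts; in nginx it might look like this (names, IPs and ports are made up for illustration):

```nginx
server {
    listen 80;
    server_name service1.example.lan;
    location / { proxy_pass http://127.0.0.1:8096; }   # same box, different port
}

server {
    listen 80;
    server_name service2.example.lan;
    location / { proxy_pass http://192.168.1.20:80; }  # a separate server entirely
}
```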


I don’t understand it either. On one hand people say don’t remember addresses, use DNS and on the other DNS relies on static addresses but then every device is “supposed” to have random addresses via SLAAC or privacy addresses. It just doesn’t seem to tie together very well, but if you use them like IPv4 addresses you’re apparently doing it wrong.


RAID IS NOT BACKUP RAID IS NOT BACKUP RAID IS NOT BACKUP


Don’t use Red drives for a NAS!! You need the Red Plus (or is it Red Pro?) disks, as they’re CMR.

I’d go for Ultrastar drives personally. There’s a few really good videos online analyzing the backblaze stats for different drives that are well worth watching.


I received so much spam and abuse of my network from .xyz domains that they are fully blocked in every conceivable way from being accessed or accessing my network.


Pretty sure the Seagate USB disks I use for backup are SMR, and sustained writes are awfully slow. Luckily I’ve discovered restic for backing up, which lowered a 1.5TB weekly incremental backup from 9 hrs to 1 min.
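For anyone curious, restic gets that speedup because after the first snapshot it skips files that look unchanged and only uploads new chunks of the rest. The whole workflow is a handful of commands (the repository path and retention policy here are examples, not my exact setup):

```shell
restic -r /mnt/usb/restic-repo init                            # one-off: create the repository
restic -r /mnt/usb/restic-repo backup /data                    # every run after the first is incremental
restic -r /mnt/usb/restic-repo snapshots                       # list available snapshots
restic -r /mnt/usb/restic-repo forget --keep-weekly 8 --prune  # optional: retention + reclaim space
```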


I highly recommend watching this guy’s videos analyzing the Backblaze data https://www.youtube.com/watch?v=IgJ6YolLxYE&t=1

And a comparison of the different WD drive colours, which might not be what you expect https://www.youtube.com/watch?v=QDyqNry_mDo&t=2


And why I no longer run NC. Every time it would fuck itself to death and I’d have to start from scratch again.


Ahh yes, the first time it is defined is in the conclusion after being used 25 times previously in the article.




I’ve been running 3CX for a couple of years with a Voicehost trunk configured. I found it much simpler than FreePBX to set up, and maintenance has been a breeze. There are apps or a web-based option too. 3CX can be a little picky with older unsupported hardware - the old Cisco phone I bought was a tricky setup, but the Yealink phone I have was plug-and-play easy.

The tricky bit was configuring the OPNsense router and firewall to correctly handle all the ports, but I think that’d be the same for any solution, and for an internal-only setup it’s probably not required.


I mean it happened to be mail, but it could have been any service on a server without enough resources. Just bad luck for me this time.

Setting up the mail server was a bit of a pain, but so was setting up a Lemmy server. For 6 years it really has been plain sailing, so I was due a change in fortune, I guess.


Spent 7 hours trying to fix my iredmail server
I noticed that I wasn't getting many mails (I need better monitoring), and discovered that my iredmail server was poorly. I have spent far too much time and energy on getting it back and working these past few days, but I've finally got it back up and stable.

Some background: I've had iredmail running for probably going on 6 years now and have had very few issues at all. It runs on an Ubuntu VM on Proxmox, and originally ran in the same VM on ESXi (I migrated it over). I haven't changed anything to do with the VM for years other than the Ubuntu LTS updates every 2-3 years; it's always been there and stable. I occasionally update the Ubuntu OS and iredmail itself, no problems.

Back to the problem... I noticed that Postfix was running OK, but was showing a bunch of errors about ClamAV not being able to connect. Odd. I then noticed that amavis was not running and seemed to have just died. I couldn't find any reason in any log file. Very strange. A bunch of hunting, checking the config file history in the git repo: nothing significant for years. I found that restarting the server got everything back up and running. Great, let's go to bed...

I woke up the next morning to find that amavis was dead again - it only lasted about 40 mins and then just closed for no reason. Right, OK, time to turn off ClamAV, as that seemed to keep coming up whilst looking. Follow the guide, all is well. Hmm, this seems to be working, but I don't really want ClamAV off. A whole bunch of duck-duck-going and I still couldn't figure out a root cause.

And then it clicked: the thing that was causing amavis to close was that it was running out of memory and being killed. Bump the VM's memory up to 4GB, re-enable everything as it originally was and... it seems to have worked. It's been going strong for over a day now. I don't know what's changed recently to push the memory requirements up, but at least it's now fixed, and it took all of 2 minutes to adjust.
The joys of selfhosting!
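If anyone hits something similar: when the kernel's OOM killer is the culprit it does log the kill, so a couple of standard commands would have shortened the hunt considerably (illustrative, from a stock Ubuntu box):

```shell
# did the OOM killer fire, and what did it pick?
dmesg -T | grep -i -E "out of memory|killed process"
journalctl -k | grep -i oom

# who is eating memory right now?
ps aux --sort=-%mem | head -n 10
```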

…until they change their prices. Always make sure you have a local copy and a way out


NFS: Heat. Picked it up for only a couple of quid ages ago but have recently started to enjoy it. When I first got it, it didn’t run all that well on Proton, but now it is silky smooth. There’s a real sense of speed as the camera moves about. I’m just about getting the hang of the drift mechanic and making some good progress. It’s quite a lot of fun really.


Exactly. You’re connecting this to a PC of some sort, right? So why not just put the disks in the case? The OCW on UK Amazon is £440, which isn’t far off what I spent on my server build.


That is very expensive. Why not just get a case that’ll fit 4 drives and an HBA in IT mode, for a quarter of the price?


I’ve been very happy with OPNsense running as a VM on both ESXi and now Proxmox. Lots of configuration options, and I was able to set up some complicated firewall rules easily.


Just had a thought. It was wildcard subdomain I couldn’t do with namecheap. Things like *.domain.tld




Maybe it’s different now, but it didn’t use to be possible to do that.


Namecheap, because they’ve lived up to their name. The DNS for my domains is all on Cloudflare though, as I can automate my Let’s Encrypt renewals that way, which I couldn’t on plain old Namecheap.


It wouldn’t matter to them really. Just look at how many people have gmail accounts.

They don’t even have to send the whole messages back to base. They could be categorizing your messages into themes and sending those back as small category flags, then using them to build a profile on you for advertising.

You mention something on the theme of ‘broken boiler’ in a message; that gets analyzed on the client into a category of ‘interest in heating / boiler repair’, plus some adjacent categories based on your demographic. The categorization gets sent back, and the next website you visit has an ad for British Gas boiler repair.


Well, you type messages in plain text, and they decrypt them to show you the messages at the other end. So they can do the nefarious processing on the client side and send the results back to the mother ship. E2EE is only good when you trust the two ends, and with WhatsApp and Messenger you shouldn’t trust the ends.


I probably have more accessible from outside than not. Many are required: hosting a website, a media server I can access from anywhere outside the house, my phone system, etc. Some I used to use more than I do now: podcast service, that sort of thing. Then a bunch that are internal only. My phone connects home over Wireguard so that’s pretty convenient when out and about for accessing internal only systems.


As soon as you have a requirement for large reliable storage, you’re into at least the small-desktop arena with a few HDDs, at which point it’s more efficient to just have the small PC and ditch the RPi.


Upgraded Proxmox 7 to 8
This was a very nerve-wracking experience, as I'd never gone through a major Proxmox version update before and I had spent a lot of time getting everything just so, with lots of config around disks and VLANs. The instructions were also a big long page, which never fills me with confidence, as it normally means there are a lot of holes to fall into.

My initial issue was that it says to perform the upgrade with no VMs running, but it requires an internet connection and my router is OPNsense in a VM. Thankfully `apt dist-upgrade --download-only`, shutting down the OPNsense VM, and then `apt dist-upgrade` did the trick.

A few config files changed - I always hate this part of Debian upgrades - but nothing major or of importance was impacted. A nervous reboot and everything was back up running the new Proxmox with the new kernel.

Surprisingly smooth overall, and the most time-consuming part by far was backing up my VMs just in case. The upgrade itself, including the reboot, was probably 15 mins; the backups and making sure I was prepared and mentally ready took about an hour. Compared to upgrading ESXi on old hardware like I was doing last year, it was a breeze. Highly recommended, would upgrade again.
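For anyone with the same router-in-a-VM problem, the sequence was roughly this (the vmid is an example, and the apt sources need switching to bookworm per the official upgrade guide first):

```shell
pve7to8 --full                     # Proxmox's built-in pre-upgrade checker
apt update
apt dist-upgrade --download-only   # fetch all packages while the router VM (and WAN) is still up
qm shutdown 100                    # now stop the router VM (100 = example vmid)
apt dist-upgrade                   # installs from the local cache, no WAN needed
reboot
```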

I set up Friendica as my first foray onto the fediverse. It worked well but, as it turns out, doesn't work that well with Lemmy, which was my main use case.

Whilst trying to fix DNS issues setting up a Lemmy instance instead, I noticed my DNS logs were rather full. My Unbound DNS was getting 40k requests every 10 mins to *.activitypub-troll.cf. I don't know who or what that is, but blocking it didn't reduce the activity. At first I thought it was something to do with Lemmy, as I'd forgotten I still had Friendica running. Thankfully, stopping the Friendica service brought the DNS requests back to normal.

So if you've set something up recently, you might want to check whether there have been any consequences in your service logs.