
Can confirm, GitLab has a container registry built in, at least in the Omnibus package installation.
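If it isn't exposed yet, a minimal sketch of turning it on in the Omnibus config (/etc/gitlab/gitlab.rb; the registry URL is a placeholder for your own):

registry_external_url 'https://registry.example.com'

followed by gitlab-ctl reconfigure to apply it.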


Late-stage capitalism is when everything you think of makes you anxious, because you have so little control.


I think you can use Grafana to present panels from different dashboards in one.


Oh, OP had me fooled; I thought this was an original xkcd. Well done on the photoshop.


I use a 2016 Asus Zenbook with an integrated Intel GPU.

The performance is comparable. The only real difference is latency, obviously, although it's fairly negligible on LAN. Encoding/decoding sometimes creates artifacts and smudges, but it gets better at higher bandwidth.


My box sits in my closet, so I can't really help much with Docker or VMs. But I use a Sunshine server with a Moonlight client. Keep in mind you can't fight the latency that comes from the distance between server and client. I can use 4G/5G for turn-based or active-pause games, but wouldn't try anything real-time. On a wired connection my ping is under a millisecond, enough to play shooters as badly as I do these days.

I use AMD for CPU and GPU, and wouldn't try Nvidia if using Linux as the server.

I used to run a VM on XenServer/XCP-ng and pass through the GPU with a dummy HDMI plug. A Windows 10 VM ran very well, bar pretty crap CPU performance, but I did get around 30 fps in 1080p Tarkov, sometimes more with AMD upscaling. Back then I was using Parsec, but found Sunshine and Moonlight work better for me.

I should also mention I never tried to support multiple users. You can probably play “local” multiplayer with both Parsec and Moonlight, but any setup that shares one GPU between VMs will require some proprietary vGPU fuckery, so the easiest route is a PC with multiple GPUs, assigning one to each VM directly.



I think this led me down the right path: https://community.ui.com/questions/Having-trouble-allowing-WOL-fowarding/5fa05081-125f-402b-a20c-ef1080e288d8#answer/5653fc4f-4d3a-4061-866c-f4c20f10d9b9

This is for EdgeRouter, which is what I use, but I suppose OPNsense can do this just as well.

Keep in mind: don't use 1.1.1.1 for your forwarding address. Use one in your LAN range, just outside the DHCP pool, because this type of static ARP entry will mess up connections to anything actually using that IP.

This is how it looks in my EdgeOS config:

protocols {
  static {
    arp 10.0.40.114 {
      hwaddr ff:ff:ff:ff:ff:ff
    }
  }
}

10.0.40.114 is the address I forward the WoL broadcast to.

Then I use an app called Wake On Lan on Android and set it up like this:

Hostname/IP/Broadcast address: 10.0.40.114
Device IP: [actual IP I want to wake up on the same VLAN/physical network]
WOL Port: 9
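For testing without the app, the wakeonlan CLI does the same thing (the MAC is a placeholder for the target machine's):

wakeonlan -i 10.0.40.114 -p 9 aa:bb:cc:dd:ee:ff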

This works fine if you're using the router as the gateway for both VPN and LAN, but it will get messy with masquerade and NAT - then you have to use port forwarding, I guess, and it should work from WAN.

I just wanted it to be over VPN to limit my exposure (even if WoL packets aren’t especially scary).


There is a trick where you send the WoL packet to a separate IP on the sender's network and have it repeated on the network of the machine you want to wake up.

I can't find docs on this on mobile, but I can look for it later.

It doesn't work like typical IP packet routing, though. I've only made it work over a VPN connection.

Another thing you can do is SSH into your router and send a WoL packet from there on the machine's LAN.
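Assuming the router is Linux-based and has something like etherwake available (not a given on every firmware; the interface, address, and MAC below are placeholders):

ssh admin@10.0.40.1
sudo etherwake -i eth1 aa:bb:cc:dd:ee:ff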


It's generic advice, but check out Kompose - it can translate a docker-compose.yml into a bunch of k8s objects, as far as it sensibly can.

Most issues come from setting up volumes, since Docker has different expectations of the underlying filesystem.

It does save a bunch of work of rewriting everything by hand.
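A minimal run, assuming kompose is installed and the compose file sits in the current directory:

kompose convert -f docker-compose.yml
kubectl apply -f .

convert writes one YAML file per generated Deployment/Service, so you can review and fix them up (volumes especially) before applying.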


If you don't need external calls, you don't need a SIP trunk.


Big companies? Try any company.


Yeah, winget has some issues, but I regularly use it to just update everything it can recognize.
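My usual invocation is just:

winget upgrade --all

which upgrades every package winget can match to a known source and skips the rest.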


In a hobby it’s easy to get carried away into doing things according to “best practices” when it’s not really the point.

I’ve done a lot of redundant boilerplate stuff in my homelab, and I justify it by “learnding”. It’s mostly perfectionism I don’t have time and energy for anymore.


If you're the only user and just want it working without much fuss, use a single db instance and forget about it. Less to maintain means better maintenance, as long as performance isn't the bigger concern.

It's fairly straightforward to migrate a db to a new Postgres instance, so you're not shooting your future self in the foot if you change your mind.
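A sketch of that migration with the stock tools (hostnames, user, and db name are placeholders), assuming the empty target database already exists on the new instance:

createdb -h new-host -U myuser mydb
pg_dump -h old-host -U myuser mydb | psql -h new-host -U myuser mydb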

Use PGTune to get as much as you can out of it and adjust if circumstances change.
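For a sense of scale, these are the kind of postgresql.conf values PGTune spits out; the numbers below are illustrative for roughly 4 GB of RAM and a web workload, so generate your own for your hardware:

shared_buffers = 1GB
effective_cache_size = 3GB
maintenance_work_mem = 256MB
work_mem = 16MB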


Macheads don’t mention other platforms, because why would you use anything else?


I had the budget to try a Xeon D SoC motherboard in a small ITX case. I put 64 GB of ECC RAM into it, though it could hold 128 GB. That server will be 8 years old this year. That particular Supermicro board was meant for some OEM router-like x86_64 appliance with 10G ports and remote management. I'm not sure if Intel or AMD have any CPUs in that segment anymore, but it's very light on wattage if mostly idling/maintaining VMs.

One option I'm looking at is getting a dedicated Hetzner server; even the auction and lowest-grade 'new' offerings are pretty good for the price once you account for energy costs and upfront gear cost.


I think it depends. In my limited experience (I haven't tested this thoroughly), most systems pick the first DNS address and only send requests to the second if the first doesn't respond.

This has led, at least a couple of times, to extremely long timeouts that made me think the system was unresponsive, especially with things like Kerberos SSH logins.

I personally set up my DHCP to provide Pi-hole as primary and my off-site IPA master as secondary (so I still have internal split-brain DNS working in case the entire VM host goes down).
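In ISC dhcpd terms (assuming that's your DHCP server; both addresses are placeholders for the Pi-hole and the off-site IPA master), that's one line:

option domain-name-servers 10.0.0.53, 203.0.113.53;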

Now I kinda want to test whether that off-site DNS gets any requests in normal use. Maybe that would explain some ad leaks on twitch.tv (likely Twitch just using the same hosts for video and ads, but who knows).

Edit: If that is indeed the case, I'm not looking forward to maintaining another Pi-hole off-site. Ehhh.


Longhorn isn't just about replication (which is not backup; RAID is not backup either). Also, if you only have one replica, is it even different from local storage at that point?

You'd use Longhorn to make sure applications don't choke and die when the storage they are using goes down. Also, I'm not sure if you can supply Longhorn storage to nodes that don't run it. I haven't tried it.

I suspect all pods that you'd define to use Longhorn would only come up on the node holding the Longhorn replica.

This is all just how I understand Longhorn works. I haven't tried it this way; my only experience is running it on every node, so if one node goes down, pods can just restart on any other node and carry on.
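For what it's worth, the replica count lives on the StorageClass, so if you ever test the single-replica case, a minimal sketch (the class name is arbitrary) looks like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"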



I've been running Pi-hole with several blocklists forever and I never see any ads.


A manager that can't read a simple try/catch commit? Why am I surprised.


Airlines barely do anything by themselves. All ground services are provided by contractors, so why not wheelchairs?


I've been using Podman instead of Docker for a couple of years now. I'm not a heavy user, but it never breaks for me, and I appreciate pods and the ease of turning a pod config into a Kubernetes deployment.
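That round trip is built in. A rough sketch, assuming a pod named mypod already exists:

podman generate kube mypod > mypod.yaml
podman play kube mypod.yaml

generate kube dumps the pod as Kubernetes YAML and play kube recreates it from that file (newer Podman also spells these podman kube generate / podman kube play).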



Well. That’s what I get for using a service without giving them my email and not checking their blog.


Did they change something? I’ve been port forwarding for a couple of years now.


This literally saved me from clicking away with a sensible chuckle. There goes my evening.