• 0 Posts
  • 89 Comments
Joined 1Y ago
Cake day: Jun 18, 2023


Can you make your Docker service start after the NFS mount, to rule that out?

A restart policy only takes effect after a container starts successfully. In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container which doesn’t start at all from going into a restart loop.

https://docs.docker.com/engine/containers/start-containers-automatically/#restart-policy-details

If your containers are crashing before that 10-second timeout, then they won’t restart.
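If you’re on systemd, a drop-in can enforce that ordering. A sketch, assuming the NFS share is mounted at /mnt/media (the drop-in filename and the mount unit name are assumptions; the unit name must match your fstab mount point, with `/` replaced by `-`):

```ini
# /etc/systemd/system/docker.service.d/wait-for-nfs.conf
[Unit]
# mnt-media.mount is systemd's generated unit for an /mnt/media fstab entry
After=mnt-media.mount
Requires=mnt-media.mount
```

Then `systemctl daemon-reload` and restart Docker. If the mount fails, Docker simply won’t start, which makes the failure obvious instead of leaving containers crash-looping against a missing path.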


Doesn’t even start up on my box, but doesn’t crash the kernel or system either, just a regular application crash


Kernel shouldn’t crash, and anything already running in memory will be okay-ish, but it will definitely get less and less stable. It won’t be possible to start new processes.

I have a Linux install on a USB SSD with a flaky connection; if I bumped the cord, the root filesystem would unmount. It was fairly resilient, but graphics would slowly start disappearing. I’m fairly sure I could cleanly reboot as long as I had a terminal open, but it’s been a while, so maybe I’m misremembering.

Still, the overall system becomes pretty useless, so I guess it’s fair to call it a crash.


There are Rust libraries to send signals; it might be better to use those rather than calling bash. E.g. https://docs.rs/nix/latest/nix/sys/signal/index.html

I’m guessing if the input was “”, then it would SIGKILL all processes? Less confident, but some tools behave slightly differently in an interactive console vs a non-interactive one; maybe ps has a different output format when used non-interactively?

Aside, you want three backticks and a newline to get code formatting :)

Ah, that definitely would feel like a crash. Did it accidentally send a kill signal to the cgroup? Or just iterate over all processes and signal them all?
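To make the footgun concrete, here’s a rough sketch in Python (os.kill standing in for nix’s signal::kill; the validation is the point, not the language). An empty or garbage pid string from parsing ps output must be rejected, because kill(0, sig) signals every process in your own process group:

```python
import os
import signal
import subprocess
import sys

def kill_pid(pid_str, sig=signal.SIGTERM):
    # Refuse anything that isn't a plain positive integer: kill(0, sig)
    # signals the caller's whole process group, and negative pids target
    # entire process groups -- exactly the "empty ps output" hazard when
    # shelling out to `kill $(ps ...)`.
    if not pid_str.strip().isdigit():
        raise ValueError(f"refusing suspicious pid: {pid_str!r}")
    pid = int(pid_str)
    if pid <= 0:
        raise ValueError("pid must be positive")
    os.kill(pid, sig)

# Usage sketch: signal a throwaway child process directly by pid,
# rather than round-tripping through text output of ps.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
kill_pid(str(child.pid))
child.wait()
```

The equivalent nix call takes a typed `Pid` rather than a string, which removes this whole class of bug at compile time.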


OP’s example was task management, which doesn’t require kernel modules.


Doesn’t explain OP’s task management example. And it won’t crash the kernel, just make things unresponsive.


That won’t crash your kernel, and I was more curious about OP’s example. Task management is basically reading some files and sending signals; it should be near impossible to crash the system.


How are you crashing your system?! Crashing program sure, but the entire system?


I think it’s better to keep your gateway basic, and run extra services on a separate Raspberry Pi or similar. Let your router/gateway focus on routing packets.


OpenWrt can run AdGuard, and as long as your gateway can run Docker, you can probably get Pi-hole working.
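A minimal compose sketch for that (the ports, paths, and web UI port here are assumptions; Pi-hole also needs port 53 free, so stop any resolver like dnsmasq bound to it on the gateway first):

```yaml
# docker-compose.yml -- hypothetical minimal Pi-hole setup
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"   # admin web UI
    volumes:
      - ./etc-pihole:/etc/pihole   # persist config across restarts
    restart: unless-stopped
```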


Unsafe doesn’t let you just ignore the borrow checker, which is what generally tripped me up when learning to write Rust.


For OpenWrt + WireGuard, see: https://cameroncros.github.io/wifi-condom.html

Looks like Tailscale should work on OpenWrt: https://openwrt.org/docs/guide-user/services/vpn/tailscale/start

For the WireGuard server, I am using Firezone, but they have pivoted to being a Tailscale clone, so I am on the legacy version, which is unsupported: https://www.firezone.dev/docs/deploy/docker

Edit: fixed link



Yup, AI is conceptually very broad. You could argue Pong has an AI; the other paddle acts on its own and makes decisions similarly to a human? Cows in Minecraft? CS bots?

You could also argue that Minecraft world generation isn’t too dissimilar from how image generators work. Both take a set of rules and then use math to generate an output.

I think I can accept generative AI (voice models/artwork) depending on the game. If a one-person indie dev uses it because they have no other options, fine. A AAA game just trying to save a buck, nah.



That is likely a speed test server within the same data center as your VPS, or they have special traffic shaping rules for it.

Try using iperf from your local box to the VPS and see what speeds you get
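For reference, the test looks something like this (iperf3 shown; 5201 is its default port, so open it in the VPS firewall first, and swap in your own VPS address):

```shell
# on the VPS
iperf3 -s

# on your local box
iperf3 -c vps.example.com        # upload test (local -> VPS)
iperf3 -c vps.example.com -R     # reverse mode: download test (VPS -> local)
```

That gives you raw TCP throughput with no disk or HTTP in the way, so you can tell whether the bottleneck is the network path or something else.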


Never heard that term, but it’s a very obscure concept, so it wouldn’t surprise me if it had multiple names. Probably vendor-specific names?

Seems quite a few people haven’t heard of it, hence a lot of the split DNS answers :/


I can’t remember exactly what it’s called, but something like router NAT loopback is what you want. I’ll have a look around, but if you set it right, things should work properly. It might be a router setting.

Found it: https://community.tp-link.com/en/home/stories/detail/1726


4 cores is a bit limiting, but it definitely depends on the usage. I only have 1 VM on my NUC; everything else is Docker.

I thought all the Core processors had VT-x extensions; I was using virtualization on my first-gen i7. They are very old and inefficient now though.


The i5-3470 is old, but it’s not that bad. Lots of people are homelabbing on NUCs which are only very slightly faster. Performance per watt will be terrible though. (I am on an i7-10710U, and I’ve yet to run out of steam so far - https://cpu.userbenchmark.com/Compare/Intel-Core-i7-10710U-vs-Intel-Core-i5-3470/m900004vs2771 )

It has VT-x/VT-d, so it should be okay for Proxmox; what makes you think it won’t work well?


Home Assistant is another option. Host the server and run the app on your phone. It’s not very granular though, and the user interface is not great.


Here in Aus, this is how the NBN is provided in some areas: there is an NBN coax-to-ethernet box, and then you can plug in your own router.

There is always a chance that your ISP is doing something weird that prevents that from working, but I think it should be fine.


I enjoyed Luigi’s Mansion 3. Never tried the previous games, but this one was fun. It has co-op as well, which you and your son may enjoy.


It’s not, but if the value of the data is low, it’s good enough. There is no point backing up Linux ISOs, but family photos definitely should be properly backed up according to 3-2-1.


It depends on the value of the data. Can you afford to replace them? Is there anything priceless on there (family photos etc)? Will the time to replace them be worth it?

If it’s not super critical, RAID might be good enough, as long as you have some redundancy. Otherwise, categorize your data into critical/non-critical and back up the critical stuff first?


Sorry, wasn’t meant to be condescending; you just seem fixated on file size when it sounds like RAM (and/or CPU?) is what you really want to optimise for. I was just pointing out that they aren’t necessarily correlated with Docker image size.

If you really want to cut down your CPU and RAM usage, and are okay with very limited functionality, you could probably write your own webserver to serve static files? Plain HTTP is not hard. But you’d want to steer clear of Python and Node, as they drag in the whole interpreter overhead.
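To illustrate how small “plain HTTP” really is, here’s a toy sketch (in Python only because it’s compact; the same dozen lines translate directly to any compiled language without the interpreter overhead). It serves one fixed body to one connection, with none of the parsing or error handling a real server needs:

```python
import socket
import threading

def serve_one(body=b"hello", host="127.0.0.1"):
    # Toy HTTP/1.0 server: accept a single connection, ignore the request,
    # and send back one static body. A real static-file server adds request
    # parsing, path lookup, and error handling, but the wire protocol
    # itself is just this.
    srv = socket.socket()
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        conn.recv(4096)          # read and discard request line + headers
        conn.sendall(
            b"HTTP/1.0 200 OK\r\n"
            b"Content-Type: text/plain\r\n"
            b"Content-Length: %d\r\n\r\n%s" % (len(body), body)
        )
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port
```

Any standard HTTP client will happily talk to it, which is the point: the response format is just a status line, headers, a blank line, and the bytes.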


RAM is not the same as storage; that 50 MB Docker image isn’t going to require 50 MB of RAM to run. But don’t let me hold you back from your crusade :D


Having PHP installed is just unnecessary attack surface.

Are you really struggling for space so much that 50 MB matters? An 8 GB USB stick can hold that 160 times over?


Just go nginx, anything else is faffing about. Busybox may not be security tested, so best to avoid it on the internet. PHP is pointless when it’s a static site with no PHP. I’d avoid freenginx until it’s clear that it is going to be supported. There is nothing wrong with stock nginx; the fork is largely political rather than technical.


We’ve all committed that sin before. It’s better to rely on it surviving the reboot than to try to prevent the reboot.

Also worth looking into some form of uptime monitoring software. When something goes down, you want to know about it asap.

And documenting your setup never hurts :D


Did the services fail to come back due to the bad reboot, or would they have failed to come back on a clean reboot too? I ugly-reboot my stuff all the time, and unless the hardware fails, I can be pretty sure it’s all going to come back. Getting your stuff to survive a reboot is probably a better spend of effort.



Yeah, or sprint to your colleague and ask them to force push their branch again :D

Another tactic for getting clean git commits is to do all your messy commit work in a scratch branch, and then, when you’re happy, create a new branch and, with meld, organise your changes into complete logical commits. We do that a little bit.
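In git terms that workflow is roughly this (branch names are made up):

```shell
# hack freely on a scratch branch, committing whenever
git switch -c scratch
# ... messy WIP commits ...

# when happy, start a clean branch from main
git switch -c feature-clean main
# pull over the scratch tree state without its history...
git restore --source=scratch --worktree --staged .
# ...then stage hunks interactively and commit in logical pieces
git add -p
git commit
```

The scratch history stays untouched as a safety net, and nothing here ever needs a force push.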


I do almost exactly that workflow as well, but I just know it’s bitten me before. Protecting main/dev is fine, but I have also accidentally force pushed to the wrong branch and wiped out its work as well.

Muscle memory + Fatigue == Bad time :/


I think a lot of these are opinions stated as facts.

The nitpicking one seems to be using a different definition of “nitpick”. To me, a nitpick is to pick on something entirely meaningless (e.g. full stops at the end of comments, slightly incorrect variable names, code alignment). If I see a review full of those, I assume the reviewer skipped the correctness checks and phoned in the review.

The git push --force one is definitely a controversial suggestion. I’m personally happy with doing that, but I have also personally accidentally force pushed dev/main and seen others do it. Squash on merge is probably a safer habit to have. Also, GitLab and Bitbucket both get a bit confused if you force push to a branch that is part of an MR.

Reviewer fixing problems is also situational. For open source stuff, if you rely on the submitter, you’ll frequently just end up with an abandoned PR. For team stuff, the original author may have already moved on to another ticket, so pushing it back may stretch out the development cycle and cause the code to become stale, and potentially unmergeable. Our solution is to just communicate: “This is wrong, I am going to fix and merge. Cool?”

The article is very light on how to actually review for correctness, which in my experience is the thing people struggle with most. Things to look for (Non-exhaustive):

  • C: Allocations, and deallocations.
    • Are there leaks in any codepaths?
    • Are scopes used correctly?
  • API usage: Are return values checked? Is the API called correctly? Safe APIs should be preferred over unsafe ones.
  • Thread safety: Are there locks? If yes, focus on these paths, locks are hard to get right. If not, is there anything that should be protected? Some APIs are not threadsafe.
  • Loops: Are bounds correct? Do they terminate correctly?
  • Comments: Do they match the code? Do they add value? (This is subjective, and down to team preferences)

I think they dropped their original plan for a unibody design, and there is a regular chassis underneath. That also could be rusting, but invisibly :)


If this is your kind of thing, you will enjoy doing crypto CTF challenges. There are a few RNG-reversing challenges.

I hate them tbh, too much math, not enough brain :(


I had something manual set up originally as well, but it became a bit of a maintenance hassle. Moving configs to devices was a bit of a pain, and generating keys wasn’t easy.