• 1 Post
  • 82 Comments
Joined 1Y ago
Cake day: Jun 10, 2023


I’m curious if this will improve DLC mismatches. For example, I’ve purchased most of the map DLCs for Euro & American Truck Simulator, but my wife only purchased the base game.

From memory, she could previously access all of the DLC via library sharing until she purchased the game herself; after that she could only access the base game and not the shared DLC. It’s probably cleanest to keep it that way, since you never know how different games handle DLC being activated and deactivated within an existing save, but it would be nice not to punish someone for playing a game with DLC via library sharing and then purchasing the game and DLC for themselves later.


For anything public facing, only use key-based authentication. Passwords carry too much risk for public-facing SSH.
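
If it helps, here’s a minimal sketch of the relevant sshd_config directives (the nologin-style hardening and reload command assume a typical systemd-based distro, and you should confirm your key login works before closing your last session):

```
# /etc/ssh/sshd_config -- key-only auth for a public-facing box
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
# "prohibit-password" allows root only with a key; use "no" to block root logins entirely
PermitRootLogin prohibit-password
```

Then reload the daemon (something like sudo systemctl reload sshd, depending on the distro).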


In the context of RuneScape this is just a hellish mess, because it’s ultimately a codebase from the late ’90s with graphics created anywhere from the early ’00s to the mid ’20s. Oh, and as an MMORPG, anyone who was a player but stops playing is a lost sale, so no pressure at all.


The Sims 4 actually added a similar approach to character creation about two years ago, though it’s a very different kind of game with a very different market.

Off the top of my head, it has options for a male-presenting body type and a female-presenting body type, sliders for fat and muscle (and you can generally reshape most of the body), and the available clothing and hairstyles got sorted into masculine and feminine, with (I believe) the more traditionally gender-neutral items placed in both. Then for biological purposes there’s “can pee standing up/cannot pee standing up” and “can impregnate/can be impregnated”. It defaults to the standard Male/Female presets but makes it easy to customize, and a good mix of the default townies (NPCs) are all over the spectrum.

They also recently added more complex relationship and romance preferences, so Sims can be sexually bi but romantically straight, for example, and it also expanded to allow various levels of openness in relationships as well as poly relationships.



so all rules would have to be applied on the Pi itself

Sounds like you’ll want to set up iptables.
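
As a rough starting point, something like the sketch below applied on the Pi itself (the LAN subnet is a made-up example; adjust it to your network, and be careful not to lock yourself out of SSH):

```
#!/bin/bash
# Basic inbound policy on the Pi (hypothetical LAN of 192.168.1.0/24)

# Keep already-established connections working
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Always allow loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# Allow SSH only from the local LAN
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT

# Drop everything else that comes in
iptables -P INPUT DROP
```

Note these rules don’t survive a reboot on their own; a package like iptables-persistent (or nftables, the modern replacement) takes care of that.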


A big part of it comes from the security model and Linux historically being a multi-user environment. root owns the root directory /, which is where all of the system files live. A normal user just has access to their own home directory /home/username and read-only access to things a normal user needs, like /bin where programs are stored (hence the #!/bin/bash at the top of lots of bash scripts; it tells the system which program to run the script with).

Because of this model, a normal user can only mess up their own files, while root can mess up everyone’s files and of course make the system non-bootable. But it also means you can have user Bob signed in and doing stuff but unable to access user Alice’s files, while user Alice is doing her own stuff and even running the same programs that Bob is running (since they’re read-only there’s no conflict), and then the administrator can log in as root to install something because they got a ticket to install such-and-such for so-and-so.

Back to your point about sudo: sudo is Super User Do, so you are running a single command as root. By running it as root you can potentially mess with whatever Alice and Bob might be doing, and most importantly, whatever you run with sudo can potentially affect any file on the computer. So if you run the classic rm -rf /, it will delete every file the current user has write access to; if Bob runs it, it’ll delete all of /home/bob/, but Alice will be unaffected and the admin can still log in as root to fix things. But if you run it as root (modern rm will actually refuse to operate on / itself unless you add --no-preserve-root, but the point stands), you’ll quickly find the server unable to boot, and both Alice and Bob will be very upset that they can’t access the server or their files.

If you host a website you’ll generally take advantage of this by giving the web server user only read access to the www folder, so that web users can only see webpages and can’t start reading random system files. For other server software you can create a dedicated user to run it as, so if someone were to somehow exploit a vulnerability and gain access to that user, they can only mess up that software’s files and no system files.
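
As a rough illustration of that pattern (the user name and paths here are hypothetical, and most distro packages for web servers already set up something similar for you):

```
# Create a system user with no login shell to run the service as
sudo useradd --system --shell /usr/sbin/nologin -d /var/www mywebapp

# Web content is owned by root and world-readable, so the service
# user can serve it but never modify it
sudo chown -R root:root /var/www/html
sudo chmod -R a+rX /var/www/html

# If the service needs one writable directory (uploads, cache), scope it tightly
sudo install -d -o mywebapp -g mywebapp -m 750 /var/www/uploads
```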


I got one for work. It literally just pastes into ChatGPT


I have memories of some random afternoons at the consulting firm my mom worked at, where everyone’s just poking at spreadsheets. I can’t imagine how cool the memory of going into the server farm and doing some hardware work there would be


Folder structures are a bizarre thing for many people

When learning about this, I found out that in the analog days folks would actually put physical folders inside of physical folders, and it both makes tons of sense and is mind-blowing at the same time. –Late Millennial born to IT parents


With the amount of password resets I have to do at work, I can’t say I’m shocked



Huh! Thank you very much for the detailed answer, that’s extremely interesting!



You should NOT have a WG tunnel from the home network to the VPS with fully unrestricted access to everything.

This is what I came here to make sure was said. Use your firewall to severely restrict access from your public endpoint. Your WireGuard tunnel is effectively a DMZ, so firewall it off accordingly.
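
As one hedged example of what that could look like on the machine terminating the tunnel at home (wg0, the destination address and the port are all placeholders for whatever single service the VPS actually needs to reach):

```
# Allow the VPS end of the tunnel to reach exactly one internal service
iptables -A FORWARD -i wg0 -p tcp -d 192.168.10.20 --dport 8096 -j ACCEPT

# Allow replies to connections that were already permitted
iptables -A FORWARD -i wg0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Refuse everything else arriving over the tunnel
iptables -A FORWARD -i wg0 -j DROP
```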


The really nice thing about Tailscale for accessing your hosted services is that absolutely nothing can connect without authenticating through a professionally hosted, standard identity provider, and there are no public ports for script kiddies to scan for, spot and start hammering on. There are thousands of bots that do nothing but scan the internet for hosted services and then try to compromise them, so not even showing up on those scans is a good thing.

For example, I have Tailscale on my Minecraft server and connect to it via Tailscale when away from home. If a buddy wants to join, I just send them a link sharing the machine, and they can install Tailscale and connect to it normally. If for some reason a buddy needs to be cut off, I can just stop sharing to that account in Tailscale and they can no longer access the machine.

The biggest challenge of Tailscale is also its biggest benefit. Nothing can connect without going through the Tailscale client, so if my buddy can’t/won’t install Tailscale, they can’t join my Minecraft server.
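
For anyone curious, the server-side flow is roughly this (assuming the default Minecraft port of 25565; the sharing itself happens in the Tailscale admin console):

```
# On the Minecraft server
sudo tailscale up    # authenticate the machine to your tailnet
tailscale ip -4      # prints the 100.x.y.z address buddies will connect to
tailscale status     # sanity check that the machine shows as connected

# A buddy who accepted the share then points their Minecraft client at
# <that 100.x.y.z address>:25565 (or the MagicDNS name, if enabled)
```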


No problem! I’m just an information sponge and I’ve lucked out with really good mentors so far in my career to learn from


So from my experience you generally will have different zones of security. The outside internet is obviously entirely untrusted, so block every incoming connection except those you really need, and even then ideally keep them all blocked (especially for a home network). Then you generally have your guest network, which might need access to some hosted resources but is largely just used by guests to connect to the internet. Next is your client network, where your computer likely lives, which probably gets access to all hosted resources but no management access (or, depending on how much you want to trust your primary PC, limit that to just your main PC). And finally your datacenter network, where you hopefully trust everything running in there.

You generally work with these zones and write rules based on the zone the traffic is coming from, with some exceptions. For example, I might not want to give the guest network any access to my datacenter network except for access to my Jellyfin, so I’ll create a rule allowing only TCP web traffic from that network to a specific port on a specific IP/hostname.

A common way to achieve this is with a DMZ network: a network that sits between all of your other networks and relies heavily on routing and firewalls. Public services and routers get IP addresses on the DMZ, and your firewall only allows specific paths: the outside internet can open connections to the web ports of the web server and nothing else, the web server can’t open connections to your other networks, only specific machines/networks are allowed to access the SSH port of the web server, etc. The DMZ is where trusted and untrusted connections mix, hence why it’s named after the zone that belongs to both North and South Korea, where both are allowed but also neither is, and where one only goes with specific purpose and explicit permission.
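
To make the zone idea concrete, here’s a hand-wavy iptables-style sketch of those paths on the router doing the forwarding (every interface name and address is a placeholder; pfSense/OPNsense express the same rules through their GUI):

```
# Replies to connections that were already allowed
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Internet may open connections only to the DMZ web server's HTTPS port
iptables -A FORWARD -i wan0 -o dmz0 -p tcp -d 203.0.113.10 --dport 443 -j ACCEPT

# Guests may reach Jellyfin on one datacenter host and nothing else
iptables -A FORWARD -i guest0 -o dc0 -p tcp -d 10.0.50.5 --dport 8096 -j ACCEPT

# Only the trusted client network may open SSH to the web server
iptables -A FORWARD -i lan0 -o dmz0 -p tcp -d 203.0.113.10 --dport 22 -j ACCEPT

# Everything else, including DMZ -> internal, is refused by default
iptables -P FORWARD DROP
```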

I was a bit hesitant to do firewall rules based off of IP addresses, as a compromised host could change its IP address

Realistically, any identifier you can write firewall rules based off of can be forged in some way. A rogue machine can change its hostname, IP address and MAC address (and many devices randomize their MAC address these days). In enterprises this is generally mitigated by limiting a network to Ethernet-only access, or via 802.1X authentication on WiFi and potentially even on Ethernet. (You can also take the approach of MAC address whitelists, and some switches even allow for “sticky” MAC addresses, where the first MAC address that connects is whitelisted until either the switch is rebooted or an administrator explicitly clears/allows the MAC address.)
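
For reference, the sticky approach on a Cisco-style access switch looks roughly like this (the interface name is just an example):

```
interface GigabitEthernet0/10
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address sticky
 switchport port-security violation shutdown
```

With violation shutdown the port err-disables if a second MAC shows up, which is the behavior described above.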

However, if each host is on its own VLAN, then I could add a firewall rule to only allow through the 1 “legitimate” IP per VLAN

You could go crazy and do everything at L3 (which your idea is basically doing, just with extra steps), but that sounds like far more effort than it’s worth, since now you’re making every client also act as a router, you lose a ton of efficiency both in configuration and in routing & switching, and you’ve changed the type of threats you’re vulnerable to.

Generally in the enterprise, risks like the one you’re trying to mitigate are handled through reporting. An automated alert email is sent when a new device connects to a network that should never have new devices connect to it; then you kill the connection, verify with the team whether it was any of them, and investigate if it wasn’t.

Realistically as a home network your threat model is automated scripts and maybe a script kiddie trying to get in. You really just need higher than average security to mitigate such a threat model (and average security is a shit show)

I feel like I may have to allow a couple CT/VMs to communicate without going through the firewall simply for performance reasons. Has that ever been a concern for you?

Security is always a trade-off against convenience and speed. You have to decide what is an acceptable compromise between security and efficiency.

Generally, for anything virtual, when you aren’t sure what to do, look at what the physical solution would be. For example, network storage is very bandwidth-intensive, latency-sensitive and security-sensitive. Physically this is usually secured as a separate network with no routers, so that most security overhead can be disabled. So at the virtual level you’d tackle it with a separate virtual network connected to a second interface, plus firewall rules on the other interfaces to disallow incoming and outgoing connections to the storage network.

Edit: I just realized I never answered your first question. In short, from what I’ve seen most enterprises put one firewall from a vendor like Fortinet, Zscaler, Palo Alto, etc. right on the edge of the network closest to the internet, then either rely entirely on that for firewalling, or use it to firewall off the outside internet and do additional firewalling with a different tool inside the network. For example, a bank I worked at had a pair of redundant L3 switches (Nexus 9Ks specifically) which handled all of the routing for the bank’s networks, and between those and the internet sat the Fortinet box, which was managed by an outside vendor. While I was there, as part of hardening ahead of a scheduled red team audit, we set up firewall rules (I’m blanking on the Cisco term for it, but they’re ultimately just firewall rules) on the L3 switches to limit access to the more sensitive networks and services.


It really sounds like you need to dive into firewall rules. Generally you lean on your firewall to allow and restrict access to services. Probably the easiest place to start is to set up pfSense/OPNsense, since they have a really clean interface for creating rules. Proxmox’s built-in firewall is nice too, but configuring the firewall per VM would probably get annoying and difficult after a while.

And as you learn more about firewalls, learning how subnetting works will allow for more efficient rules (for example, if you have 192.168.0.0/23, 192.168.2.0/24 and 192.168.3.0/24 as the networks you’re allowing traffic to/from, you can enter one firewall rule for 192.168.0.0/22 rather than three separate rules).
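
A quick way to sanity-check that kind of summarization from a shell, assuming the ipcalc utility is installed (output format varies a bit between the Debian and Red Hat versions):

```
# The summary network should cover all three smaller ones
ipcalc 192.168.0.0/22
# expect a host range of roughly 192.168.0.1 - 192.168.3.254,
# which contains 192.168.0.0/23, 192.168.2.0/24 and 192.168.3.0/24

# Compare against the individual networks
ipcalc 192.168.0.0/23
ipcalc 192.168.2.0/24
ipcalc 192.168.3.0/24
```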


I haven’t been so lucky. Grandma gave our first child both a phone and a tablet before she was 2 (against our wishes) and lets her have full unsupervised access to whatever she wants to watch. My wife now is a stay at home mom to keep Grandma’s influence limited (she also plays fast and loose with regards to safety)

But back on topic: we usually ask the kids what they want to watch, and if we feel they’ve been watching too much trash television we’ll say “let’s watch something on PBS Kids” and let them pick something on PBS to watch (because that’s our go-to for higher-quality kids content), so about 70% of their screen time is on high-quality content and the other 30% is their choice.


I already said in the original post that I plan on selling off and giving away ~15 of them, keeping a few as spares, and only actually leaving one on 24/7.

bare metal machines which take IP addresses, against just running it in VM’s which have IP addresses

Both bare metal and VMs require IPs; it’s just about what networks you toss them on. Thanks to NAT, IPs are free, and there are about 18 million of them to pick from in just the private IPv4 space.

The big reason for bare metal when clustering is that it takes the guesswork out of virtual networking, since there are physical cables to trace. I don’t have to guess whether a given virtual network has an L3 device that the virtualization platform helpfully added or is all L2, I can watch the blinky lights for an estimate of how much activity is going on on the network, and I can physically degrade a connection if I want to simulate an unreliable link to a remote site. I can yank the power on a physical machine to simulate a power/host failure; with a VM you have to hope the host actually yanks the virtual power and doesn’t do some pre-shutdown stuff before killing the VM to protect you from yourself. Sure, you can ultimately do all of this virtually, but having a few physical machines in the mix takes the guesswork out of it and makes your labbing more “real world”.

I also want to invest the time and money into doing some real clustering technologies somewhere close to right. Ever since I ran a Ceph cluster in college on DDR2-era hardware over gigabit links, I’ve been curious what level of investment is needed to make Ceph perform reasonably, and how Ceph compares to, say, GlusterFS. I also want to set up an OpenShift cluster to play with, and that calls for about five 4-8 core, 32GB RAM machines as a minimum (which happens to be the maximum hardware config of these machines). Similar with Harvester HCI.

It just takes a lot of extra power and doesn’t achieve much

I just plan on running all of them just long enough to get some benchmark porn then starting to sell them off. Most won’t even be plugged in for more than a few hours before I sell them off

there is no real reason to do this and I don’t understand so many people hyping it up.

Because it’s fun? I got 25 computers for a bit more than the price of one (based on current eBay pricing). Why not do some stupid silly stuff while I have all of them? Why have an actual reason beyond “because I can!”

25 PC’s does seem slightly overkill. I can imagine 3-5 max.

25 computers is definitely overkill, but the auction wasn’t for 6 computers, it was for 25 of them. And again, I seriously expected to be out of the running, with the winning bid over a grand. I didn’t expect to get 25 computers for about the price of one. But now I have them, so I’m gonna play with them.


I won’t be leaving all of them on for long at all. I’ve got a few basically unused 15A electrical circuits in the unfinished basement (I can see the wires and visually trace the entire runs). I’ll probably only run all 25 long enough to run a Linpack benchmark and maybe run some kind of AI model on the distributed compute, then start getting rid of at least half of them.


Although he’d also need 25 monitors lol

Back to the government auctions then!


12 cents per kilowatt-hour. I certainly don’t plan on leaving more than a couple on long term. I might get lucky with the weather and need the heating though :)


State government, and it says they come with SSDs. They came from a school so presumably they’re from a lab or are upgraded staff PCs, both would be pretty low sensitivity. Maybe I’ll learn the final test answers for Algebra 1 at worst!

Might be fun to do some forensic data recovery and see if anything was missed though


I think you’re not giving 4th gen enough credit. My wife’s soon-to-be-upgraded desktop is built on a 4th-gen i5 platform, and it generally does the job to a decent level. I was rocking a 4790K and a GTX 970 until 2022, and my work computer in 2022 was on an even older i5-2500 (held back more by the spinning hard drive than anything; obviously not a great job, but I found something much better in 2022). My last ewaste desktop-turned-server was powered by an i5-6500 (which is a few percentage points better in performance than the 4th-gen equivalent), and I have a laptop I use for web browsing and media consumption that’s got a 6700HQ in it.

I’ve already got a few people tentatively interested, and I honestly accepted the possibility of having to pay to recycle them later on. Should be a fun series of projects to be had with this pallet of not-quite-ewaste


This is pretty high on the to-do list. I plan on virtualizing a bunch of it, but it would be pretty easy to have one desktop hosting each subnet of client PCs and one hosting the datacenter subnet. Having several hosts to physically network means less time spent verifying the virtual networks work as intended.

Also playing with different deployment tools is a goal too. Having 2-3 nearly-identical systems should be really useful for creating unified Windows images for deployment testing


The thought did cross my mind to run Linpack and see where I fall on the Top500 (or the Top500 of 2000 for example for a more fair comparison haha)


From the listing photos these actually have half-height expansion slots! So GPU options are practically nonexistent, but networking and storage options are blown wide open compared to the mini PCs that are more prevalent now.


4th-gen Intel i5s, 8GB of RAM and 256GB SSDs, so not terrible for a basic Windows desktop even today (except, of course, for the fact that no supported Windows desktop operating system will officially support these systems come Q4 2025).

But don’t get your hopes up; when I’ve bid on auctions like this before, the lots have gone for closer to $80 per computer, so I was genuinely surprised I could win with such a low bid. Also, every state has an entirely different auction setup. When I’ve looked into it in the past, some just dump everything to a third-party auction site, some only do an in-person auction annually at a central auction house, and some have a snazzy dedicated auction site. Oh, and because it’s the US, states do it differently from the federal government. So it might take some research and digging around to find the most convenient option for wherever you are (which could just be making a friend in an IT department somewhere who will let you dumpster dive).


I just won an auction for 25 computers. What should I set up on them?
I placed a low bid on a government auction for 25 EliteDesk 800 G1s and unexpectedly won (ultimately paying less than $20 per computer). In the long run I plan on selling 15 or so of them to friends and family for cheap, I’ll probably have 4 running Proxmox (3 for a lab cluster and 1 for the always-on home server), and I’ll keep a few as spares and random desktops around the house where I could use one. But while I have all 25 of them, what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?

Edit to add: specs, based on the auction listing and looking up the computer model:

- 4th gen i5s (probably i5-4560s or similar)
- 8GB of DDR3 RAM
- 256GB SSDs
- Windows 10 Pro (no mention of licenses, so that remains to be seen)
- Looks like 3 PCIe slots (2 1x and 2 16x physically, presumably half-height)

Possible projects I plan on doing:

- Proxmox cluster
- Bare-metal Kubernetes cluster
- Harvester HCI cluster (which has the benefit of also being a Rancher cluster)
- Automated Windows image creation, deployment and testing
- Pentesting lab
- Multi-site enterprise network setup and maintenance
- Linpack benchmark, then compare to previous TOP500 lists

If somebody had told me five years ago about adversarial prompt attacks, I’d have told them they were horribly misled and didn’t understand how computers work. Yet here we are, and folks are using social engineering to get AI models to do things they aren’t supposed to.


This has been big on some of AMD’s workstation and server chips because Windows generally doesn’t know what to do with the unexpected NUMA Node layouts. Or the scheduler just can’t handle 128 cores. So abstracting that away with Linux’s superior scheduler can help significantly on certain hardware


I mean the flip side of this is that by doing nothing you’re letting them write the narrative. I feel like whatever this mess is, it’s starting to grow, so a more legitimate source calling it what it is can be helpful


This is a great writeup of errata related to this configuration! I am curious what kind of performance you’re seeing for DNS requests considering how old and anemic the first gen Pi is


Containers are great for consistency with web services, and help with avoiding (and providing an easy rollback in the case of) breakage with updates


Thunderbird has full MFA support for M365 accounts. It has to open the authentication page in a little window and I think has a shorter period to reauthenticate but otherwise works fine


That was actually just added in a recent update like a month ago! I’ve started using it a lot at work, but annoyingly you still have to go to the dedicated tasks webapp for full edit capabilities


I’ve been using it at work and it’s fine. Honestly, Outlook has so many strange features that date back a decade or two and have very few users that it makes sense to purge a bunch of them in favor of reducing technical debt.


This is the value I see in AI: letting human agents work way faster. An AI that is trained on your previous human-managed tickets and suggests the right queue, status and response, but still lets the human agents approve or rewrite the AI response before sending, would save a mountain of work for any kind of queue or chat support work.