Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
https://d.sb/
Mastodon: @dan@d.sb

  • 5 Posts
  • 679 Comments
Joined 1Y ago
Cake day: Jun 14, 2023


Depends on if they’re the same speed. Faster memory has shorter clock cycles.

It doesn’t really matter too much though. The system should just run all the memory at the slowest speed/timings.


It’s a much smaller scale but I use a Coral TPU with CodeProject AI to detect when people or animals are in front of my house. Works well with Blue Iris (NVR software for security cameras). I like it. That’s all the self-hosted AI I’ve got for now.


I wasn’t disagreeing with you :) or at least I think I wasn’t. I was just quoting the RFC you linked to.


GRUB does, but Raspberry Pis don’t use GRUB by default. You should be able to install it, but it’s not officially supported.


it makes connecting to localhost as easy as http://0:8080/ (for port 8080, but omit for port 80).

The thing is that it’s not supposed to work, so it’s essentially relying on undefined behaviour. Typing [::1]:8080 is nearly as easy.

skimming through these PRs, at least for WebKit, I don’t see tests for shorthand IPs like 0 (and no Apple device to test with). What are the chances they missed those…?

I haven’t seen the PRs, but IP comparison should really be using the binary form of the IPv4 address (a 32-bit number), not the human-friendly form.
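
To illustrate (a rough sketch, not code from the actual PRs - the helper name is made up):

```typescript
// Normalize an IPv4 address to its 32-bit numeric form before comparing.
function ipv4ToUint32(ip: string): number {
  const octets = ip.split(".").map(Number);
  if (octets.length !== 4 || octets.some((o) => !Number.isInteger(o) || o < 0 || o > 255)) {
    throw new Error(`Not a dotted-quad IPv4 address: ${ip}`);
  }
  // ">>> 0" forces the result back to an unsigned 32-bit value.
  return ((octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]) >>> 0;
}

// The whole 0.0.0.0/8 block is simply "first octet is 0", i.e. value < 2^24.
const inThisNetworkBlock = (ip: string) => ipv4ToUint32(ip) < 0x01000000;

console.log(inThisNetworkBlock("0.0.0.0"));   // true
console.log(inThisNetworkBlock("0.1.2.3"));   // true
console.log(inThisNetworkBlock("127.0.0.1")); // false
```

Note that this strict parser rejects shorthand like `0` - the URL spec's host parser is supposed to normalize those to the full numeric form first, which is exactly why blocking only the literal string "0.0.0.0" would miss them.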


The issue is that I don’t think its standard bootloader supports booting from RAID. I guess you could use a MicroSD for booting, then have everything else on the RAID1. There are unofficial ways to boot using GRUB, which should work with RAID1 too.


From that RFC:

0.0.0.0/8 - Addresses in this block refer to source hosts on "this"
network.  Address 0.0.0.0/32 may be used as a source address for this
host on this network; other addresses within 0.0.0.0/8 may be used to
refer to specified hosts on this network ([RFC1122], Section
3.2.1.3).

(note that it only says “source address”)

which was based on RFC 1122, which states:

We now summarize the important special cases for Class A, B,
and C IP addresses, using the following notation for an IP
address:

    { <Network-number>, <Host-number> }

or
    { <Network-number>, <Subnet-number>, <Host-number> }

...

(a)  { 0, 0 }

This host on this network.  MUST NOT be sent, except as
a source address as part of an initialization procedure
by which the host learns its own IP address.

See also Section 3.3.6 for a non-standard use of {0,0}.

(section 3.3.6 just talks about it being a legacy IP for broadcasts - I don’t think that even works any more)


Seems like a TCP/IP stack issue rather than a browser issue… 0.0.0.0 is not supposed to be a valid address (in fact, no IPv4 address with 0 as the first octet is a valid destination IP). The network stack should be dropping those packets.

0.0.0.0 is only valid in a few use cases. When listening for connections, it means “listen on all IPs”. This is a placeholder that the OS handles - it doesn’t literally use that IP. Also, it’s used as the source address for packets where the system doesn’t have an IP yet (eg for DHCP). That’s it.
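
For example (a minimal Node sketch of the "listen on all IPs" case, nothing browser-specific):

```typescript
import * as net from "node:net";

const server = net.createServer((socket) => {
  socket.end("hello\n");
});

// "0.0.0.0" here is a wildcard the OS expands to "every local interface";
// no packet on the wire is ever actually addressed to 0.0.0.0.
server.listen(8080, "0.0.0.0", () => {
  console.log("listening on all interfaces, port 8080");
});
```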


Get a USB to SATA cable. This one works great with the Pi: https://a.co/d/8Jv2Erj

Instead of having a warm spare, a better solution is to attach two drives and use them in a RAID1 config. Unfortunately, I don’t think the Pi supports RAID1.


Get a used minipc

Why, when the Pi is working fine? Just get a USB to SATA cable. This one works great with the Pi: https://a.co/d/8Jv2Erj


Dendrite is still in beta and isn’t feature-complete. I tried all three (Synapse, Dendrite and Conduit) and Conduit worked the best for me - I found it to be the most reliable and use the least amount of RAM. It also uses an embedded database (RocksDB) which makes setup a bit easier.

I tried joining several large Matrix rooms from my server, and the experience with Synapse was dreadful. It was using 100% of one core for long periods of time. In some cases it would just fall over and not join the room. Dendrite and Conduit are better in that regard.

Conduit’s weak point is its documentation. I had to read Synapse’s documentation to understand a few key concepts. I’ve been meaning to help write docs for Conduit but just haven’t had time. I’ve got a PR to improve the styling of the docs at least, but need to do some tweaks to it.


Protip: Use Conduit instead of Synapse. It’s significantly lighter than Synapse, easier to run, and I guess you can be a cool kid by running something written in Rust. The documentation is even worse though :/ https://conduit.rs/


At the end of the day, someone has to pay for it. Either the users pay, or Immich’s developers pay, or a map provider pays (by offering it for free and covering the costs).


Ahh, I see. I’m usually on desktop PCs and use solar power at home, so the efficiency is less of a concern.

AFAIK the “auto dark mode” in Chrome is experimental and doesn’t work well on all sites. Have you tried Dark Reader on Firefox? More and more sites are adding native dark mode, too.


Oh yeah, that’s a good catch. Hosting their own proxy/CDN in front of OSM should be doable though.


What performance issues do you have with Fx? I use it daily and don’t really feel like it’s slower than Chrome.


Wondering the same thing… I’ve been meaning to try it.

I’m using PhotoStructure at the moment. It’s not as feature-rich, and the best features are only available on paid subscriptions, but it’s a solid, reliable piece of software. That’s what I want - a focused piece of software that favours stability over feature creep. Its deduplication is the best I’ve seen. The developer works on it full time, which is one of the reasons it has paid subscriptions (to make that sustainable).


they were causing too much load on OSM’s servers.

They could host their own caching proxy between OSM and their users though.
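
Something like this nginx sketch could be a starting point (hostnames, cache sizes, and the User-Agent are placeholders; OSM’s tile usage policy has its own requirements):

```nginx
proxy_cache_path /var/cache/nginx/tiles levels=1:2 keys_zone=tiles:50m
                 max_size=10g inactive=30d;

server {
    listen 80;  # add TLS in real use
    server_name tiles.example.com;

    location / {
        proxy_pass https://tile.openstreetmap.org;
        proxy_ssl_server_name on;
        proxy_cache tiles;
        proxy_cache_valid 200 7d;
        # Identify the proxy, per OSM's tile usage policy.
        proxy_set_header User-Agent "example-tile-proxy/1.0 (admin@example.com)";
    }
}
```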

Also, Home Assistant uses OpenStreetMap and they have more users than Immich does.

Edit: Home Assistant does use OSM data, but they use it via another third-party called CARTO, who at least have a proper site: https://carto.com/basemaps. Tiles come from URLs like https://basemaps.cartocdn.com/rastertiles/voyager/12/657/1580@2x.png


He had a very high rate, ~50% of CPUs in systems that he looked at were affected

Note that I think this was with the data center samples, which run the systems 24/7. The prevalence isn’t as high with regular consumer use (but still way too high). The data centers also didn’t have any problems at all with the 12900K.


This is what I do:

  • Stuff that’s critical runs on VPSes running Debian stable. Things like my websites, email, authoritative DNS, etc. The VPS providers I use have nicer hardware than me (modern AMD EPYC servers, enterprise NVMe drives in RAID10 with warm spares, 40Gbps networking, etc)
  • Other stuff is on a home server running Unraid. It has a Core i5-13500 with a W680 motherboard, 2 x 2TB NVMe drives in ZFS mirror, 2 x 20TB Seagate Exos drives in ZFS mirror for data storage, and 1 x 14TB WD Purple Pro for security camera recordings.
  • I have a Raspberry Pi with a few things on it, like a second copy of my recursive DNS server, AdGuard Home (so the internet doesn’t break if I need to shut down “the main server”).

I was thinking of running several servers at home, but right now I’m just running one main one. I don’t have much space and it’s running fine for me for now. Power is expensive here. I’ve got solar power, but I get 1:1 credits for excess solar power, so I’d rather save it for other things.


It’s a DRM system called Widevine, currently maintained by Google. It ships in Chromium/Chrome and Firefox as a closed-source binary blob. Firefox asks you before running it, since you may not want to run proprietary code.

You can tell Firefox not to run it, or disable it in the plugins. Instructions are here: https://support.mozilla.org/en-US/kb/enable-drm#w_disable-the-google-widevine-cdm-without-uninstalling. Regular videos will still play properly, but videos with DRM will fail to play. Note that practically every paid streaming service uses Widevine DRM, so if you disable it, none of them will work any more.

Plex are likely using it for their free streaming content. I’d guess that they’ve licensed it only for streaming, and need to enforce that users can’t download or record it.

Your own Plex content does not use DRM, so if you’re only using Plex for your own content, it’s fine to block Widevine from running.


Apparently macOS apps can be sandboxed and can store data in an encrypted format such that no other apps can access it. I wasn’t aware of this either, but it’s been a while since I’ve used macOS. It sounds like the ChatGPT app explicitly opted out of this sandboxing model.


On Linux, input-remapper usually works pretty well to remap the extra buttons. I wonder if it’d work on this AI button.


In E2E tests you should ideally be finding elements using labels or ARIA roles. The point of an E2E test is to use the app in the same way a user would, and users don’t look for elements by class name or ID, and definitely not by data-testid.

The more your test deviates from how real users use the system, the more likely it is that the test will break even though the actual user experience is fine, or vice versa.

This is encouraged by Testing Library and related libraries like React Testing Library. Those are for unit and integration tests though, not E2E tests. I’m not as familiar with the popular E2E testing frameworks these days (we use an internally developed one at work).
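
For what it’s worth, a rough example of label/role-based queries with React Testing Library (the LoginForm component and its strings are made up):

```tsx
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import "@testing-library/jest-dom";
import { LoginForm } from "./LoginForm"; // hypothetical component

test("user can sign in", async () => {
  render(<LoginForm />);

  // Find controls by accessible label and role, the same way a user
  // (or a screen reader) would - no class names, IDs, or data-testids.
  await userEvent.type(screen.getByLabelText(/email/i), "user@example.com");
  await userEvent.type(screen.getByLabelText(/password/i), "hunter2");
  await userEvent.click(screen.getByRole("button", { name: /sign in/i }));

  expect(await screen.findByText(/welcome back/i)).toBeInTheDocument();
});
```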


In an alternate reality, we’d all be using JSSS, which was even worse. https://en.wikipedia.org/wiki/JavaScript_Style_Sheets


I self-host mine using Mailcow, but I use an outbound SMTP relay for sending email so I don’t have to deal with IP reputation.


I solved this by installing solar panels. They produce more electricity than I need (enough to cover charging an EV when I get one in the future), and I should break even (in terms of cost) within 5-6 years of installation. Had them installed last year under NEM 2.0.

I know PG&E want to introduce a fixed monthly fee at some point, which throws off my break-even calculations a bit.

Some VPS providers have good deals and you can often find systems with 16GB RAM and NVMe drives for around $70-100/year during LowEndTalk Black Friday sales, so it’s definitely worth considering if your use cases can be better handled by a VPS. I have both - a home server for things like photos, music, and security camera footage, and VPSes for things that need to be reliable and up 100% of the time (websites, email, etc)


I think it’s so people here can give themselves a pat on the back for self-hosting lol.

Like how the Linux Lemmy community has so many “Windows is bad, Linux is good” posts. Practically everyone in there already knows that Linux is good.


What is a “top” story on Lemmy, given everyone subscribes to different communities? Is it the most popular across all communities?


Samsung have gotten better with updates. In 2021, they promised all new models would receive four years of updates (which helped the industry because other brands started matching them), and they bumped it to seven years with this year’s S24 series.

Samsung and LG appliances are a mixed bag. Some are horrible, like their fridges (which are some of the worst available today), but some are fantastic, like LG’s washing machines (which rank #2 in reliability behind Speed Queen).


They’re copying Apple, which has similar clauses. They’re all going to copy Apple, unfortunately: claim to support independent repair stores, while in reality placing so many restrictions and requirements on them.


Syslog isn’t really overkill IMO. It’s pretty easy to configure it to log to a remote server, and to split particular log types or sources into different files. It’s a decent abstraction - your app that logs to syslog doesn’t have to know where the logs are going.
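
For example, a rough rsyslog.conf sketch (the hostname and facility are placeholders):

```
# Forward everything to a central log server (@@ = TCP, @ = UDP).
*.*  @@logs.example.com:514

# Split one app's logs (sent to the local1 facility) into its own file,
# then stop so they don't also land in the catch-all files.
local1.*  /var/log/myapp.log
& stop
```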


The amount of clients that are missing basic events like "you’ve run out of disk space"

For my personal servers, I use Netdata for this. Works pretty well.


Software that runs on embedded systems usually benefits from being small, too.


I didn’t think any JSON parsers used regex given how simple the grammar is… but I’ve seen some horrors, so I shouldn’t rule it out.



Did you try the first app I linked to? I can’t try it since I’m away from my computer for a few days.



What type of data are you looking for? Does http://www.nirsoft.net/utils/network_usage_view.html suit your use case? There’s similar data somewhere in the modern settings app too.

There are also performance counters for real-time data (bytes sent and received): https://learn.microsoft.com/en-us/windows-server/networking/technologies/network-subsystem/net-sub-performance-counters. You can use these in any tool that supports performance counters. Performance Monitor, an app that comes with Windows, can read these counters.


A proxy is no less secure than a VPN, assuming it’s using encryption like TLS. It’s not as good for torrents since you can’t port forward, but fundamentally, people that use commercial VPNs are using them just like a proxy. Some providers like NordVPN do offer HTTPS proxies in addition to their VPN service.


Lighter weight replacements for Sentry bug logging
I love Sentry, but it's very heavy. It runs close to 50 Docker containers, some of which use more than 1GB RAM each. I'm running it on a VPS with 10GB RAM and it barely fits on there. They used to say 8GB RAM is required but [bumped it to 16GB RAM](https://github.com/getsentry/self-hosted/pull/2585) after I started using it.

It's built for large-scale deployments and has a nice scalable enterprise-ready design using things like Apache Kafka, but I just don't need that since all I'm using it for is tracking bugs in some relatively small C# and JavaScript projects, which may amount to a few hundred events per week if that. I don't use any of the fancier features in Sentry, like the live session recording / replay or the performance analytics.

I could move it to one of my 16GB or 24GB RAM systems, but instead I'm looking to evaluate some lighter-weight systems to replace it. What I need is:

- Support for C# and JavaScript, including mapping stack traces to original source code using debug symbols for C# and source maps for JavaScript.
- Ideally supports React component stack traces in JS.
- Automatically group the same bugs together, if multiple people hit the same issue
- See how many users are affected by a bug
- Ignore particular errors
- Mark a bug as "fixed in next release" and reopen it if it's logged again in a new release
- Associate bugs with GitHub issues
- Ideally supports login via OpenID Connect

Any suggestions? Thanks!

Looking for simple analytics (similar to Plausible) that supports cookies
Google Analytics is broken on a bunch of my sites thanks to the GA4 migration. Since I have to update everything anyways, I'm looking at the possibility of replacing Google Analytics with something I self-host that's more privacy-focused.

I've tried Plausible, Umami and Swetrix (the latter of which I like the most). They're all very lightweight and most are pretty efficient due to their use of a column-oriented database (Clickhouse) for storing the analytics data - makes way more sense than a row-oriented database like MySQL for this use case.

However, these systems are all cookie-less. This is *usually* fine, however one of my sites is commonly used in schools on their computers. Cookieless analytics works by tracking sessions based on IP address and user-agent, so in places like schools with one external IP and the same browser on every computer, it just looks like one user in the analytics. I'd like to know the actual number of users.

I'm looking for a similarly lightweight analytics system that does use cookies (first-party cookies only) to handle this particular use case. Does anyone know of one? Thanks!

Edit: it doesn't have to actually be a cookie - just being able to explicitly specify a session ID instead of inferring one based on IP and user-agent would suffice.

ATX case with room for 5 hard drives
I'm replacing an SFF PC (HP ProDesk 600 G5 SFF) I'm using as a server with a larger one that'll function as a server and a NAS, and all I want is a case that would have been commonplace 10-15 years ago:

- Fits an ATX motherboard.
- Fits at least 4-5 hard drives.
- Is okay sitting on its side instead of upright (or even better, is built to be horizontal) since it'll be sitting on a wire shelving unit (replacing the SFF PC here: https://upvote.au/post/11946)
- No glass side panel, since it'll be sitting horizontally.
- Ideally space for a fan on the left panel

It seems like cases like this are hard to find these days. The two I see recommended are the Fractal Design Define R5 and the Cooler Master N400, both of which are quite old. The Streacom F12C was really nice but it's long gone now, having been discontinued many years ago.

Unfortunately I don't have enough depth for a full-depth rackmount server; I've got a very shallow rack just for networking equipment.

Does anyone have recommendations for any cases that fit these requirements? My desktop PC has a Fractal Design Define R4 that I bought close to 10 years ago... I'm tempted to just buy a new case for it and repurpose the Define R4 for the server.

NAS vs larger server
Sorry for the long post. tl;dr: I've already got a small home server and need more storage. Do I replace an existing server with one that has more hard drive bays, or do I get a separate NAS device?

________

I've got some storage VPSes "in the cloud":

* 10TB disk / 2GB RAM with HostHatch in LA
* 100GB NVMe / 16GB RAM with HostHatch in LA
* 3.5TB disk / 2GB RAM with Servarica in Canada

The 10TB VPS has various files on it - offsite storage of alert clips from my cameras, photos, music (which I use with Plex on the NVMe VPS via NFS), other miscellaneous files (using Seafile), backups from all my other VPSes, etc. The 3.5TB one is for a backup of the most important files from that.

The issue I have with the VPSes is that since they're shared servers, there's limits in terms of how much CPU I can use. For example, I want to run PhotoStructure for all my photos, but it needs to analyze all the files initially. I limit Plex to maximum 50% of one CPU, but limiting things like PhotoStructure would make them way slower.

I've had these for a few years. I got them when I had an apartment with no space for a NAS, expensive power, and unreliable Comcast internet. Times change... Now I've got a house with space for home servers, solar panels so running a server is "free", and 10Gbps symmetric internet thanks to [a local ISP, Sonic](https://www.sonic.com/).

Currently, at home I've got one server: A [HP ProDesk SFF PC](https://support.hp.com/us-en/document/c06388056) with a Core i5-9500, 32GB RAM, 1TB NVMe, and a single 14TB WD Purple Pro drive. It records my security cameras (using Blue Iris) and runs home automation stuff (Home Assistant, etc). It pulls around 41 watts with its regular load: 3 VMs, ~12% CPU usage, constant ~34Mbps traffic from the security cameras, all being written to disk.

So, I want to move a lot of these files from the 10TB VPS into my house. 10TB is a good amount of space for me, maybe in RAID5 or whatever is recommended instead these days. I'd keep the 10TB VPS for offsite backups and camera alerts, and cancel the other two.

Trying to work out the best approach:

1. **Buy a NAS**. Something like a QNAP TS-464 or Synology DS923+. Ideally 10GbE since my network and internet connection are both 10Gbps.
2. **Replace my current server with a bigger one**. I'm happy with my current one; all I really need is something with more hard drive bays. The SFF PC only has a single drive bay, its motherboard only has a single 6Gbps SATA port, and the only PCIe slots are taken by a 10Gbps network adapter and a Google Coral TPU.
3. **Build a NAS PC and use it alongside my current server**. TrueNAS seems interesting now that they have a Linux version (TrueNAS Scale). Unraid looks nice too.

Any thoughts? I'm leaning towards option 2 since it'll use less space and power compared to having two separate systems, but maybe I should keep security camera stuff separate? Not sure.

I couldn't find a "Home Networking" community, so this seemed like the best place to post :)

My house has this small closet in the hallway and thought it'd make a perfect place to put networking equipment. I got an electrician to install power outlets in it, ran some CAT6 myself (through the wall, down into the crawlspace, to several rooms), and now I finally have a proper networking setup that isn't just cables running across the floor.

The rack is a basic StarTech two-post rack ([https://www.amazon.com/gp/product/B001U14MO8/](https://www.amazon.com/gp/product/B001U14MO8/)) and the shelving unit is an AmazonBasics one that ended up perfectly fitting the space ([https://www.amazon.com/gp/product/B09W2X5Y8F/](https://www.amazon.com/gp/product/B09W2X5Y8F/)).

In the rack, from top to bottom (prices in US dollars):

* TP-Link ER8411 10Gbps router. My main complaint about it is that the eight 'RJ45' ports are all Gigabit, and there's only two 10Gbps ports (one SFP+ for WAN, and one SFP+ for LAN). It can definitely reach 10Gbps NAT throughput though. $350
* Wiitek SFP+ to RJ45 module for connecting Sonic's ONT (which only has an RJ45 port), and 10Gtek SFP+ DAC cable to connect router to switch.
* MikroTik CRS312-4C+8XG-RM managed switch (runs RouterOS). 12 x 10Gbps ports. I bought it online from Europe, so it ended up being ~$520 all-in, including shipping.
* Cable Matters 24-port keystone patch panel.
* TP-Link TL-SG1218MPE 16-port Gigabit PoE switch. 250 W PoE power budget. Used for security cameras - three cameras installed so far.
* Tripp Lite 14 outlet PDU.

Other stuff:

* AdTran 622v ONT provided by my internet provider (Sonic), mounted to the wall.
* HP ProDesk 600 G5 SFF PC with Core i5-9500. Using it for a home server running Home Assistant, Blue Iris, Node-RED, Zigbee2MQTT, and a few other things. Bought it off eBay for $200.
* Sonoff Zigbee dongle plugged in to the front USB port
* (next to the PC) Raspberry Pi 4B with SATA SSD plugged in to it. Not doing anything at the moment, as I migrated everything to the PC.
* (not pictured) Wireless access point is just a basic Netgear one I bought from Costco a few years ago. It's sitting on the top shelf. I'm going to replace it with a TP-Link Omada ceiling-mounted one once their wifi 7 access points have been released.

Speed test: [https://www.speedtest.net/my-result/d/3740ce8b-bba5-486f-9aad-beb187bd1cdc](https://www.speedtest.net/my-result/d/3740ce8b-bba5-486f-9aad-beb187bd1cdc)

Edit: Sorry, I don't know why the image is rotated :/ The file looks fine on my computer.