Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)

  • 3 Posts
  • 176 Comments
Joined 4Y ago
Cake day: Jun 25, 2020


TS is a lot easier to set up than WG and does not require a publicly accessible IP address nor any open ports whatsoever. It’s not really comparable to setting WG up yourself, especially w.r.t. security.


It’s a central server (that you could actually self-host publicly if you wanted to) whose purpose is to facilitate P2P connections between your devices.

If you were outside your home network and wanted to connect to your server from your laptop, both devices would be connected to the TS server independently. When attempting to send IP packets between the devices, the initiating device (i.e. your laptop) would establish a direct WireGuard tunnel to the receiving device. This process is managed by the individual devices while the central TS service merely facilitates communication between the devices for the purpose of establishing this connection.
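
You can check which path is in use from either device; a minimal sketch (the hostname is a placeholder):

```
tailscale ping my-server   # reports whether packets flow direct or via a relay
tailscale status           # lists peers and their current connection paths
```

If no direct tunnel can be established (e.g. hard NAT on both ends), traffic falls back to Tailscale’s encrypted DERP relays.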


If you’re worried about that, I can recommend a service like Tailscale which does not require permanently open ports to the outside world, offering quite a bit more security than an exposed traditional VPN server.


Yes, yes they will. If you’re the sole user, they’d identify you from your behaviour anyways.

I don’t think an internet proxy will help very much w.r.t. privacy, but it will make you a lot more susceptible to being blocked.



Certainly better than the U.S. in that regard but I wouldn’t consider Germany “resilient” either.


Whether this is bad depends on your threat model. Additionally, you must also consider that other search engines are able to easily identify you without you explicitly identifying yourself. If you can’t fool https://abrahamjuliot.github.io/creepjs/, you certainly can’t fool Google for instance. And that’s even ignoring the immense identifying potential of user behaviour.

Billing supports OpenNode AFAICT, which I guess you could funnel your Moneros through, but meh.

Edit: Phrasing.



> I personally have not found Kagi’s default search results to be all that impressive

At their worst, they’re as bad as Google’s. For me, however, this is a great improvement over using Bing/Google proxies, which would be the alternative.

> maybe if I took the time to customize, I might feel differently.

That’s the killer feature IMHO.


Your search results look very different to mine:

Did you disable Grouped Results?

All the LLM-generated “top 10” listicles are grouped into one large block I can safely ignore. (I could hide them entirely but the visual grouping allows for easy mental filtering, so I haven’t bothered.) Your weird top 10 fake site does not show up.

But yes, as the linked article says, Kagi is primarily a proxy for Google with some extra on top. This is, unfortunately, a feature as Google’s index still reigns supreme for general purpose search. It absolutely is bad and getting worse but sadly still the best you can get. Using only non-Google indices would just result in bad search results.
The Google-ness is somewhat mitigated by Kagi-exclusive features such as the LLM garbage grouping.

What Google also cannot do is highlighted in my screenshot: You can customise filtering and ranking.
The first search result is a Reddit thread with some decent discussion because I configured Kagi to prefer Reddit search results. In the case of household appliances, this doesn’t do a whole lot as I have not researched trusted/untrusted sources in this field yet but it’s very noticeable in fields like programming where I have manually ranked sites.

Kagi is not “all about” privacy. It’s a factor, sure, but ultimately you still have to trust a U.S. company. Better than “trusting” a known abuser (Google, M$), but without an external audit, I wouldn’t put too much weight into this.
The index ain’t it either, as it’s mostly Google, though sometimes a bit better.
What really sets it apart is the features. Customised ranking as well as blocking some sites outright (bye bye pinterest and userbenchmark) are immensely useful. So is filtering out the garbage results that Google still likes to return.


That whole situation was such an overblown idiotic mess. Kagi has always used indices from companies that do far more unethical things than committing the extreme crime of having a CEO who has stupid opinions on human rights.
I 100% agree with Vlad’s response to this whole thing and anyone who thinks otherwise should question what exactly it is they’re criticising.

I don’t like Brave (super shady IMHO) and certainly not their CEO but I didn’t sign up for a 100% ethically correct search engine, I signed up for a search engine with innovative features and good search results. The only viable alternatives are to use 100% not ethically correct search indices with meh (Google) to bad (Bing, DDG) search results. If you’re going to tell me how Google and M$ are somehow ethical, I’m going to have to laugh at you.

The whole argument amounts to whining about the status quo and bashing the one company that tries anything to change it. The only way to get away from the Google monopoly is alternative indices. Yes those alternatives may not be much more ethical than friggin Google. So what.


> I do like the idea of using USB drives for storage, though…

I wholeheartedly don’t.


They are quite solid but be aware that the web UI is dog slow and the menus weirdly designed.


Well that depends on how you define malware ;)


That is just a specific type of drive failure and only certain software RAID solutions are able to even detect corruption through the use of checksums. Typical “dumb” RAID will happily pass on corrupted data returned by the drives.
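
For illustration, checksumming filesystems like ZFS and Btrfs can verify every block on demand (pool/mount names are placeholders):

```
zpool scrub tank              # ZFS: re-read all data and verify checksums
zpool status tank             # shows any checksum errors found
btrfs scrub start /mnt/data   # Btrfs equivalent
```

Conventional RAID can at best notice that two copies disagree, not which one is correct.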

RAID only serves to prevent downtime due to drive failure. If your system has very high uptime requirements and a drive just dropping out must not affect the availability of your system, that’s where you use RAID.

If you want to preserve data however, there are much greater hazards than drive failure: Ransomware, user error, machine failure (PSU blows up) and facility failure (basement flooded) are all similarly likely. RAID protects against exactly none of those.

Proper backups do provide at least decent mitigation against most of these hazards in addition to failure of any one drive.

If you love your data, you make backups of it.

With a handful of modern drives (<~10) and a restore time of 1 week, you can expect storage uptime of >99.68%. If you don’t need more than that, you don’t need RAID. I’d also argue that if you do indeed need more than that, you probably also need higher uptime in components other than the drives, via redundant computers, at which point the benefit of RAID in any one of those computers diminishes.
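
For the curious, a rough sketch of where that figure comes from (assuming an annual failure rate of about 1.7% per drive, roughly in line with published HDD statistics):

```
expected failures ≈ 10 drives × 0.017/yr ≈ 0.17/yr
expected downtime ≈ 0.17/yr × 1 week ≈ 0.33% of the year
expected uptime   ≈ 100% − 0.33% ≈ 99.67%
```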


Without any cold hard data, this isn’t worth discussing.


The problem is that it’s not just 15W; I merely used that as an example of how even just two “low power” devices can cause an effect that you can measure in euros rather than cents.


Yes. Low power draws add up. 5W here, 10W there, and you’re already looking at >3€ per month.
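
For reference, the arithmetic behind that (assuming ~0.35€/kWh, a typical German price):

```
(5 W + 10 W) × 24 h × 30 days = 10.8 kWh/month
10.8 kWh × 0.35 €/kWh ≈ 3.80 €/month
```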


You probably could. Though I don’t see the point in powering a home server over PoE.

A random SBC in the closet? WAP? Sure. Not a home server though.


If you’re using containers for everything anyways, the distro you use doesn’t much matter.

If Ubuntu works for you and switching away would mean significant effort, I see no reason to switch outside of curiosity.


The operating system is explicitly not virtualised with containers.

What you’ve described is closer to paravirtualisation where it’s still a separate operating system in the guest but the hardware doesn’t pretend to be physical anymore and is explicitly a software interface.


Do you have a media center and/or server already? It’s a bit overkill for the former but would be well suited as the latter with its dedicated GPU that your NAS might not have/you may not want to have in your NAS.


Glad I could save you some money :)


> NixOS packages only work with NixOS system. They’re harder to setup than just copying a docker-compose file over and they do use container technology.

It’s interesting how none of that is true.

Nixpkgs works on practically any Linux kernel.

Whether NixOS modules are easier to set up and maintain than unsustainably copying docker-compose files is subjective.

Neither Nixpkgs nor NixOS use container technology for their core functionality.
NixOS has the nixos-container framework to optionally run NixOS inside of containerised environments (systemd-nspawn) but that’s rather niche actually. Nixpkgs does make use of bubblewrap for a small set of stubborn packages but it’s also not at all core to how it works.

Totally beside the point though; even if you don’t think NixOS is simpler, that still doesn’t mean containers are the only possible means by which you could achieve “easy” deployments.

> Also without containers you don’t solve the biggest problems such as incompatible database versions between multiple services.

Ah, so you have indeed not even done the bare minimum of research into what Nix/NixOS are before you dismissed it. Nice going there.

> as robust in terms of configurations

Docker compose is about the opposite of a robust configuration system.


This is a false dichotomy. Just because containers make it easy to ship software doesn’t mean other means can’t be equally easy.

NixOS achieves a greater ease of deployment than docker-compose and the like without any containers involved for instance.


I would not buy a CPU without seeing a real-world measurement of idle total system power consumption if you’re concerned about energy (and therefore cost) efficiency in any way. Especially on desktop platforms, where manufacturers historically do not care one bit about efficiency. You could easily spend hundreds of € every year if it’s bad. I was not able to find any measurements for that specific CPU.

> Be faster at transcoding video. This is primarily so I can use PhotoPrism for video clips. Real-time transcoding 4K 80mbps video down to something streamable would be nice. Despite getting QuickSync to work on the Celeron, I can’t pull more than 20fps unless I drop the output to like 640x480.

That shouldn’t be the case. I’d look into getting this fixed properly before spending a ton of money on new hardware that you may not actually need. It smells to me like the encode or decode part isn’t actually being done in hardware here.

What codec and pixel format are the source files?
How quickly can you decode them? Try running ffmpeg manually with VAAPI decode and a null sink on the files in question.
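
Something along these lines, as a sketch (device path and filename are placeholders):

```
# decode-only benchmark: hardware decode via VAAPI, discard the frames
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -i input.mkv -f null -
```

If the reported speed= barely exceeds 1x, decoding is your bottleneck.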

What codec are you trying to transcode to? Apollo Lake can’t encode HEVC 10 bit. Try encoding a testsrc (testsrc=duration=10:size=3840x2160:rate=30) to AVC 10 bit or HEVC 8 bit.
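
A sketch of the HEVC 8 bit variant (the render device path is an assumption):

```
# encode benchmark: synthetic 4K30 test pattern → HEVC 8-bit via VAAPI
ffmpeg -f lavfi -i testsrc=duration=10:size=3840x2160:rate=30 \
  -vaapi_device /dev/dri/renderD128 -vf 'format=nv12,hwupload' \
  -c:v hevc_vaapi -f null -
```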


Have you considered using Oracle’s free VPS tier? Should be more than powerful enough to host a read-only Lemmy instance.

It’s not ideal but if you’re short on money, it’s better than having your online data rot.


Depends on how many other users are using the same proxy. If you host piped for yourself using your home internet connection, Google will absolutely know who is watching the video.


Yes, it’s called email. Run

git send-email

as Linus intended.
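
A minimal sketch of what that looks like (the list address is hypothetical):

```
# email the latest commit as a patch, straight from git
git send-email --to=devs@lists.example.org -1
```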


USB is not really a reliable connector for storage purposes. I’d highly recommend against USB.


A modem is a sort of “adapter” between physical mediums and protocols and sometimes also a router. It speaks DSL, fibre, cable etc. on one end and Ethernet on the other.

A wireless access point is similar in that it also is an “adapter” between mediums, but between wired and wireless. It effectively connects wireless devices to your wired Ethernet network (allowing communication in both directions) and never does any routing.

What you are typically provided by an ISP is an all-in-one box that contains modem, router, switch, firewall, wireless access point, DHCP server, DNS resolver and more in one device. For a home network, I wouldn’t want most of these to be separate devices either, but at least wireless should be separate because the point of connection for the modem is likely not the location where you need the WiFi signal the most.


You’re looking for a wireless access point then, not a modem.


Nothing I host is internet-accessible. Everything is accessible to me via Tailscale though.


My setup already goes quite a bit beyond basic file hosting.

There is no self-hosted service I can imagine needing that I’d expect to be unable to host due to CPU constraints. I think I’ll run into RAM constraints first; it’s already at 3GiB after boot.


> That’s impressive.

Yeah, you really don’t need a lot of CPU power for selfhosting.

It’s a J4105, forgot to mention that.

> What do you use the system for? And services like PiHole or media server?

Oh, sorry, forgot to add that bit.

It’s mainly a NAS housing my git-annex repos that I access via SSH.

I also host a few HTTP services on it:

https://github.com/Atemu/nixos-config/blob/ee2d85dc3665ae3cad463a3eb132f806651fe436/configs/SOTERIA/default.nix#L57-L75

The services I use most here are Paperless and Piped.

Mealie will be added to that list as soon as the upstream PR lands which might be later this evening.

My Immich module is almost ready to go but the Immich app has a major bug preventing me from using it properly, so that’s on hold for now.

I do want to set up Jellyfin in the not too distant future. The machine should handle that just fine with its iGPU as Intel’s QuickSync is quite good and I probably won’t even need transcoding for most cases either.

I probably won’t be able to get around setting up Nextcloud for much longer. I haven’t looked into it much but I already know it’s a beast. What I primarily want from it is calendar and contact synchronisation but I’d also like to have the ability to share files or documents with mere mortals such as my SO or family.
The NixOS module hopefully abstracts away most of the complexity here but still…


I use an Intel SBC with a 10W TDP CPU in it. With an HDD and after PSU inefficiency, it draws about 10-20W depending on the load.


Infiltrate a movie studio I guess?

On a more serious note: There are some theoretical use-cases for this in a home lab setting if you “enhance” your video in some way server-side and want to send it to a client without loss.

What I had actually intended with the original question is to figure out what OP was actually doing.


A 90min raw 4K movie is well over 4TB in size and does not stream fine over 500Mb/s. Your 80GB “RAW” 4K movie is compressed lossily.
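
Back of the envelope (assuming UHD, 12-bit 4:4:4, 24 fps, as one plausible reading of “raw”):

```
3840 × 2160 px × 3 ch × 12 bit ≈ 37.3 MB/frame
× 24 fps ≈ 896 MB/s (≈ 7.2 Gb/s)
× 90 min ≈ 4.8 TB
```

Even at 8-bit 4:2:0 you’d still be around 1.6 TB and 2.4 Gb/s, far beyond 500 Mb/s.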


WDYM by “these”? I’m specifically talking about uncompressed (raw) video.

If configured, Jellyfin will transcode videos for compatibility with the playback device.


If you take a look at my calculation, I’m assuming 24fps because this is a movie.


Actual: How to import data with proper readable payee?
cross-posted from: https://lemmy.ml/post/11150038

> I'm trying out Actual and have imported my bank's (Sparkasse) data for my checking account via CSV. In the CSV import, I obviously had to set the correct fields and was a bit confused because Actual only has the "Payee" field while my CSVs have IBAN, BIC and a free text name (i.e. "Employer GmbH").
>
> The IBAN is preferable because it's a unique ID while the free text name can be empty or possibly even change(?). (Don't know how that works.)
> OTOH, the free text name is preferable because I (as a human) can use it to infer the actual payee while the IBANs are just a bunch of numbers.
>
> Is it possible to use the IBAN as well as the free text name, or have a mapping between IBAN and a display name?
>
> How do you handle that?


How do you encode your paper scans?
I assume many of you host a DMS such as Paperless and use it to organise the dead trees you still receive in the snail mail for some reason in the year of the lord 2023.

How do you encode your scans?

JPEG is pretty meh for text even at better quantisation levels (“dirty” artefacts everywhere) and PNGs are quite large. More modern formats don’t go into a PDF, which means multiple pages aren’t possible (at least not in Paperless).

Discussion on GH: https://github.com/paperless-ngx/paperless-ngx/discussions/3756
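
To make the tradeoff concrete, here’s the kind of comparison I mean (tool choice is just an example):

```
# same 300dpi scan, three ways
magick scan.tiff -quality 85 scan.jpg   # small, but ringing artefacts around text
magick scan.tiff scan.png               # clean text, several times the size
img2pdf scan.jpg -o scan.pdf            # wrap into a PDF without recompressing
```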