Admin: lemux

Issues and Updates: !server_news

Find me:

mastodon: @minnix@upallnight.minnix.dev

matrix: @minnix:minnix.dev

peertube: @minnix@nightshift.minnix.dev

funkwhale: @minnix@allnightlong.minnix.dev

writefreely: @minnix@tech.minnix.dev

  • 4 Posts
  • 70 Comments
Joined 1Y ago
Cake day: Jun 22, 2023


I was asking them to post their setup so I could evaluate their experience with regard to Proxmox and disk usage.


There is no way to get acceptable IOPS out of HDDs within Proxmox. Your IO delay will be insane. You could at best stripe a ton of HDDs, but even then one enterprise-grade SSD will smoke it as far as performance goes. Post screenshots of your current Proxmox HDD/SSD disk setup with your ZFS pool, services, and IO delay, and then we can talk. The difference enterprise drives make is night and day.


Yes, you don’t need Proxmox for what you’re doing.




Looking back at your original post, why are you using Proxmox for NAS storage to begin with?


For ZFS, what you want is PLP (power-loss protection) and high DWPD/TBW endurance. That’s what enterprise SSDs provide. Everything you’ve mentioned so far points to you not needing ZFS, so there’s nothing to worry about.


Yes, I’m specifically referring to your ZFS pool containing your VMs/LXCs. Enterprise SSDs for that; get them on eBay. Just do a search on the Proxmox forums for enterprise vs consumer SSDs to see the problem with consumer hardware for ZFS. For Proxmox itself you want something like an NVMe drive with DRAM, deliberately underprovisioned so the drive controller has an unused-space buffer to use for wear leveling.
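For the underprovisioning, the usual approach is just to leave part of the drive unpartitioned so the controller has spare area to work with. The 20% below is a rule of thumb I’m assuming, not a vendor number:

```python
# Back-of-the-envelope overprovisioning math. The 20% reserve is a rule of thumb,
# not a spec -- adjust to taste.
drive_gb = 512
reserve = 0.20
usable_gb = drive_gb * (1 - reserve)
print(f"Partition about {usable_gb:.0f} GB and leave {drive_gb * reserve:.0f} GB unpartitioned")
```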


ZFS is great, but to take advantage of its benefits you need the right drives. Consumer drives get eaten alive, as @scrubbles@poptalk.scrubbles.tech mentioned, and your IO delay will be unbearable. I use Intel enterprise SSDs and have no issues.


I read long ago that you had to get malware onto the air-gapped machine to begin with, and then it’s only accessible within a few meters. It also can’t be accessed through walls. That was years ago though; maybe it’s changed since.


If it’s the same, then after installing Docker, creating a vaultwarden user, adding that user to the docker group, and creating your vaultwarden directories, all that’s left is to curl the install script and answer the questions it asks.
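A rough sketch of those prep steps, assuming a Debian/Ubuntu-style host with Docker already installed (the directory path here is just an example):

```python
# Sketch of the prep steps above on a Debian/Ubuntu-style host (run as root),
# assuming Docker is already installed. The data directory is just an example.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["useradd", "--create-home", "vaultwarden"])   # dedicated service user
run(["usermod", "-aG", "docker", "vaultwarden"])   # let it talk to the Docker daemon
run(["mkdir", "-p", "/opt/vaultwarden/data"])      # example data directory
# From there, curl the install script from the guide you're following and answer its prompts.
```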


I use Bitwarden and the setup was fairly standard with the helper script. I use my own isolated proxy for all my services, so that was already built. I haven’t used Vaultwarden, but if anyone who has used both can tell me the differences, I could maybe help out.


I would say that is not the best way to keep/restore backups as you are missing the integrity checking features of a true backup system. But honestly what really matters is how important the data is to you.


I did something similar when migrating to 8. Consumer SSDs suck with Proxmox, so I bought 2 enterprise SSDs on eBay before the migration and decided to do everything at once. I didn’t have all the moving parts you did, though. If you have an issue, you will more than likely not be able to pop the old SSDs back in and expect everything to work as normal. I’m not sure what you’re using to create backups, but if you’re not already, I would recommend PBS. That way, if there is an issue, restoring your VMs is trivial. As long as that PBS is up and running correctly (make sure to test-restore a backup before making any changes to confirm it works as intended) it should be OK. I have two PBS instances, one on-site and one off-site.

PBS will keep the correct IPs of your VMs so reconnecting NFS shares shouldn’t be an issue either.


I’ve run Jitsi for 4 years now. You can keep your personal variables in an environment file that doesn’t really change and pull down a new compose file whenever you want to update. Ever since the switch from the native install to Docker, it has been much easier to maintain. I’m using an LXC with Debian 12, 4 cores, and 4 GB of RAM. The only reason I’ve allocated that many resources is that we use it to record a podcast with anywhere from 4 to 10 people on the server at a time. As far as bitrate, resolution, etc., that’s all handled in your env file. You’d have to look at the docs to see what’s available for you to choose from.


Before you buy anything, put some of the same content that buffers on a USB stick or powered drive and play it directly from the Pi 4. Also, connect another PC to your router via Ethernet and check your download speed from the NFS share.
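If you want an actual number for the NFS read speed, something like this quick sketch works (the path is a placeholder; use a file bigger than the Pi’s RAM so the page cache doesn’t flatter the result):

```python
# Rough sequential-read throughput check against the NFS mount.
# The path is a placeholder -- point it at a large file on your share.
import time

path = "/mnt/nfs/some-large-file.mkv"
chunk_size = 4 * 1024 * 1024  # 4 MiB reads
total = 0

start = time.monotonic()
with open(path, "rb") as f:
    while chunk := f.read(chunk_size):
        total += len(chunk)
elapsed = time.monotonic() - start

print(f"Read {total / 1e6:.0f} MB in {elapsed:.1f} s -> {total / 1e6 / elapsed:.1f} MB/s")
```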



CPU is only one factor in the specs, and a small one at that. What kind of t/s performance are you getting with a standard 13B model?
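For reference, a quick way to measure it against a local Ollama instance, assuming the default API on port 11434 and that the model below (just an example name) is already pulled:

```python
# Quick tokens-per-second check via Ollama's /api/generate response stats.
# Assumes Ollama's default local API; the model name is an example.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama2:13b",
        "prompt": "Explain what ZFS is in one paragraph.",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    stats = json.load(resp)

# eval_count = generated tokens, eval_duration is reported in nanoseconds.
print(f"{stats['eval_count'] / (stats['eval_duration'] / 1e9):.1f} tokens/s")
```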


What are your laptop specs?


Ollama without a GPU is pretty useless unless you’re using it with Apple silicon. I’d just get rid of it until you get a GPU.



I guess I don’t understand. Did you follow the Docker installation directions correctly and it didn’t work, or did you modify the directions in a way you prefer and it didn’t work?


I’ve had it installed for a few years now. I started with the AIO but moved to the separate-container install after the AIO was deprecated. I imagine the install process is too complex for Portainer. https://docs.funkwhale.audio/stable/administrator/installation/docker.html

I did steps 1-4 and skipped the rest because I already have a proxy server running. I don’t remember anything related to snapd though. Mine is running in a Debian 11 VM on Proxmox instead of an LXC, but the process should be the same. They also have a Matrix channel for help: https://matrix.to/#/#funkwhale-support:matrix.org

From what I remember it was relatively painless to install, but upgrading can be a chore, especially this last upgrade. My main interest in FW was the federation aspect, as far as finding new music goes. If you don’t care about federation, maybe a simpler option would work better for you.


At the very least you need to install a webserver, and you need a proxy of some kind. If you truly want old school, you can just create HTML pages hosted from the root of your webserver (there are easier modern ways to do this now, but you might learn more doing it the classic way rather than using a CMS).
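If you just want to preview those pages locally before picking a real webserver, Python’s built-in server is enough for testing (not something to expose to the internet; "site" is a placeholder for wherever your HTML lives):

```python
# Serve a directory of static HTML locally for a quick preview.
# Testing only -- not a replacement for a real webserver behind your proxy.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="site")  # "site" = your HTML root
print("Previewing on http://127.0.0.1:8080")
HTTPServer(("127.0.0.1", 8080), handler).serve_forever()
```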

You will want a reverse proxy sitting between your webserver and the internet to handle SSL. Let’s Encrypt is a good option for generating a cert, so the only port you expose on your router between the internet and your webserver is 443. You’ll have to open port 80 to generate the cert, but you can close it again once it’s issued. Then you will have HTTPS.

That’s the basics. The how-to’s are easy to find online.


I’m not sure how soon you need this, but if you can wait, Sipeed has a $20 KVM with ATX control that should be out soon: https://lunar.computer/news/sipeed-announces-new-20-risc-v-kvm-device/


There’s an interesting book I read recently related to this called The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. I’d recommend it.

As a counterpoint, EFF put out this article today: The Surgeon General’s Fear-Mongering, Unconstitutional Effort to Label Social Media


Just keep in mind that even with a Jetson board you’ll need one of the higher-memory configurations, 32-64 GB like the Orin, to have a non-frustrating Stable Diffusion experience, and those aren’t cheap. The Nanos just don’t cut it without severe optimizations and very long generation times.


Elaborate on why samba is bad when it comes to security? Like list a bunch of links like this or write a paragraph summarizing them like a chatbot?


NFS does symlinks but they have to be configured correctly.

Samba may not have given you issues in the past, but it also doesn’t give you any security.



It seems to be quite a lot for the server it’s hosted on though (which is not the snappiest).

I’m working on it



It’s extremely light to run, and very easy to install and upgrade. I ran one just for myself without open registrations. The only con is that the community (self-hostable) version doesn’t allow JS due to “safety reasons”, so in order to have something like comments on your blog you have to either perform several janky CSS hacks or adjust the source code yourself. The only reason I chose WF was federation, but I eventually switched to standard WordPress with the federation plugin and now have comments and whatever else I want.


I see your point but you can call out China’s Uyghur genocide while still being against forced labor in other parts of the world.


I started using Frigate and thought about going the Coral route, but realized you don’t need one if you have a relatively recent Intel CPU (6th gen or newer), since OpenVINO with the iGPU is pretty much on par: https://github.com/blakeblackshear/frigate/discussions/5742

A lot of the newer SBCs are shipping with integrated NPUs/TPUs now as well. I would get a Coral if I were using an older SBC, RPi, or older PC as a camera server for object detection. Currently I have an ESP32-CAM watching a bird feeder; that feed goes to a modern server for bird-species recognition, but I could see a Coral as an option there.


I like Librum for reading. As far as finding books goes, https://lemmy.ml/c/piracy may know.


Traffic takes longer? You’re talking about milliseconds. Also, Wake-on-LAN has been baked into BIOSes and network cards for years, so there’s no need to waste power. Is the issue that you just don’t have a PC? Regardless, if this is the path you want to take, I think it’s a cool learning experience, and I’m interested in seeing how it turns out.
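If you do go the Wake-on-LAN route, sending the magic packet is trivial; here’s a minimal sketch (the MAC is a placeholder, and WOL has to be enabled in the BIOS/NIC first):

```python
# Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the target MAC
# repeated 16 times, broadcast over UDP. The MAC below is a placeholder.
import socket

mac = "AA:BB:CC:DD:EE:FF"
packet = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(packet, ("255.255.255.255", 9))  # port 9 ("discard") is the usual target
```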


But SearXNG is a search engine you access from a web browser. Why aren’t you hosting it at home and accessing it from your phone via URL?



I’m currently using it on v14, works fine.





This might be what you’re looking for https://lemux.minnix.dev/post/157074