I am an anarcho-communist, cat-loving dude with a very eclectic range of interests and passions. Currently, I’m into networks of all kinds and open source software.

  • 1 Post
  • 59 Comments
Joined 1Y ago
Cake day: Jul 08, 2023


Thanks, mate! I’ll take all I can get.


Yes, I did get the processor off eBay for 75 bucks and the motherboard for 150. I figured that was a score, and since it shipped from Taiwan it didn’t take forever to get here. The drives, case, RAM, and low-end GPU I got from the flea markets.


It’s a Proxmox server that’s well under-subscribed and under-utilized. I currently use it as a remote backup for my brother’s business computer and the family’s various machines. It has one Arch Linux VM for that purpose. Another Arch Linux VM runs two Docker containers for Mastodon and Lemmy.

I want to do more with it but right now it’s time for me to buckle down and actually take some steps toward bettering my career because I am sick of being a senior Windows desktop support technician. I really want to do Linux/BSD systems administration or DevOps stuff. I hate feeling like I have to learn under the gun but, at this point, thinking about work on Monday is making me physically ill. The only relief will be knowing that there is an end to this tunnel.


A lot of the components I actually bought at computer swap meets/flea markets. Some vendors had corporate cast-off hard drives and all kinds of good deals.


Nice! I built my server with a brand new tower case and a bunch of second-hand components. It has a 2016-era Xeon E5-2690 with 128GB of ECC RAM and 12 TB of storage in a RAIDZ1 config. I built it for less than 500 bucks.


Hey that’s a pretty good setup! It’s a lot better than mine! LOL.


You’ve got Pixelfed, which is similar to Instagram.


Nextcloud works well for general files. Immich is the way to go specifically for photo storage. It’s got a whole lot of added features.


If your requirements don’t need federation, then look into Nextcloud.



You need to piece those few together into one cohesive, working instance. I can share the docker-compose.yml file that worked for me, if that will help.

version: '3'
services:
  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - ./postgres14:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_HOST_AUTH_METHOD=trust'

  redis:
    restart: always
    image: redis:7-alpine
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - ./redis:/data

  # es:
  #   restart: always
  #   image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
  #   environment:
  #     - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
  #     - "xpack.license.self_generated.type=basic"
  #     - "xpack.security.enabled=false"
  #     - "xpack.watcher.enabled=false"
  #     - "xpack.graph.enabled=false"
  #     - "xpack.ml.enabled=false"
  #     - "bootstrap.memory_lock=true"
  #     - "cluster.name=es-mastodon"
  #     - "discovery.type=single-node"
  #     - "thread_pool.write.queue_size=1000"
  #   networks:
  #      - external_network
  #      - internal_network
  #   healthcheck:
  #      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
  #   volumes:
  #      - ./elasticsearch:/usr/share/elasticsearch/data
  #   ulimits:
  #     memlock:
  #       soft: -1
  #       hard: -1
  #     nofile:
  #       soft: 65536
  #       hard: 65536
  #   ports:
  #     - '127.0.0.1:9200:9200'

  web:
    #build: .
    #image: ghcr.io/mastodon/mastodon
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    ports:
      - '127.0.0.1:3000:3000'
    depends_on:
      - db
      - redis
      # - es
    volumes:
      - ./public/system:/mastodon/public/system

  streaming:
    #build: .
    #image: ghcr.io/mastodon/mastodon
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      # prettier-ignore
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']
    ports:
      - '127.0.0.1:4000:4000'
    depends_on:
      - db
      - redis

  sidekiq:
    #build: .
    #image: ghcr.io/mastodon/mastodon
    image: tootsuite/mastodon:latest
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      - external_network
      - internal_network
    volumes:
      - ./public/system:/mastodon/public/system
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]

  ## Uncomment to enable federation with tor instances along with adding the following ENV variables
  ## http_proxy=http://privoxy:8118
  ## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
  # tor:
  #   image: sirboops/tor
  #   networks:
  #      - external_network
  #      - internal_network
  #
  # privoxy:
  #   image: sirboops/privoxy
  #   volumes:
  #     - ./priv-config:/opt/config
  #   networks:
  #     - external_network
  #     - internal_network

networks:
  external_network:
  internal_network:
    internal: true

NGINX Proxy Manager makes things even easier! Just make certain that websockets are enabled in the proxy settings for your Mastodon instance, and don’t forward via SSL, because NPM is your SSL termination point. In your Mastodon instance’s NGINX configuration, change the server to listen on port 80, comment out all of the SSL-related options, and in the @proxy section change proxy_set_header X-Forwarded-Proto $scheme; to proxy_set_header X-Forwarded-Proto https;. This tells Mastodon a small lie so it thinks the traffic is encrypted, which is necessary to prevent a redirect loop that would break things.
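To give you an idea, the relevant bits of the Mastodon nginx config end up looking roughly like this. This is just a trimmed-down sketch, not the full stock file, and mastodon.example.com plus the commented-out cert paths are placeholders for your own values:

server {
    listen 80;
    listen [::]:80;
    server_name mastodon.example.com;

    # SSL directives commented out since NPM is the TLS termination point
    # listen 443 ssl http2;
    # ssl_certificate /etc/letsencrypt/live/mastodon.example.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/mastodon.example.com/privkey.pem;

    location / {
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # the small lie: the traffic was already encrypted at NPM
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}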


It’s actually not hard to get Mastodon running behind an existing reverse proxy, and it’s not hard to run it in a Docker container either. I run mine in Docker with no issues. When version 4.1.4 was released, I just ran a docker-compose pull and, voilà, my instance was upgraded. I can share my configs with you if you want. What is your existing reverse proxy server?
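For reference, the whole upgrade was basically this, run from the directory with the compose file (the db:migrate step only matters when a release ships database migrations, so check the release notes first):

# grab the new images referenced in docker-compose.yml
docker-compose pull

# run any pending database migrations for the new version
docker-compose run --rm web bundle exec rails db:migrate

# recreate the containers on the new images
docker-compose up -d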


That’s pretty awesome that you want to go down this route and you’ll certainly benefit from the experience. Are you actually building out your lab as training for your career?


Sure! Let me know how it goes. If you need something more complex for internal DNS records than just A records, look at the unbound.conf man page for stub zones. If you need something even more flexible than stub zones, you can use Unbound as a full authoritative DNS server with auth-zones. As far as I know, auth-zones can even do AXFR-style zone transfers, which is cool!
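Just to give you a feel for it, a stub zone that hands a private zone off to an internal authoritative server looks something like this. home.lan and 192.168.1.53 are made-up values, so swap in your own:

server:
        # allow RFC1918 addresses in answers for the internal zone
        private-domain: "home.lan"
        # don't try to DNSSEC-validate the internal zone
        domain-insecure: "home.lan"

stub-zone:
        name: "home.lan"
        stub-addr: 192.168.1.53

# or make Unbound itself authoritative for the zone instead:
# auth-zone:
#         name: "home.lan"
#         zonefile: "/etc/unbound/home.lan.zone"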


You’ve got the right community, IMHO. This is something that I’ve never tackled, but I can imagine that it would work. Just make certain you have solid backups in case the worst should happen.


Here is a sample configuration that should work for you:

server:
        interface: 127.0.0.1
        interface: 192.168.1.1
        do-udp: yes
        do-tcp: yes
        do-not-query-localhost: no
        verbosity: 1
        log-queries: yes

        access-control: 0.0.0.0/0 refuse
        access-control-view: 127.0.0.0/8 example
        access-control-view: 192.168.1.0/24 example

        hide-identity: yes
        hide-version: yes
        tcp-upstream: yes

remote-control:
        control-enable: yes
        control-interface: /var/run/unbound.sock

view:
        name: "example"
        local-zone: "example.com." inform
        local-data: "example.com. IN A 192.168.1.2"
        local-data: "www IN CNAME example.com."
        local-data: "another.example.com. IN A 192.168.1.3"

forward-zone:
        name: "."
        forward-addr: 8.8.8.8
        forward-addr: 8.8.4.4

What makes the split-brain DNS work: if the request for resolution comes from localhost or from inside your network, Unbound first checks the view section to see if there is any pertinent local data. So if you do a query from your home network for, say, example.com, it will return your internal IP address, which in this case is 192.168.1.2.
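A quick way to sanity-check it, assuming 192.168.1.1 is the Unbound box like in the config above:

# from inside the LAN this should come back with the internal address (192.168.1.2)
dig example.com @192.168.1.1 +short

# the same name asked of a public resolver should return the public IP instead
dig example.com @8.8.8.8 +short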


Instead of pfSense, I would really recommend OPNsense, which was originally a fork but now stands on its own. I like the fact that OPNsense tracks closer to the current FreeBSD release than pfSense does.


I did this myself for all of 150 dollars. I bought an OptiPlex 7050 off Amazon and added a dual-port Intel network card. From there, I installed OPNsense. I have DMZ, WAN, and LAN interfaces.


I just decided to go ahead and implement split-brain DNS this evening and it works perfectly. What are you using for your internal DNS server? If it’s Unbound, which is what I’m using, I can share my config with you. After implementing this, my services sped up by an order of magnitude when accessed from my internal network. I shoulda done this earlier. 😆


Right now my internal DNS uses a TLD of .lan, but that’s pretty much just for my personal convenience. I access my websites by their FQDN internally with no issue. So I’m not sure what you’re trying to achieve. Mind elaborating?


Tailscale and ZeroTier offer the lowest barriers to entry. They can both be set up quickly and easily compared to a traditional VPN.
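To give you an idea of how low the barrier is, getting a Linux box onto a Tailscale network is basically two commands (the install script is Tailscale’s official one):

# install the tailscale client
curl -fsSL https://tailscale.com/install.sh | sh

# bring the node up and authenticate it in the browser
sudo tailscale up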


In effect, Cloudflare would give you protection against DDoS attacks before requests even hit your servers. That said, you can implement mitigations on the reverse proxy itself; one example would be fail2ban.

I’m sure there are additional steps you can take. I’m not a fan of Cloudflare because their free offering has some caveats, and violating them could be problematic. I have a cloud VPS with a WireGuard tunnel back to my server, so I don’t have to do anything ugly like port forwarding. The cloud VPS runs NGINX as a reverse proxy. It’s a relatively simple and effective setup.
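If it helps to picture it, the WireGuard piece is just a normal point-to-point tunnel, something along these lines. The 10.0.0.x addresses, the keys, and vps.example.com are placeholders for my actual values:

# /etc/wireguard/wg0.conf on the cloud VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the home server
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32

# /etc/wireguard/wg0.conf on the home server
[Interface]
Address = 10.0.0.2/24
PrivateKey = <home-server-private-key>

[Peer]
# the cloud VPS
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25

NGINX on the VPS then just proxies to the home server’s tunnel address (10.0.0.2), so nothing on the home network is exposed directly.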


When I was evaluating a NAS, I ended up going DIY because it was easier, less expensive, and had better specs than what Synology was offering. You can run TrueNAS on it.


IMHO, inkjet printers as a whole are a scam. That is, unless you buy a really high-end one for professional photo printing.


Email self-hosting
As the title reads, I really want to begin hosting my own email server again. I'm sick of the poor quality of the service providers out there. Dammit, all I want/need is a reliable IMAP/SMTP provider. I spent 3 hours getting off of Hostinger and onto Zoho. I just hope Zoho won't suck. It's great for now, but we'll see. Is the prevailing advice still not to bother with self-hosting email?

My vote will always side with the open source community, so please take that with a grain of salt. I much prefer Jellyfin because of its status as an open source project.


Check out BookStack. It’s a brilliant piece of software that gets out of your way and has a modern look and feel that your family members will like and appreciate. The challenge will be getting them to use it instead of going straight to you; they may find it inconvenient to have to go to a self-help resource. That’s a them problem and not a you problem, though.


Google reaps far more than a healthy profit compared to what it spends to help open source projects. Alphabet isn’t doing this out of altruism, so it’s definitely profit-motivated. Google’s cash outlay to benefit FOSS is peanuts compared to the lucrative value it gets in return.



It’s definitely against the ToS for my ISP. I doubt they’ll ever really find out, because the bandwidth I’m using isn’t as crazy as that of other customers who regularly stream from Netflix, Hulu, etc. I can kind of hide in plain sight. Also, my internet connection is fiber to the home. I have the lowest tier of service at 300 Mbps up and down. So I realize I’m coming from a position of relative privilege. I wish most people had this capability.


Never underestimate greed; that’s something I’ve learned in my 46 years alive.


There would be some initial shock but we would quickly get over it. Personally, I would be delighted if Google were to do a complete epic fail and close down.


At least with spinners, you often get enough advance warning of impending doom via SMART. Critical stuff I still keep on spinning disks, backed up to tape and a privacy-respecting cloud service.


The cheaper SSDs can be very failure-prone, though. I actually use second-hand spinning disks in my server, and I have a whole bunch of spares waiting in the wings. Everything is still backed up, of course.


I am in the power-hungry-box category. The cost of NUCs has gone up in my area, and I’ve found that a big box stuffed with second-hand hardware offers more value for my use case.


Yeah, fuck OneDrive! Microsoft can eat my entire ass.


It does not mean anything for me because I am not a Windows user. For Windows users it means subscription models and renting software. It could also mean eventually booting your computer into a desktop that is in the cloud. I hope to god that does not happen because it may make finding hardware that will run Linux and BSD that much harder.


Thus far I’ve had very little in the way of difficulty with my self-hosting. Just remember that backups become even more important.


I am actually thinking about going back to Cloudflare Tunnels. The only reason I’m hesitant is that I use a fair amount of bandwidth, since I host a Mastodon server as well as a Lemmy one. I don’t want to be stuck with a huge bandwidth bill.


My server at home costs me a lot less than 30 dollars a month to operate. Since this is a hobby for me, I don’t assign a monetary value to the time I spend working on it. I built the server with second-hand components that I got at a swap meet for less than 700 dollars. Now, knock on wood, things have been running smoothly, and I do a lot with this server. It doesn’t just power Lemmy and Mastodon; it also handles Jellyfin and NAS duties. It’s probably overspec’d for my needs, but that means I can use it for a long while.