Basically, yes. You can configure most cron programs to mail task output to you (it’s usually done by setting the MAILTO
variable in the crontab, provided sendmail is available on your system).
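For instance, a minimal sketch (the address is a placeholder for your own):

# at the top of the crontab: where cron mails each task’s output
MAILTO=you@example.net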
I use that to do things like:
0 9 11 10 * echo 'lunch with John Doe at 12:20'
It sends me a mail, and I can see the upcoming events with crontab -l. If it’s not a recurring event, I then delete the rule.
My favorite cost cutting tip is to avoid big webapps running on Docker, and instead make do with small UNIX utilities (cron instead of a calendar, text files instead of a note taking app, rsync instead of a file hosting Dropbox-like app, a simple static webserver for file sharing, etc). This allows me to run my server on a simple Raspberry Pi, with less than 500 MB of RAM used on average and minimal energy consumption, which keeps the total cost of the setup very low.
With that, I run all services I need on a single machine, and I have a backup plan for recovery of both hardware and software.
Getting used to a UNIX shell and to the UNIX philosophy can take some time, but it’s very rewarding in making everything simpler (and thus more efficient).
I’m using a Pi 4 8GB as my server, with a Pi 4 2GB as backup in case the first one dies. It’s a very classic server, running postfix/courier-imap for mails, lighttpd for web, bind9 for DNS, ergo for IRC, sqlite3 for databases. I also use fail2ban as an IDS and cron to run tons of various tasks. All of that is hosted on a Gentoo Linux OS.
The one thing I don’t want to use is Docker. I love Docker for development or for deploying the main app at work, but it makes managing updates a nightmare when handling multiple services on my server (most of your containers probably contain vulnerable software due to lack of system updates), and it eats resources needlessly. Then again, avoiding it is only possible because I avoid the big webapps that usually need it.
At the very least, it means the CEO doesn’t understand the domain. It may be because he sees this part of the business as secondary and less important, or because it was developed so fast he didn’t have time to grasp the concepts; he probably was not a driving force in that effort. I certainly hope the tech side is more aware. Without more proof of CEO involvement, though, I certainly would not bet on that horse surviving far into the future.
“Git hosting” would be more appropriate. Unless by “frontend” you mean specifically the web frontend, but that would be weird, because forges also provide the web backend part.
Sourceforge was the biggest FOSS host in the 2000s, before GitHub (mainly because there was not much centralization to begin with). That train is long gone. :) Sure, the name and website Sourceforge still exist. Myspace, Digg and Yahoo do too. They are basically web ghosts, only an echo of what they once were.
Actually, I do use git bare repos for CD too. :) The ROOT/hooks/post-update executable can be anything, which lets you go wild: on my laptop, a push to a bare repo triggers a deploy to all the machines needing it (on local or remote networks), by pushing through ssh to other bare repos hosted there, which build and install locally, given they all have their own post-update scripts; all of that thanks to a git push and scripts at the proper paths. I don’t think any forge could do it more conveniently.
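As a sketch of the idea (the remote names are hypothetical, added beforehand with git remote add):

#!/bin/sh
# ROOT/hooks/post-update - propagate every push to other bare repos over ssh;
# each of them runs its own post-update hook to build and install locally
git push --mirror backup-pi
git push --mirror deploy-host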
For me the main interest of forges is to publish my code and get it discovered (before GitHub, getting people to find your repos hosted on your blog’s server was a nightmare). Even for the collaboration, I could do with emails. That being said, most people aren’t on top of their inbox, in which mails from family are mixed with work mails and commercial spam in one giant pile of unread items, so it’s a good thing for them we have those issue trackers.
That’s the name we use to designate software like GitHub, GitLab and similar, which provide repository hosting and tooling like issue trackers. It’s supposedly named after SourceForge, the oldest of such tools, although I didn’t hear the term “forge” before the last 5 years or so, long after SourceForge’s demise, so I imagine there is a bit of nostalgia in this name (not sure who is nostalgic for SourceForge, though 😂). The Wikipedia page: https://en.wikipedia.org/wiki/Forge_(software)
There hasn’t been a new Git repo launch in almost a decade
Am I the only person annoyed that they seem to mistake repositories for forges? It’s already annoying when casual users say “git” for “GitHub”, but those guys actually want to build a forge, explaining they’re going to do better than anyone else. Maybe start by using the terms properly?
Yep, as often, the extension of the standard comes from non-standard features developed here and there (as you can see in the participating organizations block, most of the big names are working on this). The difference with IRCv3 is that you can expect to see all those features everywhere, instead of having this software implementing this feature, that other one having that other feature, and you having to choose which one matters most to you. Basically, it’s a rebase of the standard. :)
They do maintain the simplicity of the line oriented protocol, so I’m fine with that. :)
That’s the strongest point of IRC, IMO, and why it’s kept so simple: every instruction is a plain text line, period. It makes it incredibly simple to build on top of it. You don’t need to introduce a dependency on a project that will probably be abandoned in a few years, at which point you’ll have to rewrite your codebase to use another dependency, for a few more years. You just open a TCP connection, read lines from the socket and write lines to it, each line being its own instruction structured in well known fields, and that’s it. It’s so simple!
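To illustrate how little you need, a minimal sketch in bash (the server name is a placeholder, and it relies on bash’s /dev/tcp feature):

# open a TCP connection to the IRC server on file descriptor 3
exec 3<>/dev/tcp/irc.example.net/6667
printf 'NICK demo\r\nUSER demo 0 * :demo\r\n' >&3
# every line read is one instruction; answer PINGs to stay connected
while IFS= read -r line <&3; do
    line=${line%$'\r'}
    echo "$line"
    case $line in
        PING*) printf 'PONG%s\r\n' "${line#PING}" >&3 ;;
    esac
done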
As long as IRCv3 sticks to that, they have my blessing. :)
The good news is that with IRCv3 being worked on, it may soon(ish) get dusted off. :) It adds features like reply threads, history from when you weren’t connected, message editing and deletion, and more!
Take back control of your data, that’s the whole point… :) Where are you regularly saving data? Those are the prime candidates. Look at self-hosted alternatives for those services. I know big webapps hosted in Docker containers managed by Kubernetes are all the rage around here, but you can often find Unix style equivalents for such services, the main advantage of putting them on a server being that you can access them from multiple devices. But you do you, if you prefer hosting big webapps, that’s fine too. :)
I organize my crontab in groups of tasks (the programs, the holidays, the housecleaning, etc). And of those groups, the events (the non recurring tasks) come last. So I just list the crontab (crontab -l) and the list of things to come prints to the screen, that block being at the end of the file. It’s hard to do better than a text file to list things. :)
I don’t know if there is a program that lists like “what is coming this month” if you really want to filter out the rest, but it should be easy enough to write, given the format of cron rules:
crontab -l | grep '*' | awk '{print $4 "," $3 "," $2 "," $1 " " $0 }' | sort -n | grep -E "^$(date '+%-m')"
- crontab -l : lists the crontab
- grep '*' : keeps only rules (removing blank lines and comments)
- awk […] : prints the whole line ($0), prepended by the 4th field (the month), the 3rd (the day), the 2nd (the hour) and the 1st (the minutes)
- sort -n : sorts everything numerically, so that all tasks end up in execution date order (I made awk separate the fields with a , character so it keeps sorting numerically past the first number)
- date '+%-m' : prints the current month, not zero padded (thanks to the ‘-’)
- grep -E "^$(date '+%-m')" : keeps only lines which start with the current month number

You put that in a script (like ~/bin/upcoming_events) and you’re done. And then, you can call it from cron every Monday to get what’s coming next mailed to you. :)
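A minimal sketch of that script (the path comes from above; the time of day in the cron rule is arbitrary):

#!/bin/sh
# ~/bin/upcoming_events - print this month's cron entries in date order
crontab -l | grep '*' \
    | awk '{print $4 "," $3 "," $2 "," $1 " " $0 }' \
    | sort -n \
    | grep -E "^$(date '+%-m')"

And then in the crontab, to get it mailed every Monday morning:

0 9 * * 1 ~/bin/upcoming_events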
This could be refined further to display dates in a friendlier format. But as usual, Unix is your friend. :)
I’m going to come across as the crazy person here, but so be it: cron.
Cron can be easily configured to send mails (the MAILTO variable when using standard cron), provided sendmail is available on the system. If a command called by cron outputs anything, cron will send a mail with the content, which is useful by itself to warn you when something goes wrong with a cron task, but also allows you to do things like this:
0 9 28 9 * echo birthday John
It’s really easy to get used to the syntax, it just goes from more precise to less precise: “minute, hour, day, month, *”. The last one can usually be ignored (it’s the day of the week, I must have used it twice in my life). So here, “0 9 28 9”, you read it backward and it gives: September, 28th, 9:00. Piece of cake once you get a bit of practice. And cron is everywhere, so no need to install anything. Although, since I run it on my laptop, I use fcron, which has a nice feature to run ASAP the tasks which should have run while the computer was shut down. This way, I never miss an alert.
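Spelled out field by field on the example above:

# fields:  minute  hour  day-of-month  month  day-of-week
# rule:    0       9     28            9      *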
I use it for recurring notes (like birthdays, paperwork, house cleaning tasks, holidays, etc), but also as reminders of specific dates when I expect a delivery, have a meeting, etc. For the most important messages, I make it use a script that triggers a desktop notification (with notify-send) and has a voice read the message (with mimic). And of course, I also use it to actually launch programs. :)
I do have to say, for the purpose of tinkering, I love these bigger projects because you learn so much on the way. Now having read your answer I am even more excited to try it out :D
That’s awesome to hear! Welcome, and have fun! :)
I haven’t heard of most of your abbreviations/terms till now
Oh, my apologies. Here is a definition list:
I guess slapping it on my local raspberry pi wouldn’t be enough no?
Oh no, that would be far from enough. :) Managing a mailserver is a sysadmin task by itself. While you don’t need to do much once it works (which often is a perk of sysadmin work, compensating for the fact that when it does not work, they may have to wake up in the middle of the night to fix it), it’s notoriously difficult to get right: you first have the configuration of the mailserver to get right, so that you can send emails, but nobody else can use it and you don’t become a spam relay without knowing it. Then you have a lot of configuration to do to be able to retrieve your emails from your server, which uses other protocols that you must learn about. Then you have “optional” things to set up (SPF, DKIM and DMARC), without which you won’t be able to send mails to Gmail or Outlook. And once you’ve got all of that right, you will have enough experience to be hired as a sysadmin. :)
I can’t provide a good resource for learning it, as I learned it 15 years ago when it was much simpler (before SPF and DKIM), and picked up every addition as it appeared, but any course on how to manage a mail system will do. There is no difference between doing it for your self-hosted server and for a company (except maybe that for a company, they’ll make you handle users in a database, which you can forego for your own needs). I would recommend learning how to use postfix first, then any IMAP server (courier-imap is a top runner), and when you’re comfortable with that, you can learn about SPF, then DKIM, then DMARC. But be aware before going through it that this is basically learning a new skill (sysadmin). You can find Docker images that set up everything automatically for you, but I would recommend against that, because at some point, things will break and you will have no idea how to fix them. And if you try to fix them while not knowing well what you’re doing, that’s a good way to end up being a spam relay. Plus, those Docker images are difficult to customize, which quite defeats the point of managing your own mail system to begin with.
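For a taste of what the “optional” parts look like, a minimal SPF record is just a DNS TXT entry (the domain is a placeholder) saying who may send mail for the domain:

example.net.  IN TXT  "v=spf1 mx -all"

# meaning: only the hosts listed as MX for example.net may send its mail;
# everyone else gets a hard fail (-all)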
Well I didn’t want google to read my mails
Sadly, that only works if none of the recipients of the mail is on Gmail (or if everyone uses PGP, which I would tend to think is even rarer).
I host my own mailserver as well, and I would add as benefits: address extensions (username+something@host), which also make routing/filtering mails way easier, as you just have to match the recipient address.

Oh, I see. Totally makes sense. :)
I guess it depends on the country, but here in France, yes, most landline ISPs provide static IPs (maybe all? there are a couple I haven’t tried; mobile IPs are always dynamic, though). It was not always the case, but I haven’t had a dynamic IP since the 2000s. I feel you, pointing a domain to a dynamic IP is a PITA to deal with.
Ahah, yeah, I protected myself against accidentally banning my own IPs. First, my server is a Pi at home, so I can just plug a keyboard and a screen into it in case of problem. But more importantly, as I do that blacklisting through fail2ban, I just whitelisted my IPs and those of my relatives (it’s the ignoreip variable in /etc/fail2ban/jail.conf), so we never get banned even if we trigger fail2ban rules (hopefully, grandma won’t try to bruteforce my ssh!). It allowed me to do another cool thing: I made a script, run through cron, that parses logs for 404s and checks if they were generated by one of the IPs in that list, mailing me if that’s the case. That way, I’m made aware of legit 404s that I should fix in my applications.
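A sketch of what that looks like (the IPs are placeholders):

# /etc/fail2ban/jail.conf (or better, a jail.local override)
[DEFAULT]
ignoreip = 127.0.0.1/8 203.0.113.12 198.51.100.0/24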
Oh, OK, you whitelist IPs in your firewall. That certainly works, if a bit brutal. :) (then again, I blacklist everyone who triggers a 404 on my webserver, so maybe I’m not the one to talk about brutality :P ) You don’t even need a VPN, then, unless you travel frequently (or your ISP provides dynamic IPs, I guess).
I’m not sure about the feasibility of this (my first thought would be that ssh on the host can be accessed directly by IP, unless maybe the VPN software creates its own network interface and sshd binds to it?), but this does not remove the need for frequent updates anyway, as openssh is not the only software that could have bugs: every piece of software that opens a port should be protected as well, and you can’t hide your webserver on port 80 behind a VPN if you want it to be public. And it’s a far more complicated setup than just doing updates weekly anyway. :)
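For reference, binding sshd to the VPN interface only would look something like this (the address is a placeholder for the VPN-side IP):

# /etc/ssh/sshd_config
# only listen on the VPN interface's address
ListenAddress 10.8.0.1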
If you do not neglect updates, then by all means, changing ports does not hurt. :) Sorry if I have a strong reaction to that, but I’ve seen way too many people in the past couple of decades counting on such anecdotal measures and not doing the obvious. I’ve seen companies doing that. I’ve seen one changing ports, forcing us to use the company certificate to log in, and then not updating their servers for 6 months. I’ve seen sysadmins who considered that rotating servers every year made it useless to update them, but employees should all use Jumpcloud “for security reasons”! Beware, though, of mentioning port changing without saying it’s anecdotal and that the most important thing is updates, because it will encourage such behaviors. I think the reason is that changing ports sounds cool and smart, while updates just sound boring.
That being said, port scanning is not just about targeted pentesting. You can’t just run nmap on a host anymore, because IDS (intrusion detection systems) will detect it, but nowadays automated pentesting tools do distributed port scanning to bypass them: instead of flooding one host to test all its ports, they test a range of hosts for the same port, then start over with a new port. It’s halfway between classic port scanning and the “let’s just test the whole IP range for a single vulnerability” approach that we more commonly see nowadays. But they are way harder to detect, as they scan smaller sets of hosts, and there can be hours before the same host is tested twice.
The best you can do to know if it was an attack is to inspect the logs when you have time. There are a lot of things that can cause a process to go wild without it being an attack. Sometimes, even filling the RAM can cause the CPU to appear overloaded (and will freeze the system anyway). One simple way to figure out if it’s an attack: reboot. If it’s a bug, everything will get back to normal. If it’s a DDoS, the problem will reappear up to a few minutes after reboot. If it’s a simple DoS (someone exploiting a bug in a software to overload it), it will reappear or not depending on whether the exploit was automated and recurring, or just a one-shot.
The fact that both your machines fell at the same time would tend to suggest an attack. On the other hand, it may just be a surge of activity on the network, with VPSes having far too few resources to handle it. Or it may even be a noisy neighbor problem (the other people sharing the real hardware on which your VPSes run overloading it).
However Port 22 should never be open to the outside world.
Wat. How do you connect with ssh, then? You can bind openssh to another port, but the only thing it changes is that you have less noise in your logs. The real most important security measure is to make sure your software is always up to date, as old vulnerable software is the first cause of penetration (and yes, it’s better to deactivate password login and only use ssh keys).
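Both tweaks are one-liners in sshd_config (the port number is arbitrary):

# /etc/ssh/sshd_config
# moving off 22 only reduces log noise; it is not a security measure
Port 2222
# disable password login, keys only
PasswordAuthentication no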
I’ve been running my own email server for years, and while it’s indeed difficult at first, it is possible, and you don’t have much to do to maintain it once it works. All the horror stories you hear come from the fact that it’s difficult to get right, and even when you get it right, you will have deliverability problems the first year, until your domain name gets established (provided you don’t use it for spam, obviously - and yes, marketing is spam).
What you need: a domain name with a classic extension, like .com, .org, .net, etc. Don’t use one of those fancy new extensions (.shop, .biz, etc), they are associated with spammers.

Start using that for a year without making it your main address. Best is to use it for things not too mainstream, like FOSS mailing lists or discussing with people having their own mailserver; those will not drop your mails randomly. When a year of frequent usage has gone by, you can migrate to that email address or domain.
Regarding the architecture of your network: do you read your emails on several machines (like, on mobile and laptop)? If not, you can dramatically simplify your design by using POP3 instead of IMAP, connecting your client to the AWS server, downloading all your emails to your computer and removing them from the server at the same time. There, you have all your mails locally and you don’t need dovecot. :)
I don’t use a pihole, but I have a Pi with my favorite distro acting as a server, and I use dnsmasq for what you mention. It lets you set that machine as the nameserver for all your machines (just use its IP in your router’s DNS conf, DHCP will automatically point connected machines to it), and then you can just edit /etc/hosts to add new names, and they will be picked up by the nameserver.
Note that dnsmasq itself does not resolve external names (eg when you want to connect to google.com), so it needs to be configured to relay those requests to another nameserver. The easy way is to point it to your ISP’s nameservers or to public nameservers like those from Cloudflare and Google (though I would really recommend against letting them know all the domains you’re interested in), or you can go the slightly more difficult way, as I did, and install another nameserver (like bind9) that runs locally. Thankfully, dnsmasq allows configuring its relay nameserver to be on something other than port 53, which is quite rare in the DNS world. Of course, if you’re familiar with bind9, you could just declare new zones in it directly. I just find it (slightly 😂) more pleasant to work with /etc/hosts.
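A sketch of the relevant dnsmasq configuration, assuming a local bind9 listening on port 5353 (the port is arbitrary):

# /etc/dnsmasq.conf
# don't take upstream servers from /etc/resolv.conf
no-resolv
# relay external names to the local bind9 on a non-standard port
server=127.0.0.1#5353
# names added to /etc/hosts are served automatically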
It’s coming to Gitlab too! (although, this will take quite some time)
Obligatory check: are you sure you really need a forge? (that’s the name we use to designate tools like Github/Gitlab/Gitea/etc). You can do a lot with git alone: you can host repositories on your server, clone them through ssh (or even http with git http-backend, although it requires a bit of setup), push, pull, create branches, create notes, etc. And the best of it: you can even have CI/CD scripts as hooks that will run your tests and deploy your app (post-receive), or reject the changes if something is not right (pre-receive, which runs before the refs are updated).
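For instance, a post-receive hook sketch (the branch name, paths and build commands are hypothetical):

#!/bin/sh
# ROOT/hooks/post-receive - check out the pushed code and run the tests
WORKTREE=/tmp/myapp-ci
mkdir -p "$WORKTREE"
git --work-tree="$WORKTREE" checkout -f main
cd "$WORKTREE" && make test && make deploy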
The only thing you have to do is create the repos on your server with the --bare flag, as in git init --bare. This will create a repo that is basically only what you usually have in the .git directory, and will avoid the errors you get when pushing to the currently checked out branch of a non-bare repo. It will also keep the repo clean, without artifacts (provided you run your build tasks elsewhere, obviously), so it will make all your sources really easy to back up.
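Concretely, creating and cloning such a repo looks like this (host and paths are placeholders):

# on the server
git init --bare /home/git/myproject.git
# on your machine
git clone ssh://myserver/home/git/myproject.git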
And to discuss issues and changes, there is always email. :) There is also this, a code review tool that just popped up on HN.
And it works with Github! :) Just add a git remote pointing to Github, and you can push to it or fetch from it. You can even set up hooks to sync with it. I publish my FOSS projects both on Github and Gitlab, and the only thing I do to propagate changes is to push to my local bare repos that I use for easy backups; they each have a post-update hook which propagates the change everywhere it needs to be (to Github, Gitlab, and various machines on my local network, which then have their own post-update hooks to deploy the app/lib). The final touch to that: having this ~/git/ directory that contains all my bare repos (which are only a few hundred MB, so fit perfectly in my backups) allowed me to create a git_grep_all script to do code search in all my repos at once (who needs elasticsearch anyway :D ):
#!/usr/bin/env bash
# grep recursively through bare repos (a bare repo is spotted by its HEAD file)
for dir in $(find . -name HEAD -exec dirname '{}' \;); do
    pushd "$dir" > /dev/null
    # print the repo path and the matches only when the pattern is found
    if git grep "$*" HEAD > /dev/null 2>&1; then
        pwd
        git grep "$*" HEAD
        echo
    fi
    popd > /dev/null
done
(note that it uses pushd and popd, which are bash builtins; other shells should use other ways to change directories)
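Usage is then as simple as this (the pattern here is just an example):

cd ~/git && git_grep_all 'parse_config'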
The reason why you may still want a forge is if you have non-tech people who should be able to work on issues/epics/documentation/etc.
selfhosted ebook library
Is that what we call hard drives, now? :P
I have two Android tablets, one 7" to read small books, and one 13" to read US Letter format books. I took the cheapest ones I found, disabled Google Play and installed F-Droid to install FOSS readers, and it just works perfectly. You really don’t need anything specific to just read text, you just want to make sure that you can display an entire page on your screen at a size you’re comfortable reading, otherwise PDFs quickly become insufferable.
Thanks for mentioning it, I didn’t know about it. Protecting against CVEs sounds awesome indeed. I took a more brutal approach to fixing the constant pentesting: I ban everyone who triggers a 404. :D Of course, this only works because it’s a private server, only meant to be accessed by me and people with deep links. I’ve whitelisted IPs commonly used by my relatives, and I’ve made a log parser that warns me when those IPs trigger a 404, which lets me know if there are legit ones, and is also a great way to find problems in my applications. But of course, this wouldn’t fly on a public server. :)
Note for others reading this, the correct link is CrowdSec
Thankfully, fail2ban exists. :) Note that it’s not just smtp anyway. Anything on port 22 (ssh) or 80/443 (http/https) gets constantly tested as well. I’ve actually set up fail2ban rules to ban anyone who is querying / on my webserver, and it catches a lot of those pests.
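A sketch of what such a rule could look like (the filter name and log path are hypothetical, and the regex assumes the common access log format where the client IP starts each line):

# /etc/fail2ban/filter.d/root-probe.conf
[Definition]
failregex = ^<HOST> .*"GET / HTTP

# and the matching jail in jail.conf
[root-probe]
enabled = true
port = http,https
filter = root-probe
logpath = /var/log/lighttpd/access.log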
This. Also, anybody who can identify you as the owner of the host (be it through Whois or through hosting service records) can associate your name with everything posted on that instance, thus easily profiling you, your tastes and your opinions (it’s insane the amount of personal information we can leak on social media, even when we think we’re not). Clearly not something to do in countries where you can be harassed or worse for your opinions, and probably best avoided everywhere if privacy is a concern for you. There is some virtue in being immersed in the masses (that’s actually a common anonymisation strategy: from merging streams comes plausible deniability).
Ahah, so that’s what initrd are called on Pi. :P Good catch!
Funny enough, I don’t have such a file on mine, the only *.img I have is the kernel, kernel8.img. I guess it’s OS specific.
The .bak things sound like an interrupted update, or something. As if the updater had moved the current initrd to a backup file, then started building the new initrd and crashed or was rebooted before completion. That’s what I really dislike about automatic updates, I prefer to be sure I know when it’s running, and see the output. :)
Congrats on sorting it out!
That’s the same thing. :) If you reduce computing load, you reduce the need for costly hardware and you reduce the need for energy, thus you reduce the amount of money needed to build and run your setup. There’s a saying in (software) engineering: “reducing energy consumption and increasing performance requires the same optimizations”. Make your code faster (by itself, not by buffing up the hardware) and it consumes less energy. Make your application simpler, and it will run faster and consume less energy. It’s not an absolute truth (it sometimes happens that you make your code faster and it consumes more energy), but it’s true most of the time.