It is easier to think of the SSL termination in legs.
If, however, you want to directly expose your service without the orange cloud (running a game server on the same subdomain, for example), then you'd disable the orange cloud and use Let's Encrypt or deploy your own certificate on your reverse proxy.
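If you go the Let's Encrypt route, one easy way on a typical nginx reverse proxy is certbot; the domain here is just a placeholder, and it assumes port 80 is reachable for the HTTP-01 challenge:

```
sudo certbot --nginx -d game.example.com
# certbot rewrites the matching nginx server block to use the issued certificate and sets up auto-renewal
```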
In the old days, it used to be a problem because everyone just connected their Windows 98 desktops, with all their services directly exposed to the internet, because they were on dial-up without the concept of a gateway preventing the internet from reaching internal resources. Nowadays, you're most likely behind an ISP router that doesn't forward ports by default, and you're only exposing the things you actually want to expose.
For things you actually want to expose, having a service on the default port is fine, and it reduces the chance that other systems interacting with it fail because they expect it on the default port. Moving them to a different port is just security through obscurity, and honestly doesn't add much value. You can port scan the entire public IPv4 space fairly quickly and fairly cheaply. In fact, it's most likely already been mapped:
https://www.shodan.io/host/<your-ip-here>
Keeping the service up to date and applying best practices around it is far more important and beneficial. For SSH, make sure you're using key-based authentication and have password authentication disabled; add fail2ban to automatically ban those trying to brute force. For Minecraft, keep online mode on and whitelist only, unless you're running a public server for everyone.
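For the SSH bits, roughly (package names assume a Debian/Ubuntu-ish distro; adjust to taste):

```
# /etc/ssh/sshd_config (relevant lines):
#   PasswordAuthentication no
#   PubkeyAuthentication yes
#   PermitRootLogin prohibit-password

sudo systemctl restart sshd            # apply the change; confirm key login works first!
sudo apt install fail2ban              # package name assumes Debian/Ubuntu
sudo systemctl enable --now fail2ban   # default install ships with an sshd jail enabled
```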
I’m not saying you’re wrong — I’ve even upvoted your earlier comments because I’m generally in agreement; you’re an instance admin judging by your handle, go and check the vote history yourself lol.
I'm saying people shouldn't force their janky, unproven solo solution onto someone else who doesn't share their level of distrust and would just rather trust the multibillion-dollar multinational corporation, when all they want is something that's been working fine for them, for all they care.
There's always the "add more of everything so something can fail without impacting stability" aspect, and that's great for a corporation needing the redundancy; but it's probably prudent not to forget there's also the "I'm interested in learning" aspect, where people run a home server to play with the software side of things.
You’re spot on in that we’d need to know what it is that OP would like to do with the system, but I’m getting the feeling that stability isn’t that high of a concern just yet.
Until the basement floods and the server goes offline for a few days; or a botched upgrade fails quietly; or an overzealous SpamAssassin configuration kicks in; etc., etc.
It sounded like they were trying to archive things from Gmail to their own server, so just cut the middleman jank out, and let the wife continue to use her Gmail as intended.
The answer depends on how you’re serving your content. Based on what you’ve described about your setup, your content is likely served over HTTP through the secured tunnel. The tunnel acts like an encrypted VPN, which allows unencrypted content to be sent securely over the wire. This means although your web server is serving unencrypted content, it gets encrypted before it goes to Cloudflare, so no one along the path could snoop on it.
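Roughly what that looks like with a Cloudflare Tunnel, assuming a fairly standard cloudflared config (hostname, port, and tunnel ID are placeholders):

```
# /etc/cloudflared/config.yml
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # plain HTTP locally; encrypted inside the tunnel out to Cloudflare
  - service: http_status:404
```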
I’m aware this is the selfhost community, but for a company of 20 engineers, it is probably best to use something commercial in the cloud.
Biggest pain point was for our ops guy, who constantly had to stay behind to perform upgrades and maintenance, as they couldn't do it during business hours when the engineers were working. With a team of at least 20, scheduling downtime gets increasingly difficult.
It also adds an entire system to be audited by the auditors.
The self-host vs. buy-commercial debate kind of bounces back and forth. For smaller teams of fewer than 5 to 10 engineers, it might be a fun endeavour; but from that point on, until you get to mega-corp scale with a dedicated ops department maintaining your entire infrastructure, it is probably more effective to just pay for a solution from a major vendor in the cloud.
ddclient paired with a supported provider.
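A minimal sketch of what the config could look like, assuming a Cloudflare-managed zone; swap the protocol and credentials for whatever your provider expects per the ddclient docs:

```
# /etc/ddclient.conf
daemon=300                   # check every 5 minutes
ssl=yes
use=web                      # discover the current public IP via a web service
protocol=cloudflare
zone=example.com
login=you@example.com        # account email, or 'token', depending on ddclient version
password=<api-token-or-key>
home.example.com             # the DNS record to keep updated
```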
I think it would be a good idea to take a step back and ask what it is that you're trying to achieve.
Userbase, the service linked, is a backend-as-a-service platform that offers authentication and a basic database you can access via their API. You'd then code your own front-end web app to interact with their service and store data there. You pay only for the storage used, per their storage tiers, which are frankly fairly priced. If that's something you need, it's a good option, but you'd be coding the front end yourself.
If you're only looking for authentication with OAuth, and then coding your own API backend, then something like Authentik would be a nice self-hosted authentication provider. Others that commonly get mentioned, but that I've got limited/no experience with, would be Keycloak or FusionAuth. Managed services here would be your Auth0, Okta, etc.
If you've got a specific use case in mind, then it may be a good idea to say what service you're thinking about, and the community may be able to suggest prebuilt solutions that fit better and require less lift.
Strictly speaking, they're leveraging free users to increase the number of domains they have under their DNS service. This gives them a larger end-user reach, as it in turn makes ISPs hit their DNS servers more frequently. The increased usage better positions them to lead peering agreement discussions with ISPs. More peering agreements lead to overall cheaper bandwidth for their CDN and faster responses, which they can use as a selling point for their enterprise clients. The benefits are pretty universal, so it is actually a good thing for everyone all around… that is, unless you're trying to become a competitor and get your own peering agreements set up, as it'd be quite a bit harder for you to acquire customers at the same scale/pace.
I tend to recommend sticking with more reputable providers, even if it means a couple of dollars extra on a recurring basis. Way too many kiddie hosts pop up trying to make a quick buck during spring break/summer, and then fail to provide adequate service when it actually comes time to deliver.
It may also be a good idea to check LET/WHT before committing to anything longer than a month-to-month term with a provider.
OP currently has in their possession 2 drives.
OP has confirmed they’re 12TB each, and in total there is 19TB of data across the two drives.
Assuming there is only one partition, each one might look something like this:
```
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 12345678-9abc-def0-1234-56789abcdef0

Device     Start          End      Sectors  Size Type
/dev/sda1   2048  23437499966  23437497919 12.0T Linux filesystem
```
OP wants to buy a new drive (also 12TB) and make a RAID5 array without losing existing data. Kind of madness, but it is achievable. OP buys a new drive and sets it up as such:
```
Device     Start          End      Sectors  Size Type
/dev/sdc1   2048   3906252047   3906250000  2.0T Linux RAID

Unallocated space:
3906252048  23437500000  19531247953  10.0T
```
Then, OP must shrink the existing partition to something smaller, say 10TB for example, and make use of the rest of the space as part of their RAID5:
```
Device            Start          End      Sectors  Size Type
/dev/sda1          2048  19531250000  19531247953 10.0T Linux filesystem
/dev/sda2   19531250001  23437499999   3906250000  2.0T Linux RAID
```
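The shrink itself would be roughly along these lines; the numbers just follow the illustrative table above, so double check against the real layout and have a verified backup before touching anything:

```
sudo umount /dev/sda1
sudo e2fsck -f /dev/sda1                  # filesystem must be checked before shrinking
sudo resize2fs /dev/sda1 10T              # shrink the ext4 filesystem first...
sudo parted /dev/sda resizepart 1 19531250000s               # ...then shrink the partition to match
sudo parted /dev/sda mkpart raid 19531250001s 23437499999s   # carve the freed space out as sda2
# parted will warn about shrinking and about alignment with these illustrative numbers;
# in practice, align the new partition to 2048-sector boundaries
```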
Now with the 3x 2TB partitions, they can create their RAID5 initially:
```
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc1
```
Make an ext4 filesystem on md0, copy 4TB of data into it (2TB from sda1 and 2TB from sdb1), and verify the RAID5 is working properly.
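Something along these lines; mount points and directory names are just for illustration:

```
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid /mnt/old1
sudo mount /dev/md0 /mnt/raid
sudo mount /dev/sda1 /mnt/old1
sudo rsync -aHAX /mnt/old1/first-chunk/ /mnt/raid/first-chunk/   # repeat for the chunk coming off sdb1
```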
Once OP is happy with the data on md0, they can delete the copied data from sda1 and sdb1, shrink the filesystems there (`resize2fs`), expand sda2 and sdb2, expand sdc1, and grow the array (`mdadm --grow ...`).
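Each pass ends with roughly the following, assuming the member partitions have already been enlarged (in practice you may have to fail out, repartition, and re-add one member at a time and let it resync):

```
sudo mdadm --grow /dev/md0 --size=max   # let the array use the newly enlarged members
sudo resize2fs /dev/md0                 # then grow the ext4 filesystem on top of it
cat /proc/mdstat                        # watch the reshape/resync finish before starting the next pass
```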
Rinse and repeat; at the end of the process, they'd end up with all their data in the newly created `md0`, which is a RAID5 volume spanning all three disks.
Hope this is clear enough and that there is no more disconnect.
I’m afraid I don’t have an answer for that.
It is heavily dependent on drive speed and the number of times you'd need to repeat the process. Each time you copy data into the RAID, the array needs to write the data plus work out the parity; then, when you expand the array, it needs to be rebuilt, which takes more time again.
My only tangentially comparable experience at a similar scale is a RAID expansion of my RAID6 (so two parity drives here compared to one on yours) from 5x8TB, using 20 out of 24TB, to 8x8TB. These are shucked white-label WD Red equivalents, so 5k RPM, 256MB cache SATA drives. Since it was a direct expansion, I didn't need to do multiple passes of shrinking and expanding, etc., but the expansion itself took my server a couple of days to rebuild.
Someone else mentioned you could potentially move some data into the third drive and start with a larger initial chunk… I think that could help reduce the number of passes you’d need to do as well, may be worth considering.
They're going for RAID5, not 6, so with the third drive there's no additional requirement.
Say, for example, they have 2x 12TB drives with 10TB used on each (they mentioned they've got roughly 20TB of data currently). They can acquire a 3rd 12TB drive and create a RAID5 volume with 3x 1TB, giving them 2TB of space on the RAID volume. They can then copy 2TB of data into the RAID volume, 1TB from each of the existing drives, verify the copy worked as intended, delete the originals, shrink the filesystems on each of the old drives by 1TB, add the newly available 1TB on each into the RAID, rebuild the array, and rinse and repeat.
At the very end, there’d be no data left outside and the RAID volume can be expanded to the full capacity available… assuming the older drives don’t fail during this high stress maneuver.
Even if you could free up only 1GB on each of the drives, you could start the process with a RAID5 of 1GB per disk, migrate 2GB of data into it (1GB from each old disk), free up the space that creates on the old disks, expand the RAID, and rinse and repeat. It would take a very long time, and run a lot of risk due to the increased stress on the old drives, but it is certainly something that's theoretically achievable.
You aren’t wrong, but that’s also the point… It makes no difference if they’re securing a VPS or their own network. In fact, they’d need to secure both systems — and I’ve seen so many neglected VPS’s in my time… I’ll be the first to admit: myself included.
There are very valid reasons to need a tunnel; CGNAT, ISP level port blocking, network policies (ie campus dorm), etc etc etc. However, if you read the other replies, this doesn’t seem to be the case here, and OP doesn’t seem to even know why they’re hiding their IP. They just wanted to do it because of some loose notion that it may be nice since they’re opening up their port.
For someone in that situation, introducing a whole stack that punches through the firewall via a VPN or the like introduces way more risk than just locking down the gateway directly and handling the other issues as they come up.
Say someone wants to take your service down: you've got a 500Mbit line from your home ISP and 10Gbit on your VPS; they send 1Gbit of traffic to your VPS, your VPS happily tries to forward that 1Gbit, fully saturating your home line. Now you're knocked offline.
Say someone discovers the actual IP: dropping traffic from anything other than the VPS doesn't help if they just, again, flood your line with 500Mbit of traffic. The traffic still flows from the ISP to your gateway before it can be dropped.
Say someone wants to perform SQL injection on your website: there is no WAF in this stack to prevent that.
Say someone abuses a remote code execution bug in the application you're hosting to create a reverse shell into your system: this complex stack doesn't protect against that either.
You've provided a comprehensive guide, and I don't want to single you out for being helpful, but I must ask: what problem does this solve, and does OP actually have the problem this stack can solve? From the replies we've seen in this thread, OP doesn't have sufficient understanding of the full scope of the situation. Prescribing a well-intended solution might be helpful, but it gives a false sense of security that doesn't really help with the full picture.
You do not strictly need to open a port – tunnelling through another server could be a solution, but let’s park this for a moment.
What you are describing as "open a port in my firewall" is actually many smaller parts; some key ones that may be relevant here are:
- the firewall rule, which stops the router/host from dropping traffic arriving on that port;
- port forwarding, which tells the router which machine on your network that traffic should be sent to;
- the service itself (e.g. a web server), which has to be running and listening on that port.
All three things (amongst others that are not immediately relevant here) must be properly set up for any network request to happen. What do I mean by that? I can have a port not drop traffic (i.e. firewall down). When someone from outside my network tries to access the port, they get to my router, but nothing happens because there is nowhere for the packet to go. I can have my firewall down and port forwarding enabled, but the web server isn't running. When someone from outside my network tries to access the port, they get to my router and get forwarded to my computer, but because the web server isn't running, nothing happens. Someone from outside my network can only gain access to my service (and only that service) when all three are set up and working together.
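To put it concretely (ufw as the firewall is just an example; the port-forwarding step happens in the router's own admin UI):

```
sudo ufw allow 8080/tcp       # 1. firewall: stop dropping traffic to the port
                              # 2. port forwarding: on the router, WAN 8080 -> this machine's LAN IP, port 8080
sudo ss -tlnp | grep 8080     # 3. the service itself has to be running and listening on that port
```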
“But what about the hackers?”
Yes, the untrusted networks, such as the internet, could be a bad place with people with bad intentions. There are many different things they could do to make things undesirable; let’s explore some of them together.
Say we want to run an instance of Lemmy using new, experimental server software (i.e. not the official Lemmy server). Now, unfortunately, some racist people decide to come and make racist posts on our instance. A tunnel / proxy doesn't solve this. Instead, we have to ban their accounts. It may not seem like much, and it was completely innocuous to our system, but we've just dealt with our first attack.
One of those racist people happens to be the "scary hacker" type, so they come back and try to brute force our admin account's password to unban themselves. This is not too bad, but we need to address it somehow. A tunnel / proxy doesn't solve this; but something like Fail2Ban might be able to look at the login failures and put a temporary IP ban on the attacker.
They're back! And this time, they decide to repeatedly hammer the search function, taking all the resources from our database so our instance cannot serve other users. A tunnel / proxy doesn't solve this; but some rate-limiting configuration in the server application might help.
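For example, if the instance sits behind nginx, a hypothetical rate limit on the search endpoint could look like this (zone name, path, and backend address are made up for illustration):

```
# in the http {} block
limit_req_zone $binary_remote_addr zone=search:10m rate=5r/s;

# in the server {} block
location /search {
    limit_req zone=search burst=10 nodelay;
    proxy_pass http://127.0.0.1:8536;   # wherever the application actually listens
}
```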
They're not happy about getting rate limited there. So this time, they decide to continuously post garbage to our instance: not even normal requests, just connecting to our web server and spamming AAAAAAAAAAAAAA… non-stop, at such a pace that it fully saturates our network connection and we cannot do anything else on the network. A tunnel / proxy doesn't solve this; we'd need to block them at the firewall. (That's not entirely true: blocking them at the firewall doesn't fully solve the problem, because the traffic still flows from the ISP to the firewall, which gets saturated before the firewall can drop anything; but as an example it illustrates the problem well enough.)
They're angry now, and they pay a few bucks to a botnet to have many, many thousands of infected computers spam AAAAAAAAA… non-stop at our service. Again, a tunnel / proxy doesn't solve this; we'd need something smarter than just our firewall to ban the IP addresses individually. This is where we'd need the professionals, with typically commercial offerings.
It could escalate in the other direction. Instead of attacking with the aim of taking the service down, they could do other damaging things. Say they found a problem with our server software: instead of giving `/post/<postid>` a numeric id, they can do something fancy like `/post/1 AND 1=1; UPDATE users SET banned = FALSE WHERE username = 'racist-user'` and unban themselves. A tunnel / proxy doesn't solve this; but a Web Application Firewall (WAF) might.
Now it escalates further. Through a complex chain (an intentionally malformed image is uploaded to the instance, the image resizer trips over it while trying to resize it, and that causes remote code execution), they create a remote access trojan (RAT) shell so they can connect to our server and run commands. This is usually the "big bad" most people are scared of: someone from outside their network having access to their system, and thus the ability to extract their documents or encrypt their photos, etc. A tunnel / proxy doesn't solve this; but a WAF or an anti-virus on the server itself might.
Through this admittedly simplified but lengthy exploration, we can see that none of these would actually be addressed by a tunnel / proxy. There are other possible attacks, and they'd require yet other solutions.
So, going back to what I was saying earlier… it is important to know why you're trying to do something. Blindly prescribing a tunnel / proxy doesn't actually solve the problem.
What kind of attacks, against what service?
DDoS? It's cheaper to hire botnets to attack than to defend. You'd most likely still be knocked offline just by the amount of traffic that leaks through your proxy before the VM gets cut off at the data centre. Specifically: a data centre is much more likely to tolerate a higher traffic threshold before null-routing your VM than your residential ISP would be willing to.
Brute force on a shell? SQL injection? Remote code execution? Deploying the extra layer will not protect you from these, as your own proxy will not give you a WAF.
It is always important to know why you’re doing something, before anyone can prescribe a solution.
What is your objective for ‘hide server IP’?
Privacy, to disconnect your identity from the service? There is no solution to this. Full stop. Even with Tor, the state-backed acronym entities will figure it out if you get on their radar.
If your objective is to keep your service online, you’re going to be hard pressed to find cost effective alternatives… Commercial solutions are expensive, like, “if you have to ask about the price, you can’t afford it” expensive.
Alternatively, you can try to roll your own by having many many proxy servers yourself… but if you’ve got a target on your back, you’ll never have enough instances; DDOS-as-a-Service is much cheaper than the amount of reverse proxies required to keep your service online.
There’s probably other use cases, but chances are, you’d still be hard pressed to find a solution that’s cost effective.
Locks can happen at the registrar (e.g. Njalla, Cloudflare, Namecheap, etc.) or the registry (e.g. gen.xyz, Identity Digital, Verisign, etc.).
Typically, registry locks cannot be resolved through your registrar, and the registrant may need to work with the registry to see about resolving the problem. This can be complicated by Whois privacy, as you may not be considered the registrant of the domain.
In all cases, most registries do not take domain suspensions lightly and generally only lock over legal issues. Check your Whois record's EPP status codes for hints as to what may be happening.
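A quick way to look (domain is a placeholder):

```
whois example.com | grep -i 'status'
# e.g. "clientHold" is usually a registrar-side action, "serverHold" a registry-side one;
# the full list of EPP status codes is documented on ICANN's site
```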
So just because they don’t know technology like you do, they should be left behind the times instead of taking advantage of advancements? A bit elitist and gate keeping there, don’t you think?
Everyone has their own choices to make, and for most, they've already decided they'd rather benefit from the advancements than care about what you care about.
And here's the reason why a layman should not: they're much more likely to make that one wrong move and suffer irrecoverable data loss than to be harmed by some faceless corporation selling their data.
At the end of the day, those of us who are technical enough will take the risk and learn, but for the vast majority of people, it is and will remain a non-starter for the foreseeable future.
If memory serves, the default docker compose file publishes the database port with a basic hard-coded password, too. So imagine using the compose file without reading it too closely; next thing you know, you're running a free Postgres database for the world.
Edit: yep, still publishing the db port with hard coded password…
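For anyone following along, the pattern being described looks roughly like this; a hypothetical excerpt, not the project's actual compose file:

```
services:
  db:
    image: postgres:16
    ports:
      - "5432:5432"                  # published on every interface; remove this, or bind it to 127.0.0.1
    environment:
      POSTGRES_PASSWORD: changeme    # hard-coded default; set your own, or use secrets
```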
No multi-region unless you roll it yourself. Their offerings are primarily web-hosting centric, so you'd need to do the heavy lifting yourself if you want more infra. Also worth noting that they're definitely not in the same league as the big players; they're just an old vendor that isn't likely to disappear on you.
There are two ways around the symptoms you're trying to treat:
Probably worth calling out that although option 1 feels like there are more hops (and there absolutely are), with any decent internet connection you're probably not going to feel it. The edge server is usually situated very close to your ISP (that's how they make sure everything responds quickly), so your overall round trip should only be affected by a negligible amount of time that you most likely won't notice.
Amazing stuff. Thank you so much!