• 2 Posts
  • 44 Comments
Joined 1Y ago
Cake day: Jul 01, 2023


I didn’t really mention immich directly here.

This is a problem endemic to casual software development, as in many FOSS projects. It's a reality of how free software tends to be built in general versus commercial software.


The issue here is that these are solvable problems; release compatibility isn't a new problem. It's just a problem that takes dedicated effort to solve for, like any other feature.

This is something FOSS apps tend to lack, simply due to the nature of how contributions tend to work for free software. Which is an unfortunate reality, but a reality nonetheless.


People really underestimate the value of stability and predictability.

There are some amazing FOSS projects out there run by folks who don't give a crap about stability or the art of user experience. It holds them back, and unfortunately helps drive a fragmented ecosystem where we get two, three, or five major projects all trying to do the same thing.


Because the majority of my traffic and services are internal with internal DNS? And I want valid HTTPS certs for them, without exposing my IP in the DNS for those A records.

If I don't care about leaking my IP in my A records then this is pretty easy. However, I don't want to do this for various reasons. One of those is that I engage in security-related activities and have no desire to put myself at risk by leaking.

Even for services that I expose to the internet, I still don't want my local network traffic to go out to the internet and back when there is no need for that. SSL termination at my own internal proxy solves that problem.

I now have this working by using the Cloudflare DNS ACME challenge. For the services I expose to the internet, Cloudflare provides HTTPS termination and then communicates with my proxy, which also provides HTTPS termination. My internal communication with those services is terminated at my proxy.



I stated in the OP that Cloudflare HTTPS is off :/

I'm not using Cloudflare for the certificate. I also can't use Cloudflare for certificates anyway for internal services that go through a loopback.

Similarly, you can have SSL termination at multiple layers. That works; I have services that proxy through multiple SSL terminations. The issue I'm having is that the ACME challenge seems to be failing. These issues are documented and explained in various GitHub threads, but the solutions are different and convoluted for different environments.

This is why I’m asking this question here after having done a reasonable amount of research and trial and error.


I am doing SSL termination at the handoff, which is the Caddy proxy. My internal servers have their SSL terminated at Caddy; my traffic does not go to the internet… It loops back from my router to my internal network.

However, DNS still needs to have subdomains in order to get those certificates, hence Cloudflare DNS. I do not want my IP to be associated with the subdomains, thus exposing it, hence the Cloudflare proxy.

> You're seeing the errors because the proxy backend is being told to speak HTTPS with Caddy, and it doesn't work like that.

You can have SSL termination at multiple points. Cloudflare can do SSL termination, and Cloudflare can also connect to your proxy, which has its own SSL termination. This is allowed, this works, and I have services that do this already. You can have SSL termination at every hop if you want, with different certificates.

That said, I have Cloudflare SSL off, as stated in the OP. Cloudflare is not providing a cert, nor is it trying to communicate with my proxy via HTTPS.

Contrary to your statement about this not working that way, Cloudflare has no issues proxying to my proxy, where I already have valid certs. Or even self-signed ones, or even no certs. The only thing that doesn't work is the ACME challenge…


Edit: I have now solved this by using the Cloudflare DNS ACME challenge. Cloudflare SSL is turned back on. Everything works as expected now: external clients terminate SSL at Cloudflare, Cloudflare communicates with my proxy through HTTPS, and internal clients terminate SSL at Caddy.


I cannot seem to figure out how to get Caddy automatic HTTPS to work behind the Cloudflare proxy.

Hopefully you all can help! I've been through _hundreds_ of threads over the last few days trying to puzzle this out, with no luck.

**The problem:**

1. Caddy v2 with the HTTP-01 ACME challenge (changed from the TLS-ALPN challenge)
2. Cloudflare DNS with proxy ON
3. All Cloudflare HTTPS is off
4. This is a .co domain

Any attempt to get certificates fails with an invalid challenge response. If I try to navigate (or curl) to the challenge directly, I always get SSL validation errors, as if all the requests are trying to upgrade to HTTPS. I'm kind of at my wit's end here and am running out of things to try.

If I turn the Cloudflare proxy off and go back to the TLS-ALPN challenge, everything works as expected. However, I do not wish to expose myself directly and want to use the proxy. What should I be doing?

__________

**I have now solved this by using the Cloudflare DNS ACME challenge.** Cloudflare SSL is turned back on. Everything works as expected now: external clients terminate SSL at Cloudflare, Cloudflare communicates with my proxy through HTTPS, and internal clients terminate SSL at Caddy.
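For reference, the DNS-challenge fix can be sketched in a Caddyfile like the one below. This is a hypothetical example, not my exact config: the domain, upstream address, and token variable are placeholders, and it assumes a Caddy build that includes the github.com/caddy-dns/cloudflare plugin (e.g. built with xcaddy).

```
# Hypothetical sketch: solve ACME via Cloudflare's DNS API instead of
# HTTP-01, so the Cloudflare proxy never has to pass a challenge through.
*.example.co {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	# placeholder internal upstream
	reverse_proxy 192.168.1.10:8080
}
```

With the DNS challenge, certificate issuance never depends on inbound HTTP at all, which is why it sidesteps the proxy entirely.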

The comment two above this links to a tool that literally does live syncing on a line-by-line level. Unless you're editing the same lines at the same time, you're not going to get sync conflicts.

I use it as well and it works wonderfully in real time.


I very specifically want an app that collates all the information that can possibly be gathered about me in a way that I can utilize and abuse it myself. For me there is a lot of utility and value to be found with this sort of thing.

Of course the security posture of said app needs to be rather robust. And instead of it being an app it should instead be an SDK that I can then choose and control my own storage medium for.




Another risk with Monitor, which may get better with time, is that FOSS Rust projects have a tendency to slow down or even stall due to the time cost of writing features, and the very small dev community available to pick up the slack when original creators/maintainers drop off, burn out, or get too busy with life.

To be clear: I have nothing against Rust. It's a fantastic language filling a crucial gap that's existed for decades. However, it's ill-suited for app development; that's just not its strength.


Why are you here if you're just going to insult hobbyists in a community dedicated to hobbyists?

This isn’t the kind of vibe /c/selfhosted needs


Ok…

So your point is that a bad logging implementation is bad. And I agree.

I'm not seeing how that's extendable to implementations as a whole. You're conflating your bad experience with "log aggregation is bad".

Just because your company sucks at this doesn’t mean everyone else’s does.


Yeah, ofc it is.

I’m working in a system that generates 750 MILLION non-debug log messages a day (And this isn’t even as many as others).

Good luck grepping that, or making heads or tails of what you need.

We put a lot of work into making the process of digging through logs easier. The absolute minimum we can do is dump it into Elastic so it's available in Kibana.

Similarly, in a K8s environment you need to get logs off of your pods ASAP, because pods are transient and disposable. There is no guarantee that a particular pod will live long enough to have introspectable logs on that particular instance. (Of course, there is some log aggregation available in your environment that you could grep, but its actual usefulness is questionable, especially if you don't know what you need to grep for.)

There are dozens, hundreds more problems that crop up as you scale the number of systems and the people working on those systems.
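To make the scale point concrete: the usual first step toward useful aggregation is emitting logs as structured records rather than free-form text, so a backend like Elastic can index fields instead of forcing people to grep. Here's a minimal sketch in Python using only the standard library; the field names are illustrative, not any particular company's schema.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line, so an
    aggregator (Elastic, Loki, etc.) can index fields like level
    and logger name instead of relying on free-text grep."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Emits a single JSON line the aggregator can index by field.
log.info("payment accepted")
```

At 750 million messages a day, being able to query `level:ERROR AND logger:orders` instead of grepping raw text is the whole ballgame.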



Yeah I had literally no idea what you were talking about until you mentioned the actual name in the comments.

NPM almost universally refers to Node Package Manager in any developer or development-adjacent conversation, in my experience. The fact that the site, the command, the logo, and the binaries are all "npm" makes that more appropriate.

Nginx Proxy Manager is far too niche to be referred to universally by acronym, when it's only ever called that in contexts where the acronym has already been defined (i.e. in its documentation).

This becomes much more clear when you Google the acronym.


It is, but it's also worrisome since it means support is harder, which means the risk of abandonment is higher and community contributions lower. Which means "buying in" is riskier for the time investment.

Not really criticizing; 10/10 points for making something and putting it out there, nothing wrong with that. I'm just a user who's seen too many projects become stale or abandoned, and I've noticed that the trend has some correlation to the technology choices those projects made.


I've been looking for a platform for a personal blog, portfolio, and whatnot that's kind of fun to play with, without having to build the whole thing myself.

What’s your opinion of this project?


As of today I'm actually in a lucky position where I am now able to set up a secondary NAS at my brother-in-law's and use that as a backup server that I can back up to essentially in real time.

All it’ll cost me is the hardware and the electricity.


Yes.

I’m sure one can reasonably infer that I do not mean 30 meters.

Conveniently, at highway speeds, 30 minutes and 30 miles away are essentially equal.

I’ll try and use appropriate notation next time


I might be crazy, but I have a 20TB WD Red Pro in a padded, waterproof, locking case that I take a full backup on and then drive over to a family member's 30m away once a month or so.

It’s a full encrypted backup of all my important stuff in a relatively different geographic location.

All of my VM data backs up hourly to my NAS as well. Which then gets backed up onto the large drive monthly.

Monthly granularity isn't that good, to be fair, but it's better than nothing. I should probably back up the more important, rapidly changing stuff online daily.


The error posted in the app is from the website itself. It’s likely that the password manager is injecting something into the page which is causing errors.

There are many ways for this to go wrong, it has nothing to do with the web service itself.


We're on Lemmy; are they afraid of being censored because they're writing software catered to NSFW uses?

Other social media's chilling effects are pretty deeply ingrained, unfortunately…


And there's also the Live Sync extension, which allows you to have live document syncs in real time via your own self-hosted CouchDB instance.


PHP for sure can have a negative effect depending on how they are handling their data access, though.

The application code itself running on PHP probably isn’t a problem but the influence that PHP may have over your data access patterns can be a source of significant performance problems.



Yeah, I use firstname@thelastnames.co

And EVERY DAMN PERSON corrects .co to .com

Unfortunately the .com and .net are both used.


It’s not necessarily a moving target when entire blocks can be associated with Google.


Depends on how you're using it. You can wring an absolutely insane amount of performance out of Postgres that you cannot with MySQL.

I wonder how much Nextcloud leaves on the table?


Sounds like a common software issue. All the features were developed to 80%, and then the devs moved on to the next feature, leaving that last, difficult, time-consuming 20% open and unfinished.

It's the difference between more corporate or enterprise projects and FOSS projects in a lot of ways. Even once a project matures and becomes a more corporate product, the same attitude towards completeness and correctness tends to persist.

(Not saying FOSS is bad, just that the bar tends to be lower, in my experience of building software, for many legitimate reasons.)

It’s “cultural” in a way depending on the project.


And Android TV, it’s gotten better, but generally still sucks.

I use Jellyfin because it's FOSS, private, and written in a tech stack I'm very familiar with, not because it's better than Plex, because it really isn't.


That's my issue with Prometheus… I want to have solid monitoring and metrics, but there's so much setup, and I feel like I'm just hosing it all up.




And here I am running NFS as the backing storage on an R720xd for 4 other M630 VM hosts.

Connected via SFP+ DAC. I get max bandwidth saturation, and ~65k IOPS!! NFS is great 😅

You could use things like AWS S3 or similar offerings from other providers like DigitalOcean. They have plenty of documentation guiding you through how they work.


So, essentially, really poorly written malware? Given the number of assumptions it makes without any sort of robustness around system configuration, it's about as good as any first-pass bash script.

It’d be a stretch to call it malware, it’s probably an outright fabrication to call it a virus.


You say this and are downvoted.

While we are coming off the tail of DEF CON, where there were a plethora of small talks and live examples of taking advantage of and abusing just this.


Seconding obsidian.

And you can self host the live sync plugin to keep all devices in sync with each other.


Second obsidian. And if you want to self-host a sync for it you can.

There's a self-hosted sync plugin that lets you sync changes between many devices, with a CouchDB instance handling it all.

It works pretty smoothly, and keeps my computers and my phone in sync as long as I'm on the LAN or VPN.
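If anyone wants to try it, the CouchDB side can be sketched with a compose file like this. Treat it as a hypothetical starting point: the credentials, port, and volume path are placeholders, and the LiveSync plugin's own docs cover the extra CouchDB settings (CORS, request size limits) it expects.

```yaml
# Hypothetical sketch: a single-node CouchDB for the Obsidian LiveSync
# plugin. Change the placeholder credentials before exposing anything.
services:
  couchdb:
    image: couchdb:3
    restart: unless-stopped
    ports:
      - "5984:5984"
    environment:
      COUCHDB_USER: admin
      COUCHDB_PASSWORD: change-me
    volumes:
      - ./couchdb-data:/opt/couchdb/data
```

Then point the plugin at `http://your-host:5984` over the LAN or VPN, using the same credentials.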


Rarbg Database Dump DMCA’d off GitHub
A good example of why GitHub and similar sites/services are not reliable or good places to publicize this sort of data. It seems kind of dubious that the DB could be DMCA'd for containing copyrighted videos, when it actually doesn't 🤔