
The article you posted is from 2023, and PERA was basically dropped. However, this article talks about PREVAIL, which would prevent patents from being challenged by anyone except the people sued by the patent holder, and it’s still relevant.



Do you only experience the 5-10 second buffering issue on mobile? If not, you might be able to fix it by tuning your Nextcloud instance: upping the memory limit, disabling debug mode, dropping the log level back to warn if you ever changed it, enabling memory caching, etc.

Check out https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html and https://docs.nextcloud.com/server/latest/admin_manual/installation/php_configuration.html#ini-values for docs on the above.
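
As a concrete sketch of a few of those knobs (the values are reasonable starting points rather than official recommendations, and www-data is just the typical web server user - adjust for your setup):

  # php.ini - the tuning docs suggest raising the PHP memory limit
  memory_limit = 512M

  # from the Nextcloud root: disable debug mode, reset the log level to warn,
  # and enable APCu memory caching
  sudo -u www-data php occ config:system:set debug --value=false --type=boolean
  sudo -u www-data php occ config:system:set loglevel --value=2 --type=integer
  sudo -u www-data php occ config:system:set memcache.local --value='\OC\Memcache\APCu'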


You could’ve scrolled down to the bottom, clicked on “Links,” then clicked on the repo link.

The repo has instructions to install a Snap or build from source. If you build from source, it looks like you should download an archive from the releases page rather than just pulling from master.


Open-Webui published a docker image that has a bundled Ollama that you can use, too: ghcr.io/open-webui/open-webui:cuda. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
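
For reference, the run command from those docs follows this general pattern (the volume and container names are from their example - double-check the exact flags against the linked page):

  docker run -d -p 3000:8080 --gpus=all \
    -v ollama:/root/.ollama \
    -v open-webui:/app/backend/data \
    --name open-webui --restart always \
    ghcr.io/open-webui/open-webui:cuda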


I made a typo in my original question: I was afraid of taking the services offline, not online.

Gotcha, that makes more sense.

If you try to run the reverse proxy on the same server and port that an existing service is using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use, and likewise if you use the same outbound port from your router. But IME those issues will mostly just stop the new service from starting - you’d have to stop the existing services or restart your machine for the new service to have a chance to grab the ports while they’re unused. Otherwise I can’t think of any issues.


I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and cause me various headaches that I’m not really in the headspace for at the moment.

If you don’t configure your other services in the reverse proxy, then you have nothing to worry about. I don’t know of any proxy that auto-discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
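
For illustration, opting a container in to Traefik looks roughly like this in a compose file (whoami.example.com and the “proxy” network name are made up):

  services:
    whoami:
      image: traefik/whoami
      networks:
        - proxy # assumption: a Docker network Traefik is also attached to
      labels:
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        - traefik.http.services.whoami.loadbalancer.server.port=80

  networks:
    proxy:
      external: true # e.g., created beforehand with: docker network create proxy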

Are you running this on your local network? If so, then unless you forward the port your reverse proxy is serving on to your server, it’ll only be accessible from the local network. That means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port to confirm it’s working as expected before forwarding the port.


Paired with allowing people who own the original to upgrade for $10 (and I’m assuming something similar in the UK) when they’re charging $50 for the remaster if you don’t have the original, that makes sense. They’re just closing a loophole.

I’d much rather they double the existing game’s price than charge $25-$30 for the upgrade, or not offer an upgrade at all.

It sucks for anyone who’d been planning to play the original and who just hadn’t bought it yet, but used prices for discs should still be low, so only the subset of those people who have disc-less machines are really impacted.


I don’t know that a newer drive cloner will necessarily be faster. Personally, if I’d successfully used the one I already have and wasn’t concerned about it having been damaged (mainly due to heat or moisture) then I would use it instead. If it might be damaged or had given me issues, I’d get a new one.

After replacing all of the drives, you’ll need to tell it to use their full capacity. From reading an answer to this post, it looks like you’ll need to select “Change RAID Mode,” keep RAID 1 selected, keep the same disks, and then on the next screen move the slider to use the drives’ full capacities.


upper capacity

There may be an upper limit, but Amazon lists a 72 TB version, which across four bays means it has to ship with at least 18 TB drives. If 18 TB is fine, 20 TB is probably fine too, but I couldn’t find any reports from people saying they’d loaded 20 TB drives into theirs without issue.

procedure

You could also clone them yourself, but you’d want to put the NAS into read only mode or take it offline first.

I think cloning drives is generally faster than rebuilding them in RAID, as well as easier on the drives, but my personal experience with RAID is very limited.

Basically, what I’d do is:

  1. Take the NAS offline or make it read-only.
  2. Pull drive 0 from the array
  3. Clone it
  4. Replace drive 0 with your clone
  5. Pull drive 2 (from the other mirrored pair) from the array
  6. Clone it
  7. Replace drive 2 with your clone
  8. Clone drive 0 again, then replace drive 1 with your clone
  9. Clone drive 2 again, then replace drive 3 with your clone
  10. Put the NAS back online or make it read-write again.

In terms of timing… I have a Sabrent offline cloning hub (about $50 on Amazon), and it copies data at 60 MB/s, meaning it’d take about 9 hours per clone. StarTech makes a similar device ($96 on Amazon) that allegedly clones at 466 MB/s (28 GB per minute), meaning each clone would take about 2.5 hours… but people report it being just as slow as the Sabrent.

Also, if you bought two offline cloning devices, you could do steps 2-4 and 5-7 simultaneously, and do the same with steps 8 and 9.

I’m not sure how long it would take RAID to rebuild a pulled drive, but my understanding is that it’s going to be fastest with RAID 1. And if you don’t want to make the NAS read-only while you clone the drives, it’s probably your only option, anyway.


Which system(s) are you playing on?


Good to know! I saw that mentioned on some (apparently outdated) Comodo marketing copy as a benefit over LE.


EV certs give you an extra green bar or something along those lines. If your customers care about that, then you have to get one. If they don’t - and they probably don’t - it’s a waste.


What exactly are you trusting a cert provider with and what are the security implications?

End users trust the cert provider. The cert provider has a process that they use to determine if they can trust you.

What attack vectors do you open yourself up to when trusting a certificate authority with your websites’ certificates?

You’re not really trusting them with your certificates. You don’t give them your private key or anything like that, and the certs are visible to anyone navigating to your website.

Your new vulnerabilities are basically limited to what you do for them - any changes you make to your domain’s DNS config, or anything you host, etc. - and depend on that introducing a vulnerability of its own. You also open a new phishing attack vector, where someone might contact you, posing as the certificate authority, and ask you to make a change that would introduce a vulnerability.

In what way could it benefit security and/or privacy to utilize a paid service?

For most use cases, as far as I know, it doesn’t.

LetsEncrypt doesn’t offer EV or OV certificates, which you may need for your use case. However, these are mostly relevant at the enterprise level. Maybe you have a storefront and want an EV cert?

LetsEncrypt also only offers community support, and if you set something up wrong you could be less secure.

Other CAs may offer services that enhance privacy and security, as well, like scanning your site to confirm your config is sound… but the core offering isn’t really going to be different (aside from LE having intentionally short renewal periods), and theoretically you could get those same services from a different vendor.



while (true) { print money; }

Someone’s never heard of Bitcoin


I use --format-sort +res:1080, which, if my understanding of the documentation is correct, will make it prefer 1080p, the smallest video larger than 1080p if 1080p isn’t available, or the largest video if nothing 1080p or larger is available.

res is the smallest dimension of the video (so for a 1080x1920 portrait video, it would be 1080).

Default sort is descending order. The + makes it sort in ascending order instead.
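
Putting that together, a full invocation looks like this (the URL is just a placeholder):

  yt-dlp --format-sort "+res:1080" "https://www.youtube.com/watch?v=VIDEO_ID"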


Does your script handle bi-directional sync or one-way only?


Doesn’t their API also require you to allow-list IPs, making it basically useless for dynamic DNS?

From https://www.namecheap.com/support/api/intro/ under “Whitelisting IP.”



Should I apologize for hurting your feelings by suggesting that Windows is bad?



Every single App Store out there uses “free” to refer to proprietary software today, because it’s free.

“Free” as an adjective isn’t the issue. The issue is the phrase “free software” being used to refer to things other than free software. And afaict, no app store uses the term “free software” to refer to non-free software.

The iOS App Store refers to “Free Apps.”

Google Play doesn’t call it “Free Software,” either; they just use it as a category / filter, e.g., “Top Free.”

There’s a reason many are … starting to refer to such software as “libre”, not “free”

Your conclusion is incorrect - this is because when used outside of the phrase “free software,” the word is ambiguous. “Software that is free” could mean gratis, libre, or both.


There is no path to any future where someone will be wrong to use the word “free” to describe software that doesn’t cost anything.

Setting aside that doing so is already misleading, you clearly lack imagination if you cannot think of any feasible way for that to happen.

For example, consider a future where use of the phrase when advertising your product could result in legal issues. That isn’t too far-fetched.

They don’t become invalidated. They’re not capable of becoming invalidated.

They certainly can. A given meaning of a word is invalidated if it is no longer acceptable to use it in a given context for that meaning. In a medical context, for example, words become obsolete and unacceptable to use.

Likewise, it isn’t valid to say that your Aunt Edna is “hysterical” because she has epilepsy.

But more importantly, that’s all beside the point. Words don’t just have meaning in isolation - context matters. Phrases can have meanings that are different than just the sum of their parts, and saying a phrase but meaning something different won’t communicate what you meant. If you say something that doesn’t communicate what you meant, then obviously, what you said is incorrect.

“Free software” has an established meaning (try Googling it or looking it up on Wikipedia), and if you use it to mean something different, people will likely misunderstand you and/or correct you. They’re not wrong in this situation - you are.

That, or you’re trying to live life like a character from Airplane!:

This woman has to be gotten to a hospital.

A hospital? What is it?

It’s a big building with patients, but that’s not important right now.


Thought you were talking about Linux at first.

I use Windows, Linux, and macOS - my opinion is that Windows is the least user-friendly of the bunch.


Probably a client issue. I see a note that the user deleted their own comment.


it is literally impossible for it to ever not be objectively correct

And yet here you are, using “literally” to mean “figuratively.” Excuse me for not accepting your linguistic authority on the immutability of other words.


I’m not the person you replied to, and I don’t use Photoshop, but I used to use GIMP exclusively and I use the Affinity suite now. Here’s a major area where GIMP is lacking that I’ve seen pop up in discussions going back several years at this point:

Photoshop supports nondestructive editing, and Affinity supports nondestructive RAW editing (and even outside RAW editing, it still supports things like filter layers). Heck, my understanding is Krita has support for nondestructive editing, too.

GIMP, on the other hand, has historically only had destructive editing. It looks like they finally added an initial implementation back in February. That’s great, and once GIMP 3.0 releases and that feature is fully supported, then GIMP will be a viable alternative for workflows that require it.


But being rude and abusive to support staff doesn’t help, encourage, or even compel the support staff to do their jobs any better or faster. In fact, I’d wager it’s rather the opposite.

I work in IT (not IT support, though) and I’m fortunate enough that none of my business partners are outright abusive. Even so, I still have some that I deprioritize compared to others because working with them is a pain (things like asking for project proposals to solve X problem and never having money to fund them). If someone was actively rude to me when I had fucked up, much less when I was doing a great job, I can guarantee I wouldn’t work any better or faster when it was for them.


Reverse proxies aren’t DNS servers.

The DNS server is configured to know that your domain, e.g., example.com or *.example.com, points to a particular IP; when someone navigates to that URL, it tells them the IP, and they send their request there.

The reverse proxy runs on that IP; it intercepts and analyzes the request. This can be as simple as transparently forwarding jellyfin.example.com to the specific IP (could even be an internal IP address on the same machine - I use Traefik to expose Docker network IPs that aren’t exposed at the host level) and port, but they can also inspect and rewrite headers and other request properties and they can have different logic depending on the various values.
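
To make the transparent-forwarding case concrete, here’s a minimal sketch in nginx terms (the hostname, IP, and port are made up; nginx is just a common example, and the same idea applies to Traefik and others):

  server {
      listen 80;
      server_name jellyfin.example.com;

      location / {
          # hand everything off to the internal Jellyfin host
          proxy_pass http://192.168.1.50:8096;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }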

Your router is likely handling the .local “domain” resolution and that’s what you’ll need to be concerned with when configuring AdGuard.



If your Java dev is using Jackson to serialize to JSON, they might not be very experienced with Jackson, or they might think that a Java object with a null field would serialize to JSON with that field omitted. On another project that might even have been true, because Jackson can be configured globally to omit null properties. They can also fix this at the class/field level with annotations, most likely @JsonInclude(Include.NON_NULL).

More details: https://www.baeldung.com/jackson-ignore-null-fields
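
As a sketch, the per-class version looks like this (the DTO is made up for illustration):

  import com.fasterxml.jackson.annotation.JsonInclude;
  import com.fasterxml.jackson.annotation.JsonInclude.Include;

  // Null fields on instances of this class are omitted from the serialized JSON.
  @JsonInclude(Include.NON_NULL)
  public class UserDto {
      public String name;
      public String nickname; // left out of the JSON when null
  }

The global equivalent is objectMapper.setSerializationInclusion(Include.NON_NULL).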


Third, a redirect is obvious

A redirect isn’t necessary if you control the DNS servers. If you control the DNS servers, you can MITM the website for any visitor because you can prove that you own the domain to a certificate authority and generate a new, trusted HTTPS cert. (Depending on specifics this may or may not foil the anti-phishing capabilities of Passkeys / U2F.)


They aren’t. From a comment on https://www.reddit.com/r/ublock/comments/32mos6/ublock_vs_ublock_origin/ by u/tehdang:

For people who have stumbled into this thread while googling “ublock vs origin”. Take a look at this link:

http://tuxdiary.com/2015/06/14/ublock-origin/

"Chris AlJoudi [current owner of uBlock] is under fire on Reddit due to several actions in recent past:

  • In a Wikipedia edit for uBlock, Chris removed all credits to Raymond [Hill, original author and owner of uBlock Origin] and added his name without any mention of the original author’s contribution.
  • Chris pledged a donation with overblown details on expenses like $25 per week for web hosting.
  • The activities of Chris since he took over the project are more business and advertisement oriented than development driven."

So I would recommend that you go with uBlock Origin and not uBlock. I hope this helps!

Edit: Also got this bit of information from here:

https://www.reddit.com/r/chrome/comments/32ory7/ublock_is_back_under_a_new_name/

TL;DR:

  • gorhill [Raymond Hill] got tired of dozens of “my facebook isnt working plz help” issues.
  • he handed the repository to chrismatic [Chris Aljioudi] while maintaining control of the extension in the Chrome webstore (by forking chrismatic’s version back to himself).
  • chrismatic promptly added donate buttons and a “made with love by Chris” note.
  • gorhill took exception to this and asked chrismatic to change the name so people didn’t confuse uBlock (the original, now called uBlock Origin) and uBlock (chrismatic’s version).
  • Google took down gorhill’s extension. Apparently this was because of the naming issue (since technically chrismatic has control of the repo).
  • gorhill renamed and rebranded his version of ublock to uBlock Origin.

Have you looked into configuring them directly from your NVR? Or third-party options? I did a quick search and saw a list of several that, as far as I can tell, can display Reolink streams (though I haven’t confirmed any can configure the cameras):

And some proprietary options that have native Linux builds:


If you use that docker compose file, I recommend you comment out the build section and uncomment the image section in the lemmy service.

I also recommend you use a reverse proxy and Docker networks rather than exposing the postgres instance on port 5433, but if you aren’t familiar with Docker networks you can leave it as is for now. If you’re running locally and don’t open that port in your router’s firewall, it’s a non-issue unless there’s an attacker on your LAN. But given that you’re not gaining anything from exposing it (unless you regularly need to connect to the DB directly - as a one-off you could temporarily add the port mapping), it doesn’t make sense to increase your attack surface for no benefit.
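
The gist of the Docker networks approach, as a sketch (the network name is a placeholder, the image tags should be pinned to real versions, and the rest of your compose file stays as it is):

  services:
    lemmy:
      image: dessalines/lemmy:latest
      networks:
        - internal
    postgres:
      image: postgres:15-alpine
      networks:
        - internal
      # no "ports:" mapping - nothing outside Docker can reach postgres,
      # but lemmy can still connect to it via the hostname "postgres"

  networks:
    internal: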


I haven’t personally used any of these, but looking them over, Tipi looks the most encouraging to me, followed by Yunohost, based largely on the variety of apps available but also because it looks like Tipi lets you customize the configuration much more. FreedomBox doesn’t seem to list the apps in their catalog at all, and their site seems basically useless, so I ruled it out on that basis alone.


I am trying to avoid having to have an open port 22

If you’re working locally you don’t need an open port.

If you’re on a different machine but on the same network, you don’t need to expose port 22 via your router’s firewall. If you use key-based auth and disable password-based auth then this is even safer.
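
The relevant /etc/ssh/sshd_config lines for that, as a sketch (restart sshd after editing, and confirm your key works first so you don’t lock yourself out):

  PubkeyAuthentication yes
  PasswordAuthentication no
  PermitRootLogin prohibit-password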

If you want remote access, you still don’t have to expose port 22 as long as you have a VPN set up.

That said, you don’t need to use a terminal to manage your docker containers. I use Portainer to manage all but my core containers - Traefik, Authelia, and Portainer itself - which are all part of a single docker compose file. Portainer stacks accept docker compose files so adding and configuring applications is straightforward.

I’ve configured around 50 apps on my server using Docker Compose with Portainer but have only needed to modify the Dockerfile itself once, and that was because I was trying to do something that the original maintainer didn’t support.

Now, if you’re satisfied with what’s available and with how much you can configure it without using Docker, then it’s fine to avoid it. I’m just trying to say that it’s pretty straightforward if you focus on just understanding the important parts, mainly:

  • docker compose
  • docker networks
  • docker volumes

If you decide to go that route, I recommend TechnoTim’s tutorials on YouTube. I personally found them helpful, at least.


This is a very surface-level overview of the frameworks it covers. The title is a bit of a reach, as the article wouldn’t give anyone enough information to make a more educated decision about which framework to use.

Are you the author? I think it could be improved by including:

  • metrics - number of apps that use each, number of job openings, GitHub stars
  • who backs each project, and how much can we trust them to continue developing it in a way that’s friendly to developers
  • for React specifically, a bit more info on the prominent frameworks - Next.js, Vite, Gatsby, CRA/CRACO, or ejected CRA - since the difference between them is substantial
  • a high level description of the use case that the framework is designed for, as well as use cases where it isn’t well suited or has drawbacks.
  • how does the development experience differ? Is there a lengthy build step? Does it offer hot reloading? Does it come with a built-in linter or integrate easily with one?
  • Does it have a bundled testing framework, and how does that compare to other offerings? For example, CRA comes with Jest, and it can be a pain to configure Jest to properly handle all of your dependencies - it doesn’t use the same build pipeline as your app and will fail if you’re using newer dependencies that use import statements instead of module.exports and you don’t individually configure each one (see the sketch after this list). Vitest, by contrast, uses the same build pipeline as Vite.
  • Ease of writing unit tests, component tests, and e2e tests (even if that means pulling in another library)
  • ease of use with or without TypeScript
  • some more substantial example apps per framework, like a to-do list that uses a simple API (preferably the same API in all cases). Currently the examples don’t even show what the code looks like with basic styling
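
Regarding the Jest point above, the usual fix looks something like this - a sketch assuming a standalone Jest setup (CRA restricts which keys you can override), with some-esm-dep standing in for whichever ESM-only dependency is breaking:

  // jest.config.js
  module.exports = {
    transformIgnorePatterns: [
      // transform this dependency instead of skipping node_modules entirely
      'node_modules/(?!(some-esm-dep)/)',
    ],
  };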

If you are the author, I saw your article on TypeScript and would also like to say that you can configure your linter not to warn about using any (by turning off the no-explicit-any rule). There’s also the noImplicitAny compiler option for when you’re fine with explicit any types but don’t want, for example, responses from API calls to have that type by default.
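
In config terms, that combination is roughly (a sketch - the rule name is from typescript-eslint, the option from tsconfig):

  // tsconfig.json
  { "compilerOptions": { "noImplicitAny": true } }

  // .eslintrc fragment
  { "rules": { "@typescript-eslint/no-explicit-any": "off" } }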