
it doesn’t unravel the underlying complexity of what it does… these alternative syntaxes tend to make some easy cases easy, but they have no idea what to do with more complicated cases

This can be said of any higher-level language or API - there is always a cost to abstraction. Binary -> Assembly -> C -> Python: as you go up that chain, many things get easier, but some things become impossible. You always have the option to drop down, though, and these regex tools are no different. Software development, sysops, devops, etc. are full of compromises like this.


Thanks! I am in the US. I’ve never heard of RH, but I’ll definitely check them out.


Tips for getting contract work
I'm looking for part-time and/or short-term contract work, but I'm having a hard time because all the major job sites either have no ability to filter for it, or posters select every option so their post shows up in every search. Does anyone have any tips on how to find this kind of work? Is it best to source it on my own, or are there good agencies to work with? I'm open to any kind of developer role (I've done backend and full stack), and to mentoring/tutoring as well.

I noted in another comment that SearXNG can’t do anything about trackers that your browser can’t already do, and solving this at the browser level is a much better solution, because it protects you everywhere, rather than just on the search engine.

Routing over Tor is similar. Yes, you can route the search from your SearXNG instance to Google (or whatever upstream engine) over Tor, and hide your identity from Google. But then you click a link, and your IP connects to the IP of whatever site the results link to, and your ISP sees that. Knowing where you land can tell your ISP a lot about what you searched for. And the site you connected to knows your IP, so they get even more information - they know every action you took on the site, and everything you viewed. If you want to protect all of that, you should just use Tor on your computer, and protect every connection.

This is the same argument for using Signal vs WhatsApp - yes, in WhatsApp the conversation may be E2E encrypted, but the metadata about who you’re chatting with, for how long, etc. is all still very valuable to Meta.

To reiterate/clarify what I’ve said elsewhere: I’m not making the case that people shouldn’t use SearXNG at all, only that its privacy claims are overstated. If your goal is privacy, all the levels of security you would apply to SearXNG should be applied at the device level: use a browser/extension to block trackers, use Tor to protect all your traffic, etc.


Can you describe your use case more?

I don’t think the format matters - if you’ve got multiple processes writing simultaneously, you’ll have the potential for corruption. What you want is a lock file. Basically, you create a separate file called something like myprocess.lock. When a process wants to write to the shared file, it checks whether the lock file exists - if yes, wait until it doesn’t; if no, create it. In the lock file, store just the process id of the process that holds the lock, so you can also add logic to check whether that process still exists (if it doesn’t, it probably died, and you may have to check the file you were writing to for corruption and recover). When done writing, delete the lock file to release the lock.

See https://en.wikipedia.org/wiki/File_locking, and specifically the section on lock files.

This is a common enough pattern that there are probably libraries to handle a lot of the logic for you, though it’s also simple enough to handle yourself.
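For illustration, here’s a minimal sketch of the pattern in Python - the file name, timeout, and polling interval are arbitrary placeholders, and it glosses over the small race around reclaiming a stale lock (a real project might reach for something like the filelock package instead):

import os
import time

LOCK_PATH = "myprocess.lock"  # placeholder name

def pid_running(pid):
    # Signal 0 sends nothing; it only checks that the pid exists (POSIX)
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists, but is owned by another user

def acquire(timeout=30.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # O_CREAT | O_EXCL is atomic: open fails if the file already exists
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, str(os.getpid()).encode())
            os.close(fd)
            return
        except FileExistsError:
            try:
                holder = int(open(LOCK_PATH).read().strip())
            except (OSError, ValueError):
                holder = None
            if holder is not None and not pid_running(holder):
                # Holder died without cleaning up - check the data file for
                # corruption before trusting it, then reclaim the lock
                os.remove(LOCK_PATH)
            else:
                time.sleep(0.1)
    raise TimeoutError("could not acquire " + LOCK_PATH)

def release():
    os.remove(LOCK_PATH)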


They are explicitly trying to move away from Google, and are looking for a new option because their current solution is forcing them to turn off ad-blocking. Sounds to me like they want a private option. Plus, given the forum in which we are having the discussion (Lemmy), even if OP is not specifically concerned with privacy, it seems likely other users are.

As for cookies, SearXNG can’t do any more than your browser (possibly with extensions) can do, and relying on your browser here is a much better solution, because it protects you on all sites, rather than just on your chosen search engine.

“Trash mountain” results are a whole separate issue - you can certainly tune the results to your liking. But literally the second sentence of their GitHub headline touts no tracking or profiling, so it seems worth bringing attention to the limitations, and that’s all I’m trying to do here.


You mean between their instance and the final search engines? Or between them and a public instance of SearXNG?

In either case, I’m not sure it buys you anything in terms of privacy you wouldn’t get by using the VPN and going directly to the search engines.


It looks like a few people are recommending this, so just a quick note in case people are unaware:

If you want to avoid being tracked, this is not a good solution. SearXNG is a meta search engine, meaning it is effectively a proxy: you search on SearXNG, it searches multiple sites and sends all the results back to you. If you use a public instance, you may be protected from the actual search engine*, because many people use the same instance and your queries are mixed in with all of theirs. If you self-host, however, all the searches will be your own - there is then no difference between using SearXNG and just going to the site yourself.

*The caveat with using the public instances is that while you may be protected from the upstream engine, you have to trust the admins - nothing stops them from tracking you themselves (or passing your data on).

Despite the claims in their docs, I would not consider this a privacy tool. If you are just looking for a good search engine, it may work well, and it gives you the flexibility and power to tune results yourself. But it’s probably not going to do anything for your privacy above and beyond what you can get from other meta search engines like Startpage and DuckDuckGo, or other “private” search engines like Brave.


it has its flaws.

Yep yep. I was aware of some of what you pointed out - I think this might be a “perfect is the enemy of good” scenario, though. GitHub alone hosts over 84% of the projects (based on the awesome-selfhosted-data repo):

$ grep -r 'source_code_url' | cut -d ' ' -f 2 | cut -d '/' -f 3 | sort | uniq -c | sort -rn | head -n 15
   1068 github.com
     36 gitlab.com
      7 git.mills.io
      6 sourceforge.net
      6 framagit.org
      4 www.atlassian.com
      4 codeberg.org
      3 git.drupalcode.org
      3 git.cloudron.io
      2 repos.goffi.org
      2 git.tt-rss.org
      2 git.sr.ht
      2 cvsweb.openbsd.org
      1 yetishare.com
      1 www.wiz.cn

$ python -c "print($(grep -r 'source_code_url' . | grep github.com | wc -l) / $(ls -1 | wc -l))"
0.8422712933753943

Adding in GitLab gets you to 87%:

$ python -c "print($(grep -r 'source_code_url' . | grep -i -e github.com -e gitlab.com | wc -l) / $(ls -1 | wc -l))"
0.8706624605678234

Also popularity != quality.

True, but a thriving community generally means more resources, guides, etc., which can be important, especially for self-hosted solutions.

In any case, the project is great, and much appreciated. Additionally, the enriched HTML version looks fantastic, and exposes most of the metadata* I’d want to see, regardless of how it’s sorted.

*One other item to track, which I thought about after making my previous comment: the number of contributors. It gives an additional data point on the size of the community, as well as an idea of how many people can be hit by buses before the continued development of the project gets called into question.


I would imagine the source for most projects is hosted on GitHub or similar platforms? Perhaps you could consider forks, stars, and followers as “votes” and sort each sub-category by the votes (rough sketch below). That should be scriptable - the script could be included in the awesome list repo and run periodically. It would be kind of interesting to tag “releases” and see how the sort order changes over time. If you wanted to get fancy, the sorting could probably happen as part of a CI task.
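Something like this rough sketch, for instance - the repo names are just examples, but stargazers_count and forks_count are real fields on the GitHub REST API’s /repos endpoint (unauthenticated calls are heavily rate-limited, so a real script would send a token):

import json
import urllib.request

def stars(owner_repo):
    # e.g. owner_repo = "owner/repo"
    url = "https://api.github.com/repos/" + owner_repo
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["stargazers_count"]

# Hypothetical entries from one sub-category of the list
category = ["awesome-selfhosted/awesome-selfhosted", "searxng/searxng"]
for repo in sorted(category, key=stars, reverse=True):
    print(repo)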

If workable, the obvious benefit is that you don’t have to exclude anything for subjective reasons, while still making it easier for readers of the list to quickly find the “most used” options.

Just an idea off the top of my head. You may have already thought about it, and/or it may be full of holes.


I like pass. It’s just a wrapper around standard tools - gpg-encrypted files in a directory, with git for version control. You can organize the subfolders however you’d like, and store whatever you want in them. You can sync the files across systems however you’d like - copy/paste, rsync, a network drive… You can even go as far as installing a git server, e.g. GitLab, and clone, push, and pull your way into password-synchronization bliss.
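For example (the entry names and remote URL here are made up, but init, insert, generate, show, and git are all standard pass commands):

$ pass init "My GPG Key ID"
$ pass git init
$ pass insert personal/email
$ pass generate work/vpn 20
$ pass show work/vpn
$ pass git remote add origin git@gitlab.example.com:me/password-store.git
$ pass git push -u origin master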