• 0 Posts
  • 27 Comments
Joined 1Y ago
Cake day: Oct 21, 2023


I’m a bit concerned about the block synchronization part, mainly whether it can be used for some form of enforcement. My main concern is shadow libraries, because I believe knowledge should be accessible to everyone. Can someone familiar with it weigh in?


It works, but I don’t think Forgejo plans to support it in the future. Gitea and Forgejo have started to diverge, and the Docker documentation is somewhat outdated.

Edit: I also think the OP’s question is different from this. So this might not be a solution.


Adobe uses a Node.js exe to do the verification; it sits in the Adobe installation directory. Block all outbound connections from that exe.


If you meant mega.nz or mega.io, it was founded by him, but around 2013 he cut all ties with the site. Now it is completely independent of him.


Hey, I understood what you meant. What I was saying is that the result you are trying to achieve is very close to the caching browsers normally do. When you zoom in, it only loads that area. And I don’t think you can pin the tile count to a specific number, since the zoom levels are not linear.

The offline Leaflet library I shared in the previous comment actually does the same thing you want to achieve. The difference is that the offline mode is discarded immediately when the system is back online. So that library could, at least in theory, be modified to incorporate the time dependency and the “user revisits a point” behaviour I described in the last comment.
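Just to make that modification concrete, here is a minimal sketch of the idea, assuming tiles are kept in some local store. The `TileStore` interface, `getTile` function and the tile URL are hypothetical names for illustration, not part of the leaflet.offline API:

```typescript
// Hypothetical sketch (not the actual leaflet.offline API): cache tiles with a
// timestamp and only hit the tile server again when the saved copy is stale.

const MAX_TILE_AGE_MS = 90 * 24 * 60 * 60 * 1000; // roughly 3 months

interface CachedTile {
  blob: Blob;
  savedAt: number; // epoch ms
}

// Assumed storage interface; in practice this would likely sit on IndexedDB.
interface TileStore {
  get(key: string): Promise<CachedTile | undefined>;
  put(key: string, tile: CachedTile): Promise<void>;
}

async function getTile(store: TileStore, z: number, x: number, y: number): Promise<Blob> {
  const key = `${z}/${x}/${y}`;
  const cached = await store.get(key);

  // Reuse the saved tile if this area was visited before and the copy is still fresh.
  if (cached && Date.now() - cached.savedAt < MAX_TILE_AGE_MS) {
    return cached.blob;
  }

  // First visit (or stale tile): fetch from the tile server and save it.
  const res = await fetch(`https://tile.example.org/${key}.png`); // placeholder URL
  const blob = await res.blob();
  await store.put(key, { blob, savedAt: Date.now() });
  return blob;
}
```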

Regarding OSM data, there are zip files available for downloading. Geofabrik and openstreetmap.fr are examples. Another tool is Protomaps, where you can download by drawing a polygon. But these are not going to be the ideal solution for a product like Immich.

By the way I saw your update. Great job on following up and providing a fix for others. I really really appreciate it.


If you are asking about vector maps, I am not really sure, because I have no experience with them, so I can’t comment on that. For raster maps, as you already know, every tile is a PNG. The behaviour you described is very similar to the client-side caching that usually happens in the browser: depending on the coordinates in the viewport and the zoom level, the server provides the tiles.

Usually, to save a map, most offline map-making tools will ask you to draw a rectangle and select the required zoom levels. In an interactive map, the rectangle is the viewport of the device. So there could be a feature that downloads and stores the tiles around a specific GPS location for a fixed geographical area. That should be doable without much issue, but in this case it may not be a good idea.
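To give a rough idea of what “tiles around a GPS location” means, here is a small sketch using the standard slippy-map formula; the function names and the radius of 1 tile are just illustrative:

```typescript
// Sketch: map a lat/lon to slippy-map tile coordinates at a given zoom level,
// then take a small square of tiles around that point.

function latLonToTile(lat: number, lon: number, zoom: number): { x: number; y: number } {
  const n = 2 ** zoom;
  const latRad = (lat * Math.PI) / 180;
  const x = Math.floor(((lon + 180) / 360) * n);
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n,
  );
  return { x, y };
}

// Tile keys ("z/x/y") covering the point plus `radius` tiles in each direction.
function tilesAroundPoint(lat: number, lon: number, zoom: number, radius = 1): string[] {
  const { x, y } = latLonToTile(lat, lon, zoom);
  const keys: string[] = [];
  for (let dx = -radius; dx <= radius; dx++) {
    for (let dy = -radius; dy <= radius; dy++) {
      keys.push(`${zoom}/${x + dx}/${y + dy}`);
    }
  }
  return keys;
}
```

Note that the same tile radius covers a much smaller geographical area at higher zoom levels, which is where the inverted-pyramid shape described below comes from.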

If you visualise all the zoom levels stacked on top of each other, the tiles that need to be retrieved as the user zooms into a point do not cover the same geographical area: at higher zoom levels, only a smaller area is needed. If you take all the tiles downloaded at every level, the shape is roughly an inverted pyramid. So saving the tiles the first time a user zooms into an area may be the best idea.

Then the saved tiles can be reused when users zoom into the same area. These tiles also don’t need to be updated frequently; once every 3 months might be enough, and even then only when the user zooms into that area again.

This can be a little tricky, as almost all the tools that create offline maps work on a fixed area and selected zoom levels, where every point in that area gets equal priority. In this case the point itself is the important element; the surrounding area may not be relevant at all. That is the part that needs some exploration.


I read through your comments and the reply from devs regarding OSM. I will add a few points that can be part of the feature request. I have some experience dealing with maps, and my understanding is you can set up an offline version of OSM, which will get updated only when required.

leaflet.offline is a library that provides similar functionality. I think, with some modifications, it could be used to significantly reduce the load on OSM compared to hitting it directly.

Even with fairly high zoom levels, say 11 to 15, a large map area only takes a few hundred MB. We once cached the entire region of California with all the details and it was around 240 MB IIRC. But Immich does not need this much detail, and it is possible to restrict the zoom levels to only what is required.
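If anyone wants to estimate this for their own region, a back-of-the-envelope sketch like the one below gives the tile count for a bounding box over a zoom range. The ~15 KB per tile figure and the example box are just assumptions; real tile sizes vary a lot:

```typescript
// Rough estimate of how many tiles a bounding box needs per zoom level.
// The tile count roughly quadruples with each extra zoom level, which is why
// capping the maximum zoom keeps the download small.

const lonToX = (lon: number, zoom: number) => Math.floor(((lon + 180) / 360) * 2 ** zoom);

const latToY = (lat: number, zoom: number) => {
  const r = (lat * Math.PI) / 180;
  return Math.floor(((1 - Math.log(Math.tan(r) + 1 / Math.cos(r)) / Math.PI) / 2) * 2 ** zoom);
};

function tileCount(minLat: number, minLon: number, maxLat: number, maxLon: number, zoom: number): number {
  const xSpan = lonToX(maxLon, zoom) - lonToX(minLon, zoom) + 1;
  const ySpan = latToY(minLat, zoom) - latToY(maxLat, zoom) + 1; // tile y grows southwards
  return xSpan * ySpan;
}

// Example: sum over a zoom range and assume ~15 KB per tile (assumption only).
let tiles = 0;
for (let z = 11; z <= 15; z++) {
  tiles += tileCount(37.2, -122.6, 37.9, -121.7, z); // an arbitrary bay-sized box
}
console.log(`${tiles} tiles, ~${Math.round((tiles * 15) / 1024)} MB`);
```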

For someone self-hosting several hundred GB of photos, this should be doable without using too much storage. I think the problem is that it is a big engineering effort, so depending on the priority of the feature it may not be easy to do.

There is a site called Switch2OSM which details almost everything you need to know; that link is about serving map tiles on your own. Again, it is a daunting task and not suitable for everyone.

If anyone needs a live update of OSM as things get added, look into the commercial offerings.

In conclusion, it is possible to include a highly optimised version of OSM instead of putting their servers under heavy load. The catch is that it is not easy and will need a big engineering effort. I think the developers should take a call on this.


So many people freaked out by privacy scandals that it became a billion-dollar business: first a massive number of VPN services, some of them from the same company under different brands; browsers everywhere in the same situation; then self-hosting, which multiple developers seem to be eyeing as some fresh juice. And FUTO especially.

Can you elaborate on this a bit more? Are the first few points about Proton, or in general? And I guess in the case of browsers you meant Chromium. Also, I’m not really sure what you meant regarding self-hosting.


Excel sheets… I prefer them in tables, rather than plain text. I’m kind of a sysadmin… You know…


Actually you can… I do that with my setup. Just point your domain to the new IP Tailscale assigns to your server. That’s all. Recently they started supporting HTTPS certificates too, even though they’re not needed for internal-only communication.


Replacing a human with any form of tech has been a long-standing practice. Usually in this scenario the profitability or efficiency follows a known pattern. Unfortunately, what you said is exactly how the market has always operated in the past, and how it will operate in the future.

The general pattern is that a new tech is invented or a new opportunity is identified, then a bunch of companies enter the market as competitors. They offer competing prices to customers in an attempt to gain market dominance.

But the problem starts when low profits drive some companies to a point where they either have to go bust, dissolve the division, or sell the company to a competitor. Usually after this point a dominant company emerges in a market segment, and monopolies are created. From then on, companies either increase prices or exploit customers to extract more money, and thereby start making profits. This has been the pattern in the tech industry for several decades.

In the case of AI too, this is why companies are racing to capture market dominance. Early movers always get a small advantage, which helps them gain prominence in the segment.


This is something people always miss in these discussions. A graphic designer working for a mid-sized marketing company is replaceable with Stable Diffusion or Midjourney, because there quality is not really that important. They work on quantity, and “AI” is much more “efficient” at producing quantity, without even paying for stock photos.

High end jobs will always be there in every profession. But the vast majority of the jobs in a sector do not belong to the “high end” category. That is where the job loss is going to happen. Not for Beeple Crap level artists.


I actually use Nginx. The major advantage is when you have to access something directly, for example when a client app on your device wants to reach a service you host. In that case Heimdall won’t be enough. You can still use IP and port, but I prefer subdomains. I use Nginx Proxy Manager to manage everything.

Regarding the network going down, the proprietary part of Tailscale is the coordination server. There is an open-source implementation of it called Headscale. If you are okay with managing your own, it is an alternative; obviously the convenience will be affected.

Apart from that, if you haven’t already, I highly recommend reading the blog post “How Tailscale works”. It gives a really good introduction to the infrastructure. The summary is that your connections are P2P, using WireGuard. I don’t think Tailscale will hit a failure scenario that easily.

I hope this helps.


The exact setup can be achieved with Tailscale. A not very well known feature is that you can point your domain to the Tailscale IP (the new IP assigned by Tailscale), and it will act just like a normal hosting setup.

The advantage: any device or person you have not pre-approved can’t see anything if they go to the domain or subdomains. They only work if you are connected and authenticated to the Tailscale network. I have a similar setup, so if you need more pointers please ping me.


RustDesk

If you have used AnyDesk in the past, this gives the same experience. I used it recently, and it has a lot of features, including unattended access.

They recommend self hosting an instance for better performance.



One thing you can try, if you haven’t already, is configuring two different ports for the two users here. The GUI has an option to adjust the ports, and you can also configure two different services to start depending on the logged-in user. I haven’t done it myself on Linux, but it looks like people have had success. One R*ddit thread, for example:

Syncthing on a multi user Computer


The IP should not cause any issues; device IDs are just a hash of the certificate used by Syncthing. Can you elaborate a little on the current setup? Device, OS, user, etc. Also, if possible, can you explain your use case? As I mentioned, Syncthing is very specific about what it can do, so it may not be the best solution for your case.


Oh, forgot to add: the last case you mentioned, where multiple users share a PC and the folder is kept in sync for all of them, is not straightforward. It needs another always-on (server-like) device.

At least on Windows, each user gets a different Syncthing ID. So if you sync the folder with an always-on device, the other user can get the update from it when they come online.


One crucial element to highlight here is that Syncthing is decentralised by design; it is different from a client-server way of thinking. It is very much like how git stores content, if you are familiar with that.

For example, let’s say I have 5 devices and there is a folder I want kept in sync on all of them. Since there is no server, an update made on one device (let’s call it the Source Device) has to propagate to the other devices either directly or indirectly. Here I’m assuming all 5 devices are configured to communicate with each other directly.

Whenever one of the other 4 devices (Device 1) is online at the same time as the Source Device, the sync happens. That is the direct way. The indirect way: say the sync has already happened between the Source Device and Device 1, the Source Device goes offline, but Device 1 stays online; now if Device 2 comes online, the change is propagated from Device 1 to Device 2.

Note that if the assumption that every device is configured with every other device does not hold, the propagation of a change has to take a path similar to the indirect way, even if all the devices are online at the same time.
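To make the direct/indirect distinction a bit more concrete, here is a toy model of the propagation. This is not Syncthing’s actual protocol; the names and the version-number bookkeeping are just for illustration:

```typescript
// Toy model of how a change spreads between Syncthing devices. This is not
// Syncthing's real protocol; it only illustrates the direct vs indirect path.

interface Device {
  name: string;
  online: boolean;
  version: number;   // latest folder version this device has seen
  peers: Device[];   // devices it is configured to talk to
}

// One sync opportunity: every pair of linked devices that are online at the
// same time converges on the newer of their two versions.
function syncRound(devices: Device[]): void {
  for (const a of devices) {
    if (!a.online) continue;
    for (const b of a.peers) {
      if (!b.online) continue;
      const newest = Math.max(a.version, b.version);
      a.version = newest;
      b.version = newest;
    }
  }
}

const source: Device = { name: "Source", online: true, version: 2, peers: [] };
const dev1: Device = { name: "Device 1", online: true, version: 1, peers: [] };
const dev2: Device = { name: "Device 2", online: false, version: 1, peers: [] };
source.peers = [dev1];          // Source is only linked to Device 1 here
dev1.peers = [source, dev2];

syncRound([source, dev1, dev2]); // direct: Device 1 picks up version 2
source.online = false;
dev2.online = true;
syncRound([source, dev1, dev2]); // indirect: Device 2 gets version 2 via Device 1
```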

This configuration, where each device is configured to communicate with every other device, is a pain to maintain, since Syncthing is not designed around a publish-subscribe model. What people usually do is use an always-on device (usually a server) as one of the devices kept in sync. Again, this is not a client-server model; each device is a ‘node’, and the always-on device is just another node.

As you have already experienced, it is very easy to get sync conflicts if a folder is shared between multiple users, because of this decentralised design. In my opinion Syncthing works best for a single user. My use cases are syncing my notes between PC and mobile, syncing files scanned on my mobile to my PC, etc.

If your case is more focused on multiple users, a WebDAV server can be an option, but again it’s not straightforward and may not cover all use cases. Depending on what you are trying to achieve, a more suitable tool might be available; for example, if the aim is collaborative development there is Iroh (still in early stages of development).

I hope this helps.


TBH, it does not look automated. It’s written in SCSS, with basic shapes.


If Tailscale suits your needs but you’re hitting limitations on the free plan, an alternative is Innernet: https://github.com/tonarino/innernet

Obviously it’s not as user-friendly as Tailscale and you have to work things out on the command line; just posting it as an alternative. That’s all.


I went through the comments briefly and didn’t see anything about cases. I highly recommend spending some money on a good case with good cable-management slots. It may not seem important, but it will make life so much easier. Fractal cases are good budget-friendly ones. I usually prefer large, bulky cases (I like ATX) to ensure good ventilation and ease of assembly. Having plenty of space to move around also helps a lot while cleaning.

Another thing: you will see a lot of articles about positive-pressure vs negative-pressure fan arrangements and all that. TBH, I really don’t think it matters a lot; regular cleaning with a cheap rocket air blower will do. And more than 4 fans are not really needed, the gain is negligible. But make sure you have a good dedicated fan or water cooler for the CPU. I recommend Noctua for air cooling.

Oh, regarding backing up your data: make sure you plug in your SSD once in a while to avoid charge depletion. Nowadays the claim is that SSDs have better retention and you don’t need to keep them powered, but I’m not really sure.


Can you share the current settings you used? For me, a good starting point is a preset with the target resolution, then fine-tuning the RF value to get a good reduction in size.



It’s been like this for a long time. I still find it difficult to access raw.github; the reversal is not complete as far as I can tell.

Edit: checked now, still can’t.


Oh, fun fact: the government also issued an order stating that VPN providers who won’t log user information can’t operate in India.