• 4 Posts
  • 52 Comments
Joined 1Y ago
Cake day: Jul 19, 2023


Invidious still seems to work for VODs provided the instance doesn’t get restricted. Livestreams have been broken for ages though.


I don’t really see the advantage here besides orchestration tools, unless the top-secret cloud machines can still share their resources with the public cloud to recoup costs?


I have really mixed feelings about this. My stance is that I don’t think you should need permission to train on somebody else’s work, since that would be far too restrictive on what people can do with the music (or anything else) they paid for. This assumes the work was obtained fairly: buying the tracks off iTunes or similar, not torrenting them or dumping the library from a streaming service. Of course, this can change if a song is taken down from stores (you can’t buy it) or the price is so high that a normal person buying a small number of songs could not afford them (say 50 USD a track). The same goes for non-commercial remixing and distribution. This is why I think judging these models and services on their output is fairer: as long as you don’t reproduce the work you trained on, that should be fine. This needs some exceptions: producing a summary, a parody, or a heavily-changed version/sample (of these, I think the sample is the only one not already protected, despite widespread use in music).

So putting this all together: the AIs mentioned seem to have reproduced partial copies of some of their training data, but it required fairly tortured prompts to do so (I think some even provided lyrics in the prompt to get there), since there are protections in place to prevent 1:1 reproductions; in my experience Suno rejects requests that involve artist names, and one of the examples puts spaces between the letters of “Mariah”. But the AIs did do it. I’m not sure what to do with this. There have been lawsuits over samples and melodies, so this is at least even-handed between humans and AI. I’ve seen some pretty egregious copies of melodies outside remixes and bootlegs too, so these protections aren’t useless. Maybe more work can be done to essentially Content ID AI output before release to reduce this in the future? That said, if you wanted to avoid paying for a song, there are much easier ways to do it than getting a commercial AI service to make a poor-quality replica. The lawsuit has some merit in that the AI produced replicas it shouldn’t have, but much of this reeks of the kind of overreach that drives people to torrents in the first place.


If sellers can prove that they never touch a customer’s home address, they’re less exposed to data breaches, which might look good to insurance companies.

Honestly, this sounds like something a shipping company could provide. When you go to use PayPal, for example, you get redirected to their site, put in your details, and they complete the transaction without the seller ever seeing your financial data. The same could be done with shipping.


My preference would be for WHOIS data to be private unless the owner wants to reveal who they are. I do think it makes sense to require the owner to provide that information to the registrar so it can be obtained by the courts if needed.


I wish we had something like temporary/alias e-mail addresses, but for physical addresses. You go to ship something, you provide a shipping alias, and the shipping company derives the true address from it and ships the item. The moment the true address is revealed, the alias expires and can no longer be used. This way only the shipping company gets to know your real address, and even that is ideally discarded once the order has been completed. Forward shipping without the extra step.
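A minimal sketch of how such a one-time alias could work (everything here is hypothetical — `AliasRegistry` is not an existing service, just an illustration of the expire-on-reveal idea):

```python
import secrets


class AliasRegistry:
    """Maps one-time shipping aliases to real addresses (hypothetical sketch)."""

    def __init__(self):
        self._aliases = {}

    def create_alias(self, real_address: str) -> str:
        # Opaque token the customer hands to the seller instead of an address.
        alias = secrets.token_urlsafe(8)
        self._aliases[alias] = real_address
        return alias

    def resolve(self, alias: str) -> str:
        # Resolving reveals the true address, so the alias expires immediately:
        # pop() removes it, making a second lookup fail.
        return self._aliases.pop(alias)


registry = AliasRegistry()
alias = registry.create_alias("1 Example St, Springfield")
print(registry.resolve(alias))  # the carrier sees the real address exactly once
```

The key property is that resolution and expiry are one atomic step, so the seller's database only ever contains the opaque token.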


This is why people say not to use USB for permanent storage. But, to answer the question:

  • From memory, “nofail” means the machine continues to boot even if the drive doesn’t show up, which explains why it’s showing as 100GB: you’re seeing the size of the disk mounted at / .
  • If the only purpose of these drives is to be passed through to Open Media Vault, why not pass through the drives as USB devices? At least that way only OMV will fail and not the whole host.
  • Why USB? Can the drives be shucked and connected directly to the host, or do they use a proprietary connector on the drive itself that prevents that?
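For reference, a hedged example of what a `nofail` fstab entry might look like (the UUID, mount point and timeout are placeholders, adjust for your system):

```
# /etc/fstab — example entry only; substitute your own UUID and mount point
UUID=1234-ABCD  /mnt/usb1  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```

With `nofail`, boot continues if the device is absent; the `x-systemd.device-timeout` option keeps systemd from waiting the default 90 seconds for a flaky USB drive to appear.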

That’s not really fair on Discord. The article mentions they received an injunction to remove the content, so they were forced to do this. Anybody in the same jurisdiction would have had to do the same:

“Discord responds to and complies with all legal and valid Digital Millennium Copyright Act requests. In this instance, there was also a court ordered injunction for the takedown of these materials, and we took action in a manner consistent with the court order,” reads part of a statement from Discord director of product communications Kellyn Slone to The Verge.


It does have to do with being a walled platform though. You as the Discord server owner have zero control over whether or not you are taken down. If this was Lemmy or a Discourse server (to go with something a little closer to a walled garden) that they ran, the hosting provider or a court would have to take them down. Even then, the hosting provider wouldn’t be a huge deal, since you could just restore a backup to a new one, Pirate Bay style. Hell, depending on whether or not the devs are anonymous (probably not if they used Discord), they could just move the server to a new jurisdiction that doesn’t care. The IW4 mod for MW2 2009 was forked and then moved to Tor when Activision came for it, so this isn’t even unprecedented.


Oh I completely misunderstood! I thought it was a forwarder, not dynamic DNS. My bad! Makes total sense!


Out of curiosity, why use a forwarder if you run your own DNS? Why not handle resolutions yourself?


Soooo…. the work of self-hosting with none of the benefits? It sounds like this has all the core problems of Twitter.


The more SSIDs being broadcast, the more airtime is wasted broadcasting them. SSIDs are also broadcast at a much lower rate, so even though it’s a trivial amount of data, it takes longer to send. You ideally want as few SSIDs as possible, but sometimes it’s unavoidable, like when you have an open guest network or different authentication types on different SSIDs.
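As a rough back-of-the-envelope (the frame size and rate below are assumptions — real numbers vary by AP, band and configuration): beacons go out per SSID roughly every 102.4 ms at a low basic rate, so the overhead grows linearly with SSID count and with co-channel APs:

```python
# Rough estimate of beacon airtime overhead per SSID (assumed values).
BEACON_INTERVAL_S = 0.1024   # default beacon interval of 100 TU
BEACON_SIZE_BITS = 300 * 8   # ~300-byte beacon frame (assumption)
BASIC_RATE_BPS = 1_000_000   # 1 Mb/s lowest basic rate on 2.4 GHz


def beacon_overhead(num_ssids: int, num_aps_on_channel: int = 1) -> float:
    """Fraction of channel airtime consumed by beacon frames."""
    airtime_per_beacon = BEACON_SIZE_BITS / BASIC_RATE_BPS
    beacons_per_second = num_ssids * num_aps_on_channel / BEACON_INTERVAL_S
    return airtime_per_beacon * beacons_per_second


print(f"{beacon_overhead(1):.1%}")     # one SSID, one AP: ~2.3% of airtime
print(f"{beacon_overhead(8, 3):.1%}")  # eight SSIDs on three co-channel APs
```

Even with these conservative assumptions, eight SSIDs across a few co-channel APs can eat a large slice of airtime before any user data is sent.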


The APs know who the Wi-Fi clients are and simply drop traffic between them. This is called client/station isolation. It’s often used in corporate networks to 1) prevent wireless clients from attacking each other (students, guests) and 2) prevent broadcast and multicast packets from wasting all your airtime. It has the downside of breaking AirPlay, AirPrint and any other services where devices are expected to talk to each other directly.


When buying disks, research the exact model to ensure they are not SMR drives if you plan on using them in RAID. Some manufacturers will not tell you when drives are SMR, and this can do anything from tanking write performance to making the RAID reject the drive entirely.

See: https://arstechnica.com/gadgets/2020/04/caveat-emptor-smr-disks-are-being-submarined-into-unexpected-channels/



Good! You wanna automate away a human task? Sure! But if your automation screws up, you don’t get to hide behind it. You still chose to use the automation in the first place.

Hell, I’ve heard ISPs here work around phone reps overpromising by literally having the rep transfer the customer to an automated system that reads the agreement and has the customer agree to it, with an explicit note that everything said before is irrelevant, then, once done, transfer back to the rep.


I used to have all VMs on my QEMU/KVM server on their own /30 routed network to prevent spoofing. It essentially guaranteed that a compromised VM couldn’t give itself the IP of, say, my web server and start collecting login creds. Managing the IP space got painful quickly, though.
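For illustration, a /30 gives each VM a routed point-to-point network with exactly one usable address beside the gateway, and Python’s `ipaddress` module shows how quickly a parent range carves up (the 10.0.0.0/24 range here is just an example, not the addressing I actually used):

```python
import ipaddress

# Carve a /24 into /30 point-to-point networks, one per VM (example range).
parent = ipaddress.ip_network("10.0.0.0/24")
subnets = list(parent.subnets(new_prefix=30))

print(len(subnets))  # 64 /30s: at most 64 VMs per /24

first = subnets[0]
hosts = list(first.hosts())  # two usable addresses: gateway + VM
print(first, hosts)
```

Each /30 burns two addresses on the network and broadcast addresses, which is exactly why the IP space gets painful: only half of the parent range ends up assignable.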


Run it at home/in the lab to learn AD; it also gives you a place to test ideas before pushing to production. You may be able to run a properly licensed AD server on AWS or similar if they have a free tier.


Damn! Using .af for an LGBT+ site is insane! The country could have redirected the domain to their own servers and started learning the personal details of the site’s users, who I imagine wouldn’t be terribly thrilled about an anti-LGBT+ government learning their personal information (namely information not displayed publicly). Specifically, they could put their own servers in front of the domain so they can decrypt the traffic, then forward it on to the legitimate servers, capturing login information and any other data users send or receive.


Buying your own domain often includes DNS hosting, but that’s not really the point unless all you’re doing is running an externally-facing website or e-mail. The main reason for buying a domain is so everybody else recognises that you control that namespace. As a bonus, you can get globally-recognised SSL certificates, which means you no longer have to manage your own CA and add its root to every device that needs to access your services securely. It’s also worth noting that you cannot rely on external DNS servers for entries that point to private IPs, because some DNS servers block those responses (rebinding protection).


A good move!

I’m surprised they didn’t codify “.lan” though since that one is so prevalent.


People who do not wish to buy a domain under a gTLD can use home.arpa, as it is already reserved. If you are at the point of setting up your own DNS but cannot afford $15 a year AND cannot use home.arpa, I’d be questioning your purchasing decisions. Hell, you can always use sub-domains of home.arpa if you need multiple unique namespaces in a single private network.

Basically, if you’re a business in a developed (or even developing) country, you can afford the domain, and would probably spend more money in IT hours working around non-gTLD names than the $15 a year.


If your domain will NEVER send e-mail out, you only really need an SPF record to tell other servers to drop e-mail claiming to be FROM your domain. Even that’s somewhat optional. If you ever plan on sending ANY outbound mail (you should, at the very least for the occasional ticket), then do DKIM, DMARC and SPF. The more of these you do, the less likely e-mails FROM your domain are to be flagged as spam.
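For a domain that will never send mail, the records look roughly like this (zone-file style; example.com is a placeholder):

```
example.com.              IN TXT  "v=spf1 -all"          ; no host may send as this domain
example.com.              IN MX   0 .                    ; null MX (RFC 7505): receives no mail
_dmarc.example.com.       IN TXT  "v=DMARC1; p=reject"   ; tell receivers to reject failures
*._domainkey.example.com. IN TXT  "v=DKIM1; p="          ; empty DKIM key: no valid signature exists
```

Together these let receiving servers reject anything forged FROM the domain outright rather than guessing.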


Some servers blacklist you no matter what you do because you’re not a big player in the e-mail space… Outlook. Fuck Outlook. M365 doesn’t do that though.

Also, the idea that reverse DNS (PTR) records are needed in practice when SPF, DKIM and DMARC are in use is insane. I have literally told you my public key and signed the e-mail. It’s me. You don’t need to check the damn PTR!



I feel like there’s more to your question but here goes with the starter answer: install https://github.com/LizardByte/Sunshine on the computer which is running the game and https://github.com/moonlight-stream/moonlight-qt on the machine which will receive the game stream. I have Sunshine installed in a VMware Fusion VM running Windows which I stream to the host Mac since Discord doesn’t let you screenshare VMs with sound otherwise. I have also used Moonlight on my Mac to stream games from a cloud machine on https://airgpu.com but only played with it a tiny bit as a substitute for running my own game streaming machine in AWS or for some games that aren’t on GeForce NOW.


I don’t have a problem with training on copyrighted content provided 1) a person could access that content and use it as the basis of their own art and 2) the derived work would also not infringe on copyright. In other words, if the training data is available for a person to learn from, and if a person could make the same content an AI would and it be allowed, then AI should be allowed to do the same. AI should not (as an example) be allowed to simply reproduce a bit-for-bit copy of its training data (unless it is something trivial that would not be protected under copyright anyway). The same is true for a person. Now, this leaves some protections in place: if a person made content and released it to a private audience who are not permitted to redistribute it, then an AI would only be allowed to train on it if it obtained that content with permission in the first place, just like a person. Obtaining it through a third party would not be allowed, as that third party did not have permission to redistribute. This means that an AI should not be allowed to use a work unless it at minimum had a licence to view it. I don’t think you should be able to restrict your work from being used as training data beyond disallowing viewing entirely, though.

I’m open to arguments against this though. My general concern is copyright already allows for substantial restrictions on how you use a work that seem unfair, such as Microsoft disallowing the use of Windows Home and Pro on headless machines/as servers.

With all this said, I think we need to be ready to support those who lose their jobs from this. Losing your job should never be a game-over scenario (loss of housing or medical care, defaulting on housing loans and potentially car loans, provided you didn’t buy something like a mansion or luxury car).


We’re going to hold this song back from you and ask for a bunch of your details so you can listen to it once we’ve generated some extra hype. Pretty cool huh?!


The article seems to indicate they are using it to reduce the amount of work they have to do in writing prompts, but still have translators review what the AI spits out. I think that’s different to SuperDuo, which I believe is meant to use AI to be more conversational.


  • Personal and business are extremely different. In personal, you back up to defend against your own screwups, ransomware and hardware failure. You are much more likely to predict what changes most and what is most important, so it’s easier to know exactly what needs hourly backups and what needs monthly backups. In business you protect against everything in personal, plus other people’s screwups and malicious users.
  • If you had to set up backups for business without any further details: 7 daily, 4 weekly, 12 monthly (or as many as you can). You really should discuss this with the affected people though.
  • If you had to set up backups for personal (and not more than a few users): 7 daily, 1 monthly, 1 yearly.
  • Keep as many as you can handle if you’ve already paid for the backup capacity (on-site hardware and fixed-cost remote backups). There’s no point leaving several terabytes of backup space free, though more retention does mean more wear on the hardware.
  • How much time are you willing to lose? If you lost 1 hour of game saves or the office’s work, and therefore 1 hour of labour for you or the whole office, would it be OK? The “whole office” part is quite unlikely, especially if you set up permissions to reduce the amount of damage people can do. It’s most likely to be 1 file or folder.
  • You generally don’t need to keep hourly snapshots for more than a couple of days: if something is important enough to need the last hour’s copy, it will probably be noticed within 2 days. Hourly snapshots can also be very expensive.
  • You almost always want daily snapshots for a week. If you can hold them for longer, do it, since they are useful for restoring screwups that went unnoticed for a while and are very useful for auditing. However, keeping a lot of daily snapshots in a high-churn environment gets expensive quickly, especially when backing up Windows VMs.
  • Weekly and monthly snapshots largely cover auditing and malicious users, where something was deleted or changed and nobody noticed for a long time. Prioritise keeping daily snapshots over weekly snapshots, and weekly snapshots over monthly snapshots.
  • Yearly snapshots are more for archival and for restoring that folder which nobody touched forever and which was deleted to save space.
  • The numbers above assume a backup system which keeps anything older than 1 month in full (a total duplicate), and maybe even anything older than a week. This is generally done in case of corruption. Keeping daily snapshots for 1 year as increments is very cheap, but you risk losing everything to bitrot. If you are depending on incrementals for long periods of time, you need regular scrubs and redundancy.
  • When I refer to snapshots, I mean snapshots stored on the backup storage, not production. Snapshots on the same storage as your production are only useful for non-hardware issues and some ransomware issues. Your snapshots must exist on a separate server and storage. Your snapshots must also be replicated off-site, minus hourly snapshots, unless you absolutely cannot afford to lose the last hour (billing/transaction details).
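The retention tiers above can be sketched as a simple pruning rule (a generic GFS-style sketch, not any particular backup tool; the tier boundaries are the assumptions, tune them to your environment):

```python
from datetime import date, timedelta


def keep_snapshot(snap_date: date, today: date,
                  daily: int = 7, weekly: int = 4, monthly: int = 12) -> bool:
    """GFS-style retention: keep recent dailies, then weeklies, then monthlies."""
    age = (today - snap_date).days
    if age < daily:
        return True                      # keep every snapshot for the daily tier
    if age < daily + weekly * 7:
        return snap_date.weekday() == 6  # keep only Sundays in the weekly tier
    if age < monthly * 30:
        return snap_date.day == 1        # keep only month-firsts in the monthly tier
    return False                         # everything older is pruned


today = date(2024, 6, 1)
kept = [d for d in (today - timedelta(days=n) for n in range(365))
        if keep_snapshot(d, today)]
print(len(kept), "of 365 snapshots kept")
```

A pruning pass just walks existing snapshots and deletes the ones the rule rejects, so storage stays bounded while the recent history stays dense.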

I wish XMPP had stuck around. I used to run a Prosody server and it worked well enough, though I think the E2E keys would occasionally need to be fixed. I used Conversations on Android as a client at the time. The things that make me hesitate to dedicate too much effort to Matrix are:

  1. the supposed funding issues they’re having (which is part of why I paid for hosting)
  2. the FOSS community’s seeming tendency to keep jumping between messaging platforms, so no platform ever gets the chance to gain critical mass
  3. how buggy the web client and Element iOS client have been.

When I stopped running an XMPP server I switched the only other user over to Signal and we’ve stuck there since. With how buggy the Element iOS client, FluffyChat and the web client have been for me (app crashes when joining rooms, rooms reported as non-existent when they in fact exist), I don’t want to risk an upset by trying to push people there, since Signal is good enough. And these are all issues that exist when the company that makes Matrix (plus contributors, of course) is the one running the server.

At this point I’m just inclined to grab the export they provide and switch to matrix.org for the 1 or 2 rooms I care to have a presence in.


(15th of Dec) Element Discontinuing Hosted Matrix for Consumers
![](https://lemmy.conorab.com/pictrs/image/346590b1-74bc-428a-928d-ee7753275610.jpeg) ![](https://lemmy.conorab.com/pictrs/image/0542e769-08d1-4d40-8307-071008a7a0b3.png)

Unless I am mistaken, the total the other comment is raising is how much power the entire network spent calculating the transaction, not how much the winner (the one who got paid out) spent. You count the energy consumption of the entire network because that power was still spent on the transaction, even if the rest of the network wasn’t rewarded. I have no idea if the numbers presented are correct, but the reasoning seems sensible. Maybe I’m wrong though. :)
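To make the accounting concrete (the figures below are pure placeholders, not real measurements of any network): if the whole network draws some total power and confirms some number of transactions per hour, the per-transaction cost is the network-wide energy divided by throughput, regardless of which miner wins:

```python
# Hypothetical numbers purely to illustrate the accounting, not real data.
network_power_w = 10_000_000_000  # total network draw: 10 GW (assumption)
tx_per_hour = 15_000              # confirmed transactions per hour (assumption)

# Energy used by the whole network in one hour, spread over that hour's
# transactions. The losers' power is still part of the cost.
energy_per_tx_kwh = (network_power_w / 1000) / tx_per_hour
print(f"{energy_per_tx_kwh:.0f} kWh per transaction")
```

The point of the division is exactly the comment’s reasoning: the denominator is transactions, but the numerator includes every miner’s draw, winner or not.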


Fair. But I would say they have a disincentive to lie about E2E because it’s a selling point of WhatsApp and if they didn’t care they could just roll WhatsApp into Facebook Messenger where there is no promise of E2E.


WhatsApp claims to be E2E/not readable by Facebook and to my knowledge, all we have to the contrary is speculation provided you verify the keys on both ends (same as Signal). Facebook might know who you’re messaging but that’s also true for Signal. I’d still 100% trust Signal over WhatsApp given Facebook’s massive conflict of interest, but SMS has been known-bad and collected by the NSA for a decade now. US telecommunications companies also have a terrible reputation for privacy. The only advantage it has over any other platform is portability between providers but even that falls to the side since you can have multiple messaging apps at once.


Invidious has been a saviour for me on mobile. The ads were so painfully long. To make it worse, I’d use YouTube to help fall asleep, adjust it to the right volume, then BAM! Loud advert. I didn’t use an ad blocker on PC for ages because I get that bandwidth is expensive as hell, but they really started taking the piss and I gave up.


I get them every now and then, but refreshing the page fixes it every time. I’m not logged in to a Google account and my browser is set to clear cookies for all but a few domains on restart, so maybe that’s contributing? I don’t restart often though.


From memory, the Client Hello is one of the ways firewalls figure out what site you’re going to in order to block it (it’s possible I’m confusing this with a different request). Curious to see the impact of this.




cross-posted from: https://lemmy.conorab.com/post/35638 > In all its framerate-killing glory!

cross-posted from: https://lemmy.conorab.com/post/12313 > I visited Usti nad Labem back in June while in Europe after being inspired by https://www.youtube.com/watch?v=VLhCNEpcPO4 and https://www.reddit.com/r/dayz/comments/5dldfi/chernarus_real_life_map_with_in_game_locations/ and figured I'd post my photos here in case it inspires somebody else! > > The link goes to a gallery of almost all the videos and photos I took while there as well as some videos. You can click on the map icon (to the right of the title at the top-left) to see every photo on a map. The Arma 2/DayZ locations can be found at https://www.google.com/maps/d/viewer?mid=1EJNBRC6X6C2P6Q1MGrsOb8Zynt4&ll=50.71286861566866%2C14.120705128839054&z=12 (posted in the Reddit link above). > > Unfortunately the videos can't be put on a map, so here goes!: > * The first 3 videos (IMG_5980, IMG_5981) are the train ride from Decin (around Rify) to Usti nad Labem (roughly Balota airfield). > * IMG_5987 and IMG_5995 are at Usti nad Labem station. > * IMG_6043 is at the east-most part of Usti nad Labem (roughly Balota airfield) near the river. > * IMG_6255 is the road between Usti nad Labem-Nestmice (Cherno) and Mirkov (Mogilevka) > * IMG_6259 is at Zricenina hradu Blansko (Zub castle). > * IMG_6274 is a drive between Mirkov (Mogilevka) to Slavosov (Novy Sobor). > * IMG_6282 is a drive between Slavosov (Novy Sobor) and Lipova (Stary Sobor) > * IMG_6292 is a drive from Lipova (Stary Sobor) to Statek Libov (Rogovo), Radesin (Pogorevka) and the intersection between Chuderov (Zelenogork), Green Mountain, Radesin (Pogorevka) and Chuderov - Sovolusky (Pulkovo). > * IMG_6404 is a drive from Javory (Gorka) to Malsovice (Berezino). > * IMG_6468 is a drive from Jilove (Gvozdno) to Krasny Studenec (Krasnostav). Turns out this isn't a paved road like it is in the game, and nor is (at least some) of the road between Krasny Studenec (Krasnostav) to Stara Bohyne (Dubrovka). We didn't go down this road though. 
> * IMG_6469 is a drive through Krasny Studenec (Krasnostav). > * IMG_6493 is a drive along the river and Malsovice (Berezino). > * IMG_6494 is around Malsovice (Berezino). > * IMG_6496 and IMG_6497 are a drive from Malsovice (Berezino) to the dam at Povrly (Elektro) via Borek (Orlovets) and Hlinena (Polana), Dobkovice (Solnichniy) and Roztoky (Kamyshovo). > * IMG_6522 is a drive from the dam at Povrly (Elektro) to Masovice (Pusta). >

Wallpaper Memories
Inspired to make this post from: [https://lemmy.ml/post/2769734](https://lemmy.ml/post/2769734) Do you have any memories that spring to mind when you see old wallpapers? * The green rolling hills of XP remind me of when I started using computers, watching Insider Secrets on CNET and downloading everything that appeared on download.com, then trying to make Windows XP look like Vista * Vista’s of when I installed every possible custom theme imaginable and spent half the time rebooting from BSODs while trying to play Zombie Escape in CSS * Windows 7 of what *felt* like peak Windows and when I got my first gaming PC and the joys of Bootcamp (never forget the Windows 7 Beta fish wallpaper), * Mac OS Leopards wallpaper of my first Mac, * Ubuntu 9.04: The *classic* Ubuntu where I had no idea what I was doing and I had no idea how to get Wi-Fi and sound to work, * Ubuntu 10.04 of when I first started running Minecraft servers and using Linux, not to forget glorious GNOME 2, * Debian 6 of when I started learning Debian and the fun that was trying use PPAs and custom repos on Apt and running servers in VirtualBox, * Mac OS Mavericks: Nice network share, would be a shame if it stopped responding and you had to reboot… again, * Windows 8 (not 8.1)… I don’t actually remember these . I used a screenshot from DayZ looking down at Elektro back when I ran Windows 8 consumer preview and the release candidate.