This is actually a great question, in the context of the Fediverse.
Usually, every social network or forum has it in its EULA that anything you post is theirs, and you can't do anything about, say, Reddit using your data to train AIs.
However, here we're on private instances run by regular people. We can make our own rules, can't we? If an instance declared that anything you post stays copyrighted by its author, e.g. under a CC license, would that be enforceable if someone decided to scrape (or repost) the content for profit?
For anyone wondering - why would I need it? I'm already signed in to GitHub, the commit is committed using my SSH key, GitHub knows it's me. Why would I need another layer of verification?
Here's why. https://dev.to/martiliones/how-i-got-linus-torvalds-in-my-contributors-on-github-3k4g . If someone commits with your email (or your GitHub noreply email, which is public), the commit gets attributed to you. I just tried it with a colleague's account, and so far I haven't found any way to tell that it really wasn't him.
If it has my username on GitHub, you're confident it's my commit.
Apparently, that’s not true: https://dev.to/martiliones/how-i-got-linus-torvalds-in-my-contributors-on-github-3k4g
However, it’s a pretty old article - maybe it’s already fixed? I’ll have to try that.
EDIT: It still works, and you can just use the GitHub noreply address, which is ID+username@users.noreply.github.com . The commit gets linked to their profile and shows up on their profile page, with their username and profile picture. I haven't found any visible difference between a legit and an impersonated commit so far, but maybe it's hidden somewhere in the repo administration.
So, there you have it. That's what PGP signing is for.
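If anyone wants to try it themselves, this is roughly all it takes - the email below is just the ID+username noreply pattern from the article with placeholders, and the signing part is the countermeasure:

```bash
# Git happily lets you claim any name/email as the commit author:
git config user.name  "Some Victim"
git config user.email "<ID>+<username>@users.noreply.github.com"  # their public noreply address
git commit --allow-empty -m "totally legitimate commit"
git push
# Once pushed to GitHub, the commit shows that user's avatar and links to
# their profile, exactly as if they had authored it.

# Signing is the actual fix: commits signed with a key registered on your
# account get a "Verified" badge on GitHub, impersonated ones don't.
git config user.signingkey <your-key-id>
git config commit.gpgsign true
git commit -S -m "signed commit"
```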
You actually don't need a YouTube account, unless you are a paying subscriber to some creators!
Check out FreeTube - it's a desktop app, similar to NewPipe on Android, that lets you subscribe to creators without an account, and without ads.
As for Android, I don't know what phone you have, but if you're ever buying a new one, I highly recommend getting (paradoxically) a Google Pixel and installing GrapheneOS. An older Pixel is OK - just check on the Graphene website which models are still supported and for how long. The installation is super easy, can actually be done entirely in a browser without any issues, and takes like 15 minutes.
I've recently switched to Graphene and it's amazing. I have a separate profile for apps that refuse to work without Google services, so they are contained, and additionally Graphene sandboxes Google Play services, so they can't do anything you don't let them - in contrast to any other Android phone, where Google services can basically do whatever they want without any way to limit them.
I also run Mullvad VPN on my phone all the time, but I don't think that's strictly necessary.
You are right, I shouldn't have equated Bitcoin with the rest of the crypto ecosystem. While most crypto is an utter scam, it's true that there have been some advances here and there, and there are coins that may actually be useful for some cases, mostly Monero and I suppose Ethereum. I'd still say that crypto has done more harm than good in the world, and I say that as someone who's really focused on privacy, cares about it a lot, and has invested a significant amount of time and effort into staying as private as possible.
But it's great that Ethereum managed to solve most of the issues with Bitcoin - unless I'm mistaken, it's not really used for investment speculation, and if it managed to keep the energy requirements low, that's good. But from the last time I remember researching blockchain (it was a few months ago, so feel free to correct me), isn't it running into serious issues with ledger size that make it infeasible for long-term use (decades) without sacrificing some of its guarantees? That's one of the main issues with blockchain tech in general, and I don't think it has been solved so far.
I just hope Bitcoin will finally die. It's literally just wasting an absurd amount of energy, only to allow scammers to scam billions of dollars from victims, and regular people to steal from each other by investing in it. I mean, if the only use of Bitcoin by now is speculation and investment, then any dollar you made was literally taken from someone else who will be left holding worthless Bitcoin once it's all over. There's no underlying value, and with the ledger getting bigger and mining getting more expensive, it will eventually be worthless. And we all know it, so for everyone who makes thousands of dollars, there's someone who probably ruined themselves financially by making a stupid investment at the wrong time.
I hate crypto so much :D.
700 kWh per transaction? That's an absurd amount of power. That works out to about 70 EUR of electricity per single transaction at current EU exchange prices of roughly 0.10 EUR/kWh.
Is there anyone here knowledgeable enough about this issue to say whether those numbers are correct, or just an overestimate? It feels wrong.
After several of my favorite songs disappeared from Spotify, I’ve adopted a different approach to music.
If I see one at a band's merch stand at a show, I buy a cassette. It's more of a novelty item and a way to slightly support the band. While I do have a portable tape player, I only rarely take it out. I switched from LPs to tapes because of the cost and the huge effort associated with playing or storing them (that is, if you do it right and are not OK with fucking up your LPs), but tapes are cool and don't have that many storage or playback problems.
Other than that, I've stopped paying for any kind of streaming service, and instead put the $10 per month toward buying one or two (new or old) albums on Bandcamp from the favourite artists I've spent the last month listening to the most. The albums I buy get added to my NAS library, usually replacing the stolen copies of said albums that I'd previously grabbed from Redacted.
This lets me keep a pretty expansive library by just stealing what I need, but with a promise to myself that I'll eventually buy the album (using the money I saved on streaming services) if it's something I've listened to extensively. I'm also not at the mercy of streaming services that can take away my music whenever they decide to.
So far I’ve been doing this for a few years, and even increased my budget for just buying albums if I can’t immediately find them on Redacted.
I was working on a pretty well known game, porting it to consoles.
On PS4 we started getting OOM crashes after playing a few levels, because the PS4 doesn't have that much memory. I was fairly new on the project and didn't know it very well, so I started profiling.
It turned out that all the levels were stored as pretty verbose JSON files, and all of them were referenced from Unity ScriptableObjects - so even the levels you aren't playing get loaded into memory, since once something references an SO, it gets loaded immediately. That was 1.7 GB of JSON strings loaded into memory as soon as the game started, sitting there for the whole play session.
I wrote a build script that compresses the JSON strings with gzip, and the game decompresses them only when actually loading a level.
That brought the memory used by level data down from 1.7 GB to 46 MB, and also cut the game's load time by around 5 seconds.
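For anyone curious, the core of it was something in this spirit - just a rough sketch, not the actual project code, and the type/method names are made up:

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

// Sketch of the idea: at build time, replace the raw JSON string in the
// ScriptableObject with a gzip-compressed byte array, and only inflate it
// back to a string when the level is actually being loaded.
public static class LevelJsonCompression
{
    // Called from the build script: turn the verbose JSON into a small byte[].
    public static byte[] Compress(string json)
    {
        byte[] raw = Encoding.UTF8.GetBytes(json);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            return output.ToArray();
        }
    }

    // Called when the player actually enters the level.
    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }
}
```

The nice part is that a ScriptableObject holding a small byte[] costs next to nothing while it sits referenced in memory, and the full JSON string only ever exists for the one level that's actually being loaded.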
This is my experience as well. I've always tried to be privacy-conscious and stick to self-hosted alternatives or FOSS, but I was also lazy and didn't really try too hard. With the recent enshittification of almost every product that has a corporation behind it, it's a lot more in my face that it's shit and that I should be dealing with it.
It made me finally get a VPN and switch to Mullvad browser. Get rid of Reddit completely. I finally got a Pixel with GrapheneOS and got a NAS running.
It's also doing wonders for my digital addiction. The companies are grossly mistaken in assuming that my addiction to their service is greater than my immense hatred for forced monetization, fingerprinting and dark patterns. It's turning out it's not, and in the last few months I've dropped so many services I was never able to really quit before, most of them thanks to popups like "You have to log in to view this content", "This content is available only in the app", or "You are using an adblocker…". Well, fuck you. I didn't want to be here anyway.
I've been mostly working in C# for the past few years (and most of my life), and the only C++ experience I have is from college, so it's taking some getting used to. And that's what I was getting at - thanks to college, where I was forced to really learn (or at least understand and be able to use) a wide range of drastically different languages, from Lisp through Bash, Pharo and Prolog to Java and C#, whenever I had to write something in a language I didn't know, it was usually similar to at least one of them and I could always figure it out intuitively.
With Rust, even though it has an amazing compiler, I'm struggling - probably because borrowing and the very strict error handling are concepts I've never had to deal with just to get an MVP working. Sure, that probably means my code wasn't error-proof, which is exactly what Rust forces you to fix and which is amazing, but it makes it a lot harder to just bang out a quick script in a language you don't know when you have to.
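A contrived example of the kind of friction I mean (made up for illustration, nothing from a real project) - the "obvious" C#-style version doesn't even compile:

```rust
fn main() {
    let mut names = vec![String::from("alice"), String::from("bob")];

    // The "obvious" version keeps a reference into the vec and then mutates it:
    //
    //     let first = &names[0];
    //     names.push(String::from("carol"));
    //     println!("{first}");
    //
    // rustc rejects that with E0502 ("cannot borrow `names` as mutable because
    // it is also borrowed as immutable"), because the push might reallocate the
    // vec and leave `first` dangling - something C# never makes you think about.

    // To satisfy the borrow checker you either copy the value out...
    let first = names[0].clone();
    names.push(String::from("carol"));
    println!("{first}");

    // ...or finish with the borrow before you mutate.
    let second = &names[1];
    println!("{second}");
    names.push(String::from("dave"));
}
```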
I hope they are teaching Rust at universities now, we definitely didn’t have it 8 years ago, which is a shame.
I was just thinking about something similar in regards to gamedev.
For the past few years since college, we've been working on a 2D game in our spare time, running on Unity. And for the past few months I've mostly been working on performance, and it's still mind-boggling to me how it's even possible that we're having trouble with performance. It's a 2D game, and we're not even doing that much with it. That said, I know it's mostly my fault, being the lead programmer, and since most of the core systems were written when I wasn't really an experienced programmer, it shows - but still. It shouldn't be this hard.
Is the engine overkill for what we need? Probably. Especially since it’s 2D, writing our own would probably be better - we don’t use most of the features anyway. The only problem would be tooling for scene building, but that’s also something that shouldn’t be that hard.
The blog post is inspiring - just yesterday I was looking into what I would need to get basic rendering done in Rust. I may actually give it a try and see if I can make a basic 2D engine from scratch; it would definitely be an amazing learning experience. And I don't really need that many features, right? Rendering, audio, sprite animation, collisions and a scene editor should be sufficient, and I have a vague idea of how I would write each of those in 2D.
Hmm. I wonder what would be the performance difference if I got an MVP working.
I've just started learning Rust, mostly by experimenting with winapi since that's what I'm mostly going to use it for anyway, but this finally explains why I had so much trouble trying to intuitively wing it. I've skimmed through the Rust book once, but judging by this article it's no wonder I was mostly wrestling the compiler.
Looks like I have to go back to the drawing board. I understand why Rust is doing it, and I'm sure that once I finally get used to it, it's going to be a way smoother experience, but man, this is the first language I couldn't just figure out in an hour. It's a frustrating learning experience, but I also see why it's necessary and love it for that.
I disagree. I've been working (or still am) on several pretty large Unity projects (some of them sold hundreds of thousands of copies), and especially once you start porting to consoles, the experience goes to shit. Their support is vague, and the documentation is plainly wrong in some places - I once spent a few days figuring out how to use a documented and explained feature, only to find out later that there's a closed, years-old bug on their issue tracker saying it's actually not supported, and that the documentation just doesn't explain that very well. (The feature was multiple hits per single Raycast in jobs, here are the docs. According to the bug resolution, only one hit per ray is supported, and the docs just "don't explain it very well". The docs are still the same.)
You also inevitably run into issues that you simply don't have in other engines, because it's closed source. You have no idea how something is implemented, or whether it isn't working because you're doing it wrong or because it's a Unity bug. In Unreal, if something doesn't work, you can always just check the engine code and either fix it yourself or at least understand why it's not working. If you need to slightly modify some engine behavior, you're out of luck with Unity - you have to resort to ugly hacks that sometimes work, but usually at a cost. In Unreal, you just modify the engine code and are done with it.
Trusting Unity with any feature is also a gamble. Have you started developing a multiplayer game on UNet? Tough, we don't want to support that anymore. But we will create a better multiplayer system, just wait for it! Then they removed UNet, and the new networking replacement is widely regarded as pretty much unusable - or at least it was the last time I checked. Thankfully, there are a few amazing open source networking addons.
In general, while Unity is an ok-ish game engine for smaller hobby projects (but for that, Godot is better), it’s really an awful and frustrating experience once your project size grows and you need to build bigger games, or if you start porting your games to consoles.
And it's also really apparent from the way they communicate and treat your company that they don't give a fuck and only want your money.
Exactly. To me, this explanation sounds like they’ll just magically estimate the numbers without really being able to prove it. And that sucks.
However, we can be sure that developers have their own analytics, which are probably way more accurate - they know exactly how many people have played or installed their game. And I'm betting that this number will be a lot smaller than Unity's "estimation", and people will get even angrier.
Cries in game dev
No, seriously. I tried getting Unity to work on Linux once, and gave up after a few hours of random crashes, bugs and errors. And I never even got to building the game, which I'm sure would be an entirely different adventure that would still, in the end, require rebooting to Windows and trying the build there.
Also, getting O365 to work on Linux was another reason I eventually gave up, since our company is simply Windows-based and the web apps are just too cumbersome to use. And for alternative clients you usually need an app password (disabled in our domain) or some other setting that you don't want to enable for 95% of your employees, since it's just a security risk in the wrong hands.
Oh, and then there are VPNs. I never managed to get Check Point Mobile working on Linux without it also requiring intervention from IT to enable some obscure configuration or protocol support.
It's a shame, but every attempt I made to switch ended exactly the same way - after a few days of running into "make sure to enable this config on the server side" or "if you don't see that option in the settings, contact your system administrator" for every tool I need for my job, I just gave up.
But I'm considering giving it another try and just going with Linux plus a Windows VM for administrative tasks. Knowing myself, though, just the small hurdle of "having to spin up a VM" would be a reason to postpone things and not do them properly, since that's additional effort… And then there's still the gamedev I do part-time, where I simply don't believe it's a good idea - after all, given the state the engines are in, it's a recipe for the "works on my machine but not in the build" or "doesn't work on my machine" kind of disaster…
Do I understand it right that what the tool does is embed install scripts for all of the other languages, which simply download a portable Deno runtime and then run the rest of the file (which is the original JavaScript code) as JavaScript?
So you basically still have an install step, it's just been automated to work cross-platform through what's essentially a polyglot install script. Meaning this could probably be done with almost any other language, assuming it has a portable runtime - such as portable Python and similar - is that correct?
Mozilla won’t implement WEI
They are going to fight against WEI. Tooth and nail, for our sakes!
Just like they did with EME, the closed-source video DRM, back in 2014. By being deeply concerned with the direction the web is going, and definitely against it, but…
We face a choice between a feature our users want and the degree to which that feature can be built to embody user control and privacy.
With most competing browsers and the content industry embracing the W3C EME specification, Mozilla has little choice but to implement EME as well so our users can continue to access all content they want to enjoy.
Despite our dislike of DRM, we have come to believe Firefox needs to provide a mechanism for people to watch DRM-controlled content.
DRM requires closed systems to operate as currently required and is designed to remove user control, so Mozilla is taking steps to find alternative solutions to DRM. But Mozilla also believes that until an alternative system is in place, Firefox users should be able to choose whether to interact with DRM in order to watch streaming videos in the browser.
https://blog.mozilla.org/en/mozilla/drm-and-the-challenge-of-serving-users/
https://hacks.mozilla.org/2014/05/reconciling-mozillas-mission-and-w3c-eme/
I'm avoiding Google as much as I can, so this definitely isn't for me. But does anyone know of a similar self-hosted solution? I'm already mostly working remotely on my desktop through Parsec, but having something like a FOSS web IDE running at home would be a slightly better solution for cases where the network speed/quality isn't good enough for streaming the whole desktop.
Ever since I discovered Parsec (or any other remote desktop streaming solution that isn't TeamViewer), I've switched from dragging around a heavy laptop that could still barely run Unreal to just carrying a Surface, remotely waking my desktop at home via WoL through a polling setup that doesn't require any public-facing service (my NAS just polls a website API for a trigger - not efficient, but secure), and connecting through Parsec.
RDP could also work, I'd wager, but then I'd have to set up a VPN, and I'm not really that comfortable with anything public-facing. But if anyone asks me now for good laptop recommendations, I always suggest going the "better desktop for the same price, plus a small laptop for remote" route.
I've yet to find a place where I couldn't work comfortably through Parsec. Since it's optimized for game streaming, the experience is pretty smooth, and it works well even at lower network speeds. You still need at least 5-10 Mbps, but if you have unlimited mobile data you're good to go almost anywhere.
I've given it some thought, and wouldn't the fact that the blockchain is public by design be a problem with regard to forward secrecy (I'm not sure I'm using the term right here, but I suppose you get the idea)? If your keys ever leaked, you'd be stuck with a lot of private data exposed and no way to pull it back.
I live in Europe and have some direct experience with how the banking system works (I was pentesting the system that shares transaction data between banks over their closed intranet), and I had no idea the US doesn't have something like that. That's interesting - it sounds like a lot of inconvenience.
A part of me is kind of looking forward to it. It may be the breaking point that finally makes me reduce my internet usage and get around to implementing Digital Minimalism, because I feel so strongly against this kind of bullshit that I refuse to use any website that keeps telling me what I can and can't do. Once I don't have control over which sites I want to support with ads, or which sites can track me and collect data about me, I will simply stop using them.
I've been slowly getting used to the reduced user experience that comes with a privacy-focused approach. Reddit and YouTube have taught me to just look elsewhere instead of logging in when prompted, LibreFox has got me used to re-logging in every time I switch tabs thanks to containers and cookie autodelete, and the subscription bullshit on every smart product has taught me to reflash and self-host the devices I can, so I already have a NAS and a pretty comfy infrastructure ready.
But I still get drawn to some social networks, or end up mindlessly procrastinating by browsing the web. This will finally be something not under my control (I tried Cold Turkey - it never lasted long) that will keep me off the internet for good. It doesn't really add much value to my life; blog posts and YouTube tutorials have been reduced to absolute basics without any value, most of them now even AI generated. If I want to learn something about a topic, it's hard to find actually interesting content that isn't the same basic tutorial for dummies, made for people without an attention span who don't want details but just want to feel like they're doing something smart with their time.
Now that I think about it, it's been a long time since I actually found something of value on the internet; the discussions here on Lemmy are one of the last few things I find interesting to engage with. But I'm too used to it all to be able to quit of my own accord, and this may just be the push I need to finally go all-in on Digital Minimalism.
I totally agree with this. And I think it actually shows a lot about people in general, and their attitude to life.
I totally understand how someone can arrive at the conclusion that unless you can monetize or fund something, it will eventually go nowhere. But that also says a lot about the person saying it, and unfortunately it's pretty common - the mere thought of doing something for free, or for others without any compensation, is basically unimaginable to them, and people like that will never get it.
But then you have passionate people volunteering for free, or creating entire events for a subculture they love at a loss, without any kind of compensation for the (large amounts of) time and work they put in. I'm part of a few such projects, mostly as a DJ, and I always find it really weird and surprising when I'm reading through posts or comments related to DJing and the hourly rate, or how much to ask for a first gig, is such a common topic. It never crossed my mind, and the communities I help out are all run by volunteers without any compensation, just because they are passionate about their subculture.
Because even if you're working a day job, there's still a lot of free time left to put into something you really care about. It's understandable that some people don't want to offer it to others for free (or can't even imagine how someone would be willing to, and probably even think they're stupid for doing so), but I'm really glad that some people are willing - and that's what the FOSS community is about.
It’s always saddening when I hear someone say “You could be making so much money for that! Just monetize it a little…”, but it’s also a really good judge of character. People are people, I guess.
I've always just used Python for smaller tasks, mainly because of its popularity - which means it's easy to quickly find example code or a library for virtually any use case you may have for such a script.
But I've lately started using PowerShell a little bit more, because it just works on any Windows machine and you don't need to install anything. And for apps that are more involved than a quick automation script or throwaway calculation, I just go with C#, since that's what I'm used to the most.
I've had the "pleasure" of having to work with Pharo, which is AFAIK based on Smalltalk, and it was one of the most frustrating experiences I've ever had with a language. It was a few years ago so the details are blurry, but as far as I remember the idea was that the whole IDE is basically a VM written in Pharo that you can edit on the fly, and it was just a mess and super strange to work with.
On the other hand, it was a great learning experience, because the Smalltalk OOP syntax and way of thinking about your code are different enough to be worth experiencing. But I still can't imagine a task for which Pharo would be a good idea, or better than literally any other language.
I concur, VS Code is amazing - I'd totally forgotten about that. TypeScript I haven't really used much, so I can't judge it, but now that I think about it, I do most of my work in C# and don't have anything bad to say about it either, so that's also nice.
Most of O365 is fine, if still a little slower than everything else, but Dynamics just isn't responsive no matter what network or hardware I'm on. It's a convoluted CRM. And I do have a new desktop that I bought a few months ago, and a 1 Gbps network, so none of that should be a factor.
I would rather see it just added to the standard as a valid character, so compilation passes without issues. It would make the lives of Greek programmers a little bit easier!
But on second thought, it would probably make the lives of compiler developers a living hell. Having to deal with two symbols for one thing sounds really annoying :D
Almost every Microsoft product is horrible. Our company has switched to O365 Dynamics from our internal back office for attendance and project time-allocation tracking, which was kind of terrible, but had only slight issues (the only one I can think of: if you wanted to create a time report for a project you're not on, enabling "all projects" pretty much froze the page for a few seconds, since it was loading hundreds of projects into a single selection box).
And oh my god, it's the worst UX I've EVER seen. Every click takes around 4-5 seconds before the page even loads. You can't simply take a project report from a single day and copy it to the whole week. No, the only thing you can do is duplicate a single record, and you still have to manually change the date in like two fields - which takes like 15 seconds, since everything, including the date picker, takes ages.
It's not intuitive in the slightest. You can't just search for a project easily - you have to open like a million submenus to find what you need, and if something is missing you have to create a bazillion different things for it before you can even track the time. It takes me literally around an hour to properly log what I was doing during the last month, and I only work part-time, so I only need to report like 10 days.
When we were switching, the company made a pitch about how it gives them more precise data about projects, but everyone I've talked to about it has just stopped bothering with precisely recording every hour against the project it belongs to, and just groups it up approximately. I can't imagine having to properly log the 3-5 different projects I work on or attend meetings for during a single day - just thinking about it makes me want to quit - so I just pick the project I was allocated to for the week and report the whole 8 hours to that. Or group a few hours of overtime into a single random day on a single project.
And it's the same with Microsoft Azure. I've worked with both GCloud and AWS, and nothing was as slow and annoying to use as Azure - but mind you, that was several years ago, it may be better now. I guess that's the price you pay for having such a large number of developers. I've heard they have whole teams for a small subset of features on every product - such as a team responsible for only one screen. That has to be such a mess.
As someone who works in gamedev, I'm sure that at least some of the people there are passionate about it, and it is gut-wrenching to see your work fail so hard. I'm sad for every project that launches after years of work and fails to get any attention or sales, and I'm definitely sure there's someone losing sleep over it.
I've never worked on super-large projects, but I did work for a AAA studio, and even there you had people invested in the project.
From what I've seen, you wouldn't work in gamedev unless you're passionate about it, because you can get drastically better pay for the same job in other, more business-focused industries. So if all you cared about was money, you'd have better options.