That’s like forcing people to have different colored shoelaces and calling it good practice. In reality it changed nothing, but it forced a lot of people to spend time fixing their scripts and automation tools for the sake of change, instead of spending that time writing actual code and fixing bugs.
This, sooo much this! People don’t realize that this change created a lot of unnecessary work for a lot of developers for no other reason than PR, or to act smug about it. They solved the slavery problem by renaming master to main about as well as they solved homophobia and transphobia by allowing people to specify pronouns on their profiles. Who the hell cares if you identify as tree sap. Many do care, however, if your code sucks or doesn’t follow the coding style.
This is a huge pain in the ass for us as well. We have some automation around development environments and deployment of certain scripts. We had to redo a good chunk of them to first test whether there’s a main or a master branch. And it took us a long time to find stragglers that weren’t updated as frequently but would suddenly break deployment after minor changes.
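For what it’s worth, the detection doesn’t have to be guesswork: `git ls-remote --symref origin HEAD` reports the remote’s default branch directly, whatever it is called. A sketch in Python (the function names are mine, not from any tool mentioned here):

```python
import re
import subprocess

def parse_default_branch(ls_remote_output):
    """Extract the default branch from `git ls-remote --symref <remote> HEAD` output.

    The first line of that output looks like: 'ref: refs/heads/main\tHEAD'
    """
    m = re.search(r"^ref:\s+refs/heads/(\S+)\s+HEAD", ls_remote_output, re.M)
    return m.group(1) if m else "master"   # fall back to the historical default

def default_branch(remote="origin"):
    """Ask the remote which branch is the default instead of assuming one."""
    out = subprocess.run(
        ["git", "ls-remote", "--symref", remote, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_default_branch(out)
```

Dropping something like this into the automation once beats hardcoding either name.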
Had to refresh my memory, it’s been a while. They didn’t change the branch on existing projects, but they did change it to main by default on new repos. Our tools indeed created repositories and configured everything for the developer automatically. However, GitHub’s policy meant that we had to either change the tools to detect whether they were working with an old repo or a new one, or go to every new project after automatic configuration failed, set the default branch, and then rerun the tool. The same thing then happened to a few of our tools that were used for CI.
All in all, they made more work for us for no reason other than to be smug about it, and it changed exactly nothing.
They forced the change. If I wanted otherwise, I had to go and specify per project that master was the default branch, and there were many of those. And the whole “insanely fragile” bit is just nonsense. Or are you trying to tell me people have conditions in their scripts that detect what the default branch is and use that, instead of assuming a default name that hadn’t changed in 15 years would remain the default?
Whether you like Linus or not, whatever is released to users stops being a bug and becomes a feature. Not breaking user space is a must. Instead they achieved nothing and caused a lot of unnecessary work for a lot of developers.
Like u/MrMcGasion said, zeroing makes it easier to recover the original data. Data storage and signal processing are pretty much a game of threshold values. From the digital world you might see a 0 or a 1, but in reality it’s a charge on a certain scale, let’s say 0 to 100%. Anything above 60% would be considered a 1 and anything below 45% a 0. Or something like that.
When you zero the drive, the drive reduces the charge enough to pass below the lower limit, but it will not be 0 by any account. With custom firmware or special tools it is possible to adjust this threshold, and all of a sudden it is as if your data was never removed. Add to this the existence of checksums, and total removal of data becomes a real challenge. Hence why all these tools do more than one pass to make sure data is really zeroed or removed.
For this reason, random data is a much better approach than zeroing, because random data alters each block differently instead of just reducing the charge by a fixed amount, as zeroing does. Additional safety is achieved by multiple random-data passes.
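The threshold argument can be sketched as a toy model. All numbers here are made up purely for illustration, not real drive physics:

```python
import random

READ_THRESHOLD = 50       # charge above this reads as a 1 (illustrative value)
FORENSIC_THRESHOLD = 20   # lowered threshold a recovery tool might configure

def read(cells, threshold):
    """Interpret analog charge levels as bits at a given threshold."""
    return [1 if charge > threshold else 0 for charge in cells]

def zero_wipe(cells):
    # Zeroing drags every charge down by roughly the same amount,
    # so the residue still mirrors the original pattern.
    return [max(0, charge - 60) for charge in cells]

def random_wipe(cells):
    # Random data re-drives each cell to a level unrelated to the original.
    return [random.choice([5, 95]) for _ in cells]

original = [95, 5, 90, 10, 85]            # stored bits: 1 0 1 0 1
zeroed = zero_wipe(original)              # [35, 0, 30, 0, 25]

print(read(zeroed, READ_THRESHOLD))       # [0, 0, 0, 0, 0] - looks erased
print(read(zeroed, FORENSIC_THRESHOLD))   # [1, 0, 1, 0, 1] - original is back
```

After a random wipe there is no fixed offset to subtract, so no single threshold recovers the pattern, and each extra random pass scrambles the residue further.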
All of this plays a role only on magnetic storage, that is to say HDDs. An SSD is a completely different beast, and wiping an SSD can reduce the lifespan of the drive without actually achieving the desired result. SSDs have wear-leveling algorithms which make sure every block is used equally. So while your computer thinks it’s writing something at the beginning of the drive, in reality that block can be anywhere on the device, and the address is just internally translated to the real one.
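A toy flash translation layer makes that last point concrete: “overwriting” a logical address lands on a fresh physical block, and the stale data survives elsewhere until garbage collection. The class and names below are invented for illustration; real FTLs also track wear statistics and erase in larger units:

```python
class ToyFTL:
    """Toy flash translation layer: logical writes land on fresh physical blocks."""

    def __init__(self, physical_blocks):
        self.free = list(range(physical_blocks))  # simplistic free list, no wear stats
        self.mapping = {}                         # logical address -> physical block
        self.flash = {}                           # physical block -> data

    def write(self, logical, data):
        physical = self.free.pop(0)               # pick the next fresh block
        old = self.mapping.get(logical)           # old physical block, if any
        self.mapping[logical] = physical
        self.flash[physical] = data
        # The old physical block still holds the stale data until garbage collection.
        return old

    def read(self, logical):
        return self.flash[self.mapping[logical]]

ftl = ToyFTL(8)
ftl.write(0, b"secret")
stale = ftl.write(0, b"\x00" * 6)   # "overwrite" logical block 0 with zeros
# ftl.read(0) now returns zeros, yet ftl.flash[stale] still holds b"secret"
```

This is why software overwrites on SSDs mostly just burn write cycles; the drive’s own secure-erase mechanism is the tool meant for this job.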
Technically they can handle 300 clients, if none of them are talking. With any wireless communication, only one device can talk at a time, maybe two if sending and receiving work on different frequencies, which is not the case with WiFi. So no matter what the manufacturer says, on 2.4GHz fewer clients can talk, because the bandwidth is lower and sending/receiving packets takes time. Whenever possible, stay away from WiFi. The more you use it, the worse it will get.
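Some back-of-envelope airtime math shows why the client count on the box is mostly fiction. All figures below are illustrative guesses, not from any datasheet:

```python
# The channel is shared airtime, so per-client throughput collapses as
# the number of active clients grows.

phy_rate_mbps = 72   # e.g. nominal single-stream 2.4 GHz link rate (assumed)
efficiency = 0.5     # rough guess for preamble/ACK/contention overhead
usable_mbps = phy_rate_mbps * efficiency   # ~36 Mbit/s of real shared throughput

for clients in (10, 50, 300):
    per_client = usable_mbps / clients
    print(f"{clients:>3} active clients -> ~{per_client:.2f} Mbit/s each")
```

At 300 simultaneously active clients that works out to roughly a tenth of a megabit each, before retransmissions make it worse.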
Love the fact the community is already mocking their distribution issues. While I had a Twitter account, their PR team was going full force demonstrating how it can be used and promoting projects that use it… all the while it was out of stock everywhere, constantly. I had a number of sites “notify” me when it was back in stock, only for it to be sold out seconds later. Luckily a kind person shared a site which tracks where it can be purchased and for how much, but the mere fact such a tool has to exist shows there’s a serious problem.
I love IRC. Love its simplicity and the instantaneous nature of its messages. Nothing feels as real-time as chat over IRC does. It’s also dead simple to implement and self-host. The only downside is iffy file transfers, which don’t work unless you have a public IP. Inline images would be useful. Perhaps the time is ripe for an IRC+ protocol. Add a few extensions and you are good.
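The “dead simple” part is no exaggeration: IRC is plain lines of text over TCP. A minimal sketch of the line handling, nowhere near a full client, just enough to show the shape of the protocol:

```python
def handle_line(line):
    """Handle one raw IRC line: answer server PINGs, surface PRIVMSGs.

    Returns the reply to send (for PING), a display string (for PRIVMSG),
    or None for anything this sketch ignores.
    """
    if line.startswith("PING "):
        # Keep-alive: echo the server's token back as a PONG.
        return "PONG " + line[5:]
    if line.startswith(":") and " PRIVMSG " in line:
        # Format: ":nick!user@host PRIVMSG #target :message text"
        prefix, _, rest = line[1:].partition(" PRIVMSG ")
        nick = prefix.split("!", 1)[0]
        target, _, text = rest.partition(" :")
        return f"<{nick} -> {target}> {text}"
    return None

print(handle_line("PING :irc.example.net"))            # PONG :irc.example.net
print(handle_line(":alice!u@h PRIVMSG #chat :hello"))  # <alice -> #chat> hello
```

Wrap that around a plain TCP socket, send NICK and USER on connect, and you have a working bot in well under a hundred lines.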
I purchased cheap Anran-branded IP cameras. So far they’ve been meh, but they don’t require an app to set up. They do support ONVIF, but you have to configure it through the web interface. For me that’s good enough. Configure once, then ban internet access, since no Chinese stuff is ever getting access to the internet from my network.
Whether they will work with the mentioned software, I have no idea. I run my own home-grown software for security cameras, basically implementing an FTP server where the most recent events are stored in Redis and the high-resolution images are stored elsewhere on the hard disk.
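Not the actual implementation, but the storage split described above could be sketched like this, with Redis stood in by a plain Python list and every name invented for the example:

```python
import hashlib
import time

class EventStore:
    """Sketch: hot event metadata kept in a capped list (stand-in for Redis),
    full-resolution image bodies destined for a directory on disk."""

    def __init__(self, image_dir, keep=100):
        self.image_dir = image_dir
        self.keep = keep        # how many recent events stay "hot"
        self.recent = []        # stand-in for a capped Redis list

    def ingest(self, camera, image_bytes, now=None):
        ts = now if now is not None else time.time()
        name = hashlib.sha1(image_bytes).hexdigest() + ".jpg"
        path = f"{self.image_dir}/{camera}/{name}"
        # In a real system the image body is written to `path` here,
        # and the metadata dict goes to Redis instead of a local list.
        self.recent.append({"camera": camera, "ts": ts, "path": path})
        del self.recent[:-self.keep]   # cap the hot list, like LTRIM in Redis
        return path
```

The point of the split is that queries for “latest events” never touch the disk full of images.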
Okay, so many of these answers are just plain wrong. In short, you shouldn’t care, as the biggest impact will be on network admins. They are the ones who have to configure routing and handle everything else that comes with the new addresses. The rest of the world simply doesn’t know or notice whether they are using IPv4 or v6. Business as usual.
If the question is whether you should play with it at home: sure thing, if you have the desire to. It’s the future and only a matter of time before it becomes reality. Said network admins and ISPs have been delaying the transition, since they are the ones who have to work it out, and putting your entire user base behind a single IPv4 NAT is simpler than moving everything to IPv6.
From a network admin perspective, yes, it’s worth moving to IPv6, since network topology becomes far simpler with it. Fewer sub-networks, and fewer routing rules to handle them. Less hardware to handle NAT and other stuff. The problem is, they made their own bed, and switching to IPv6 becomes harder the longer you delay it. The number of users in the past 10 years or so has skyrocketed. Easily quadrupled. We used to have home computers on dial-up. Easy enough: assign an IP when you connect, release it on disconnect. Then broadband came and everyone was online 100% of the time. Then mobile phones, which are also online 100% of the time. Then smart devices, and now cars and other devices are starting to get public internet access, etc. As the number of users increases, network admins keep adding complexity to their networks to handle them. If you don’t have a public IP, just run traceroute and see how many internal network hops you have.
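A quick way to count those internal hops, assuming you feed it the hop IPs from a traceroute run (the sample hop list in the comment is hypothetical):

```python
import ipaddress

def private_hops(hop_ips):
    """Count hops in RFC 1918 private space or the 100.64.0.0/10 CGNAT range,
    i.e. routers inside your ISP's NATed network."""
    cgnat = ipaddress.ip_network("100.64.0.0/10")
    count = 0
    for ip in hop_ips:
        addr = ipaddress.ip_address(ip)
        if addr.is_private or addr in cgnat:
            count += 1
    return count

# Hypothetical hop list from a carrier-grade-NATed connection:
hops = ["192.168.1.1", "100.64.12.1", "10.0.5.9", "1.1.1.1"]
print(private_hops(hops))   # 3
```

On a connection with a real public IP, only your own router (if any) shows up as private; behind carrier-grade NAT you typically see several.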
The point is not who made it, but the PC itself. Here’s the plain link, since clicking through the video description was too hard: https://www.bee-link.com/catalog/product/index?id=493
I have a small 6U rack in my hallway, which is where all the server stuff sits. There are 1U UPS units, but I haven’t had the need for one yet. However, after replacing the motherboard on this current machine I forgot to turn on the option to auto-start after a power failure. My servers are mostly for collecting data on temperature, humidity and other metrics around the house, the greenhouse and other parts of the property. The same machine also collects surveillance data from cameras around the property, which detect human and animal shapes.
So since the machine rarely does long-term calculations or data processing, it’s okay that it doesn’t have a UPS; no data would be coming in anyway without power.