• 0 Posts
  • 35 Comments
Joined 1Y ago
Cake day: Jul 02, 2023


I have a small 6U rack in my hallway where all the server stuff sits. There are 1U UPS units that would fit, but I haven’t had the need for one yet. However, after replacing the motherboard in this machine I forgot to turn on the option to auto-start after a power failure. My servers mostly collect data on temperature, humidity and other metrics around the house, the greenhouse and other parts of the property. The same machine also collects surveillance footage from the cameras around the property, which detect human and animal shapes.

So since the machine rarely does long-term calculations or data processing, it’s okay that it doesn’t have a UPS; no data would be coming in anyway without power.



While true, Git also supports symlinks, so nothing is stopping you from having a modules/ directory (or something similar) and then linking parts of it elsewhere in your project.
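A rough sketch of what I mean (made-up paths, and it assumes you’re already at the root of an existing Git repo):

```python
import os
import subprocess

# Hypothetical layout: shared code lives in modules/, and a sub-project
# references part of it through a symlink that Git tracks.
os.makedirs("modules/sensors", exist_ok=True)
os.makedirs("project/lib", exist_ok=True)

# Relative target so the repo stays relocatable; resolves to modules/sensors
os.symlink("../../modules/sensors", "project/lib/sensors")

# Git stores the link itself (mode 120000), not the files behind it
subprocess.run(["git", "add", "project/lib/sensors"], check=True)
```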


Hardware is complex and mysterious enough without the added complexity of an esoteric language.


Do people still think, after all this time and so many different languages, that there will be one language to rule them all? I mean technically you can drive nails with a rock, but you don’t see a carpenter using one. Right tool for the job. Always was, always will be.


Can be compressed very efficiently. I do dread the thought of writing a driver in brainfuck.


More to the point, it refers to the relation between the elements, not to the political correctness of the act. It’s just how the terminology is used in books, and reading one doesn’t imply you’re a racist or condone slavery.


For a while, yes, you had to. Every new repo would be main while old ones remained master. Tools that default to a specific branch aside, now you had to remember and check which branch you were merging into every time.


That’s like forcing people to have different-colored shoelaces and calling it good practice. In reality it changed nothing, but it forced a lot of people to work on fixing their scripts and automation tools for the sake of change instead of spending that time writing actual code and fixing bugs.


This, sooo much this! People don’t realize that this change created a lot of unnecessary work for a lot of developers for no other reason than PR, or to act smug about it. They solved the slavery problem by renaming master to main about as well as they solved homophobia and transphobia by letting people specify pronouns on their profiles. Who the hell cares if you identify as tree sap? However, many do care if your code sucks or doesn’t follow the coding style.


I can’t wait for asshats to start calling for gender-fluid connectors. What’s that, a male 3.5mm connector? Did you just assume my connector’s identity?


How can you? I would understand if you had to, but the Mercurial/Git approach is so much more flexible.


This is a huge pain in the ass for us as well. We have some automation around our development environment and the deployment of certain scripts. We had to redo a good chunk of them to first test whether the default branch is main or master. And it took us a long time to find the stragglers that weren’t updated as frequently but would suddenly break deployment after minor changes.
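The check itself is trivial; the annoying part was finding every place that needed it. Roughly something like this (a simplified sketch, not our actual scripts, with a made-up repo path):

```python
import subprocess

def default_branch(repo_path="."):
    """Return whichever of main/master exists in the repo.

    Simplified sketch: a real script would also handle repos that use
    neither name, e.g. by asking the remote for its HEAD.
    """
    for branch in ("main", "master"):
        result = subprocess.run(
            ["git", "-C", repo_path, "rev-parse", "--verify", "--quiet",
             f"refs/heads/{branch}"],
            capture_output=True,
        )
        if result.returncode == 0:
            return branch
    raise RuntimeError("no main or master branch found")

branch = default_branch("/srv/repos/deploy-scripts")   # hypothetical path
subprocess.run(["git", "-C", "/srv/repos/deploy-scripts", "checkout", branch])
```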


Had to refresh my memory, it’s been a while. They didn’t change the branch on existing projects, but they did change the default to main on new repos. Our tools indeed created repositories and configured everything for the developer automatically. However, GitHub’s policy meant that you had to either change the tools to detect whether they were working with an old repo or a new one, or go to every new project after the automatic configuration failed, configure the default branch, and then rerun the tool. The same thing then happened to a few of our tools that were used for CI.

All in all, they made more work for us for no reason other than to be smug about it, and it changed exactly nothing.


They forced the change. If I wanted otherwise, I had to go and specify per project that master was the default branch, and there were many of those. And the whole “insanely fragile” thing is just nonsense. Or are you trying to tell me people have conditions and scripts that detect the default branch and use that, instead of assuming a default name that hadn’t changed in 15 years would remain the default?

Whether you like Linus or not, whatever is released to users stops being a bug and becomes a feature. Not breaking user space is a must. Instead they achieved nothing and caused a lot of unnecessary work for a lot of developers.


Oh, how upset I was by that decision. I still call GitHub out online every now and then, thanking them for solving slavery by messing up my deployment scripts and development environments.


Just use brainfuck for everything. The entry barrier for the programming industry needs to be higher anyway.


Like u/MrMcGasion said, zeroing leaves the original data easier to recover. Data storage and signal processing is pretty much a game of threshold values. From the digital side you see a 0 or a 1, but in reality it’s a charge on a scale, let’s say 0 to 100%. Anything above 60% would be read as a 1 and anything below 45% as a 0, or something like that.

When you zero the drive, the drive reduces the charge enough to drop below the lower threshold, but it will not be 0 by any account. With custom firmware or special tools it is possible to adjust this threshold, and all of a sudden it’s as if your data was never removed. Add the existence of checksums to that and total removal of data becomes a real challenge. Hence why all these tools do more than one pass to make sure the data is really zeroed or removed.

For this reason, random data is a much better approach than zeroing, because random data alters each block differently instead of just reducing the charge by a fixed amount, as zeroing does. Additional safety is achieved by multiple passes of random data.
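As a toy illustration of the threshold idea (the numbers are made up and this is not a real wiping tool, just the argument above in code):

```python
import random

# Each bit is stored as a "charge" from 0-100, normally read as 1 above 60.
original_bits = [1, 0, 1, 1, 0, 0, 1, 0]
charge = [random.uniform(80, 100) if b else random.uniform(0, 20)
          for b in original_bits]

# "Zeroing": charge is only pulled down below the normal threshold, so a
# lowered read threshold (custom firmware / special tools) still sees the
# old pattern.
zeroed = [max(0.0, c - 70) for c in charge]
recovered = [1 if c > 10 else 0 for c in zeroed]   # lowered threshold
print(recovered == original_bits)                  # almost always True

# Random overwrite: every cell ends up unrelated to the original value,
# so no choice of threshold brings the old pattern back.
randomized = [random.uniform(0, 100) for _ in charge]
```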

All of this only plays a role on magnetic storage, that is to say HDDs. An SSD is a completely different beast, and wiping an SSD can reduce the lifespan of the drive without actually achieving the desired result. SSDs have wear-leveling algorithms which make sure all blocks are used equally. So while your computer thinks it’s writing something at the beginning of the drive, in reality that block can be anywhere on the device and the address is just internally translated to the real one.


No need, the Raspberry Pi has been avoiding us. Finding one to purchase has become a tiresome errand.


Purely technically speaking, you can fit all of the wireless bands into a single optical fiber and have room to spare. Then you can run fibers in parallel.
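Back-of-envelope with round numbers, using just the Wi-Fi bands as an example (the band widths and the fiber figure are approximate):

```python
# Approximate spectrum occupied by the Wi-Fi bands (MHz)
wifi_mhz = 83.5 + 600 + 1200         # 2.4 GHz + 5 GHz + 6 GHz bands, roughly
# Usable optical bandwidth of just the C-band of single-mode fiber (MHz)
fiber_c_band_mhz = 4_400_000         # ~4.4 THz

print(fiber_c_band_mhz / wifi_mhz)   # ~2300x headroom in one band of one fiber
```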


Technically they can handle 300 clients, if none of them are talking. With any wireless communication, only one device can talk at a time, maybe two if sending and receiving happen on different frequencies, which is not the case for WiFi. So no matter what the manufacturer says, on 2.4GHz fewer clients can talk, because the bandwidth is lower and sending/receiving packets takes time. Whenever possible, stay away from WiFi. The more you use it, the worse it gets.
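Rough airtime math, assuming perfectly fair sharing and a typical single-stream 2.4GHz link (contention and retries make the real numbers worse):

```python
# The channel is shared, so per-client throughput drops with every client
# that actually talks. 50 Mbps is a ballpark for a 20 MHz 2.4 GHz channel.
usable_mbps = 50
for active_clients in (1, 10, 50, 300):
    print(f"{active_clients:>3} clients -> ~{usable_mbps / active_clients:.2f} Mbps each")
```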


Love that the community is already mocking their distribution issues. While I had a Twitter account, their PR team was going full force demonstrating how it can be used and promoting projects that use it… all the while it was out of stock everywhere, constantly. I had a number of sites “notify” me when it was back in stock, only for it to be sold out seconds later. Luckily a kind person shared a site which tracks where it can be purchased and for how much, but the mere fact such a tool has to exist shows there’s a serious problem.


So the current benefit is: it’s small? At that point, run tablets. :)


Had this been Emacs it would have been funny. But with Vim you don’t memorize key bindings. Vim has operations and motions: a few of each, and you combine them (dw deletes a word, ci( changes inside parentheses).


Matrix is anything but great. I tried using it for months and forced my employees to use it for business communication. Worst decision I’ve made of late. Messages would get delayed or never arrive, frequent issues with clients, server drops, etc. Gave up on it a long time ago.


I love IRC. I love its simplicity and the instantaneous nature of its messages. Nothing feels as real-time as chat over IRC does. It’s also dead simple to implement and self-host. The only downside is iffy file transfers, which don’t work unless you have a public IP. Inline images would be useful. Perhaps the time is ripe for an IRC+ protocol: add a few extensions and you are good.
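To show how thin the protocol is, this is roughly all it takes to connect and say something (server, nick and channel are placeholders, and error handling is left out):

```python
import socket

HOST, PORT = "irc.example.org", 6667     # hypothetical server

with socket.create_connection((HOST, PORT)) as sock:
    def send(line: str) -> None:
        sock.sendall((line + "\r\n").encode())

    # Register with the server
    send("NICK somenick")
    send("USER somenick 0 * :Some Nick")

    for raw in sock.makefile(encoding="utf-8", errors="replace"):
        line = raw.rstrip("\r\n")
        print(line)
        if line.startswith("PING"):                  # keepalive
            send("PONG " + line.split(" ", 1)[1])
        elif " 001 " in line:                        # welcome: registration done
            send("JOIN #somechannel")
            send("PRIVMSG #somechannel :hello from a few lines of Python")
```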


Matrix is not even close to IRC. What makes you think that?


I purchase cheap Anran-branded IP cameras. So far they’ve been meh, but they don’t require an app to set up. They do support ONVIF, but you have to configure it through the web interface. For me that’s good enough. Configure once, then ban internet access, since no Chinese stuff is ever getting access to the internet from my network.

Whether they will work with the mentioned software, I have no idea. I run my own sort of software for the security cameras, basically by implementing an FTP server, where the most recent events are stored in Redis and the high-resolution images are stored elsewhere on a hard disk.
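The split looks roughly like this (a simplified sketch rather than the actual code, with made-up key names and paths):

```python
import json
import os
import time

import redis   # redis-py; assumes a local Redis instance

r = redis.Redis()

def store_event(camera_id: str, event: dict, image: bytes) -> None:
    """Keep recent event metadata in Redis, the full-resolution image on disk."""
    ts = time.time()
    image_dir = f"/data/cameras/{camera_id}"           # made-up layout
    os.makedirs(image_dir, exist_ok=True)
    image_path = f"{image_dir}/{int(ts)}.jpg"
    with open(image_path, "wb") as f:
        f.write(image)

    key = f"events:{camera_id}"                         # made-up key scheme
    r.lpush(key, json.dumps({**event, "ts": ts, "image": image_path}))
    r.ltrim(key, 0, 999)                                # keep the last 1000 events
```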


I go with main character names from good anime. So Kusanagi, Vash, Lelouch, Kakashi, etc.


Okay, so many of these answers are just plain wrong. In short, you shouldn’t care, as the biggest impact will be on network admins. They are the ones who have to configure routing and handle everything else that comes with the new addresses. The rest of the world simply doesn’t know or notice whether it is using IPv4 or v6. Business as usual.

If the question is whether you should play with it at home: sure thing, if you have the desire to. It’s the future and only a matter of time before it becomes reality. Said network admins and ISPs have been delaying the transition, since they are the ones who have to work it out, and putting your entire user base behind a single IPv4 NAT is simpler than moving everything to IPv6.

From a network admin’s perspective, yes, it’s worth moving to IPv6, since the network topology becomes far simpler with it: fewer sub-networks, and fewer routing rules to handle them. Less hardware to handle NAT and other stuff. The problem is, they made their own bed, and switching to IPv6 becomes harder the longer you delay it.

The number of users in the past 10 years or so has skyrocketed, easily quadrupled. We used to have home computers on dial-up: easy enough, assign an IP on connect, release it on disconnect. Then broadband came and everyone was online 100% of the time. Then mobile phones, also online 100% of the time. Then smart devices, and now cars and other devices are starting to get public internet access, etc. As the number of users increases, network admins keep adding complexity to their networks to handle them. If you don’t have a public IP, just run a traceroute and see how many internal network hops you go through.


There’s no “should in theory”. It’s only a possibility due to the sheer number of possible combinations. No one was ever going to make every device public; it makes absolutely no sense. Why would your company’s printer be publicly reachable instead of sitting on an isolated network or behind a VPN? There’s no point.


Haha, no, not really. IPv6 has the ability to provide a public IP address for each device, but that doesn’t mean it has to. Other than the number of possible addresses, nothing is different. Routing, firewalls, NAT, etc. all remain the same.


I do remember there was some drama, but to be honest I never followed them, nor do I follow them now. I saw the video a few days ago, found the hardware presented interesting and shared it. That about sums it up.


The point is not who made it, but the PC. Here’s the direct link, since clicking through the video description was too hard: https://www.bee-link.com/catalog/product/index?id=493