You just asked about one of the most confusing things in AWS service naming, because the names changed over time.
Before S3 had an archival tier, there was a separate service that AWS called Amazon Glacier, which was later renamed Amazon S3 Glacier.
Around 2012 AWS started adding archival tiers to S3 itself, which made the standalone service largely redundant. I recommend you look at S3 proper unless you have something like a Synology that can directly integrate with the older job-based API used by the original Glacier service.
So, let’s say I have a 1 TB archive file, a single tarball, and I upload it to a brand-new S3 bucket with no versioning or other special features, except a lifecycle policy that moves objects from S3 Standard to S3 Glacier Instant Retrieval after 0 days. So effectively, I upload the file and it moves to Glacier-class storage.
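For reference, here is roughly what that lifecycle rule looks like if you set it up with boto3 instead of the console. This is just a minimal sketch under stated assumptions: the bucket name is a placeholder, and GLACIER_IR is the API name for the Glacier Instant Retrieval storage class.

```python
import boto3

s3 = boto3.client("s3")

# Transition every object in the bucket to Glacier Instant Retrieval
# immediately (Days=0). The bucket name is just a placeholder.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier-ir",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 0, "StorageClass": "GLACIER_IR"}
                ],
            }
        ]
    },
)
```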
S3 Standard is roughly $24/TB/month, and let’s say worst case our data sits in Standard for one whole day before moving:
$0.77 + $0.005 (API cost of the PUT)
Then there is the lifecycle charge to move the data from Standard to Glacier, with one request per object each way. Since we only have one object the cost is:
$0.004 out of Standard
$0.02 into Glacier
Glacier Instant Retrieval is about $4.10/TB/month. Since we would be there all but one day, the cost on the first bill would be:
$3.95
From the second month onwards you would pay just the ~$4.10/month unless you are constantly adding or removing data.
Let’s say six months later you download your 1 TB archive file. That would incur a cost of up to $30.
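If you want to sanity-check that first bill, here is the same back-of-the-envelope math as a quick script. The numbers are the rough list prices quoted above, not an official rate card, so treat it as an estimate only.

```python
# Rough first-month estimate for 1 TB archived via a 0-day lifecycle rule.
# Prices are the approximate figures quoted above, not an official rate card.
TB = 1.0

standard_per_tb_month = 24.00    # S3 Standard, ~$24/TB/month
glacier_ir_per_tb_month = 4.10   # Glacier Instant Retrieval, ~$4.10/TB/month

one_day_standard = standard_per_tb_month * TB / 31               # ~$0.77
put_request = 0.005                                               # PUT request cost (rounded up)
lifecycle_out_of_standard = 0.004                                 # transition request out of Standard
lifecycle_into_glacier = 0.02                                     # transition request into Glacier IR
rest_of_month_glacier = glacier_ir_per_tb_month * TB * 30 / 31    # ~$3.95

first_month = (one_day_standard + put_request + lifecycle_out_of_standard
               + lifecycle_into_glacier + rest_of_month_glacier)
print(f"First month: ~${first_month:.2f}")   # roughly $4.77
```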
Now I know that seems complicated and expensive. It is, because it is aimed at people like me in my former role as a director of engineering, with complex needs and budgets to pay for stuff. It doesn’t make sense as a large-scale backup of personal data, unless you also want to leverage other AWS services, or you are truly just dumping the data away and will likely never need to retrieve it.
S3 is great for complying with HIPAA, feeding data into a CDN, and generally moving data around in a performant way. I’ve literally dropped a petabyte of data into S3 and it just took it and did its thing.
In my personal AWS account I use S3 as a place to dump cache content built by Lambda functions and served up by API Gateway. Doing stuff like that is super cheap. I also use private git repos (CodeCommit), a private container registry (ECR), and a container host (ECS), and it is nice to have all of that stuff just click together.
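To give a flavor of that cache pattern, the Lambda just checks S3 for a previously built object and rebuilds it on a miss. This is a hypothetical sketch, not my actual function; the bucket name, key, and build_payload helper are made up.

```python
import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-cache-bucket"  # hypothetical bucket name

def build_payload() -> dict:
    # Stand-in for whatever expensive work actually builds the cache entry.
    return {"status": "rebuilt"}

def handler(event, context):
    key = "cache/latest.json"  # hypothetical cache key
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=key)
        body = obj["Body"].read().decode("utf-8")
    except ClientError:
        # Cache miss: rebuild the content and store it for the next request.
        body = json.dumps(build_payload())
        s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"),
                      ContentType="application/json")
    # Shape expected by API Gateway's Lambda proxy integration.
    return {"statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": body}
```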
For backing up my personal computer, I use iDrive Personal and OneDrive, where I don’t have to worry about the cost per object, etc. iDrive (not an Apple service) lets you back up multiple devices to their platform and keeps them versioned.
Anyway, happy to help answer questions. Have a great day.
It’s complicated. I gave the most expensive pricing, which is their fastest tier and includes striping across three availability zones and a guarantee of 11 nines of data durability. Additionally, the easy integration with all the other AWS services and the feature richness of S3 buckets make it hard to do a fair apples-to-apples comparison unless you have really well-defined needs. So I gave the highest price to keep it simple, and for someone who says they just have a few GB, any cost should be trivial.
AWS S3 has a free tier that covers the first 5 GB. I recommend it because the AWS CLI is excellent and gives you lots of options for how to sync your data. The pricing is $0.023/GB/month after the free tier. It can be overwhelming to get into AWS, but it is worth it to have access to the ultimate IT-service Swiss army knife.
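If you ever want to script it rather than use the CLI’s built-in sync, here is a rough boto3 equivalent that walks a local folder and uploads everything. It’s a simplified sketch: unlike aws s3 sync it does no change detection, and the bucket name and folder are placeholders.

```python
import pathlib
import boto3

s3 = boto3.client("s3")
BUCKET = "my-backup-bucket"                     # placeholder bucket name
ROOT = pathlib.Path("~/backups").expanduser()   # placeholder local folder

# Walk the folder and upload each file, preserving relative paths as keys.
# This naive version re-uploads everything on each run.
for path in ROOT.rglob("*"):
    if path.is_file():
        key = path.relative_to(ROOT).as_posix()
        s3.upload_file(str(path), BUCKET, key)
        print(f"uploaded {key}")
```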
I run a lot of tech: containerized workloads in AWS, home firewalls running on Protectli boxes for my family around the country, and wireless controllers to run their APs. But as I got older, one thing I stopped rolling my own instance of was data backups. My data backs up to OneDrive and iDrive, so two copies of my data. My wife has access to both via shared credentials in a 1Password folder that she knows how to access and uses regularly.
As I got older and had a family, the pictures of our kids, wills, financial records, and insurance documents all became just too important. Every service that holds my data is paid annually for less than $200/year total and auto-renews. She could call either company and prove ownership if she ever did need help getting access. Also, I can easily share folders with her.
It’s funny how getting older makes you think of the sorts of issues enterprise teams have. Don’t implement solutions where you are only one person deep, have a succession plan, and remember that complexity is the enemy. All the tech I run now is fun and helpful, but it can be replaced with a trip to Best Buy. The data and pictures, however, must be easy for her to retrieve.
So I don’t have a good self hosted solution for you other than to say that at some point it’s ok to change your strategy. And if you are worried about privacy, you can encrypt subsets of your data locally before it is backed up.
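On the local-encryption point, the idea is just to encrypt the sensitive files before the backup client ever sees them. Here is a minimal sketch using the Python cryptography package’s Fernet recipe; the file names are placeholders, and in practice you need a plan for storing the key safely, since losing the key means losing the data.

```python
from cryptography.fernet import Fernet

# One time: generate a key and store it somewhere safe (password manager,
# printed copy in a safe, etc.). Anyone with this key can decrypt the files.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a file before dropping it into the folder your backup tool watches.
with open("taxes-2023.pdf", "rb") as f:        # placeholder file name
    ciphertext = fernet.encrypt(f.read())
with open("taxes-2023.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Later, decrypt with the same key.
with open("taxes-2023.pdf.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```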
I got into computers at a young age in the early 90s. You couldn’t really do much back then without getting knowledgeable. I learned BASIC and then assembler to follow along with magazines that shipped game code for you to type in. I later went on to build my own 16-bit computer out of NAND gates, including the ALU, and wrote a rudimentary compiler, network stack, OS, etc. Very primitive but functional. I really just wanted to figure out how it all worked through the full stack, and get my games working along the way.
I eventually learned more languages and launched a career in IT, moving through just about every role. Picked up a math degree along the way to help. Was a systems programmer on an IBM z/OS mainframe using C, Natural, and assembler. Was a .NET developer for a while, an enterprise DBA, a cloud and network engineer, and then eventually exited the technical career track through management.
So I guess I was just always interested in how computers worked, and in getting my games working. I left the technical roles once I felt I had figured out all that I really needed to and went on to other challenges. Still play games and tinker with my own projects though.
I would recommend people read the IAB ad blocker detection guide for Europe, which provides a good summary of what is possible. It lays out that, depending on how the detection is done, it might be defensible to rely on the ToS, and that to remove all risk you can implement a consent banner, a consent wall, or both.
Which is to say, even if it was ruled that YouTube can’t rely on ToS, which I don’t think is a sure thing, they would just have a consent wall like for cookies.
One thing I’ll throw in to help with dependencies: if you add a game’s installer as a non-Steam game and set its compatibility to Proton Experimental, then when you run it, it will install all the dependencies you need.
Then, after the install, edit the non-Steam game entry you created so its path points to the game executable. You can’t remove the entry you made for the installer and add a second one for the game, because adding a non-Steam game creates a Steam-managed folder that holds the dependencies, and that folder gets deleted when you remove the entry. Thus you need to edit the installer’s game entry to point to the game executable inside that Steam-created folder.
Doing this I installed Battle.net, then changed the path of the exec to the Battle.net launcher, and was able to play Blizzard games. For me it was to get Diablo II: Resurrected running for my kids on their Steam Decks, but I was super impressed by the Proton compatibility layer.
I agree. Unfortunately many folks who are attracted to security issues and topics don’t have a great holistic view of things. The idea of security is that something can go wrong and you are still OK, and that you apply context-appropriate measures. Of course sending a password through email isn’t good, but it’s a gaming forum. A security-conscious individual should have randomly generated passwords for everything and no reuse. Likewise, it wasn’t a bank or a security company, it was old forum software for public discussions, so contextually this isn’t a top concern.
The cherry on top is that it appears to have been an old screenshot and already addressed.
Right, I understand the distinction. What I’m saying is that at my credit union, I can report that a certified check has been lost. They have a waiting period of like 5 days and will then reissue the check. I mentioned my experience with the expired check because that is when I spoke to them about it.
No, my point is that the bank doesn’t need to be indemnified to cancel the first check and issue a second one. A certified check can be reported lost or stolen and reissued without a lot of fuss. It is the bank that holds the money the check is drawn on, which they have already taken out of your account. They haven’t sent it off to escrow or somewhere that puts them at risk of being out 2x.
That’s odd. I had my bank issue a certified check to pay a contractor years ago. I forget what happened but they didn’t cash it within the expiration period so the bank cancelled it and returned the funds to my account. Generally a certified check just means the bank holds the funds separately from your account until the expiration date or it gets reported as lost or damaged. Or at least that is how my credit union handles them.
That’s interesting. Any chance your ISP could have been QoS’ing streaming video? Although Singapore would be about the one place where a VPN concentrator would help; it is pretty much the big fiber hub in that region for east, west, and north connectivity.
I’ve only ever used Oracle Cloud in an enterprise environment, so I don’t know what features you have available, and I’m much more familiar with AWS. But you should be able to create a proxy endpoint in your present region and traverse the cloud provider’s internal network, which would likely improve your streaming. You could also create a VPN endpoint in your current region and terminate your traffic inside your cloud provider’s network, but that would add protocol overhead.
I would look at tools like iperf to measure your packet loss, because being further from your server will increase latency but shouldn’t impact the streaming unless you also have packet loss.
I used to oversee WAN and peering operations for a large multi-site organization. Residential ISPs almost never respond to reports of inefficient routes unless you are one of their peers or big business customers, or you really know your stuff and send in a detailed report showing asymmetric routes, bad BGP info, etc.
As far as a VPN goes, that probably wouldn’t help either; it will likely increase the number of hops and the latency. Your route will still egress your ISP’s gateway to your VPN provider, then travel over the Internet to your remote server, while adding extra protocol overhead. Yes, it is remotely possible that there is a better link from their regional VPN node to the remote provider, but that’s unlikely in my experience with traffic engineering.
I was an operations director in a prior role and oversaw the design and construction of several buildings. The last building was about $70 million, and we spent around $6 million on the design and programming.
What most folks don’t understand is the scale of the minutiae. I’ve spent an entire day of meetings hashing out floor-box standards between all parties (IT, facilities, design, construction). The amount of preliminary site studies, permit planning, etc., that goes into hundreds of miles of rail, plus stations, integrating with existing infrastructure, and so on… It’s significant.
I’ve also overseen fiber builds, and have seen costs range upwards of $500k-$1M per mile of new fiber depending on whether poles already exist, or on trenching, right of way, permits, etc.
And all of this is just the tip of the iceberg for what goes into these plans.
I used to manage site licenses for a large university, and these software companies really rake you over the coals. For example, Adobe and MATLAB wouldn’t license software for just lab computers or a subset of the student population. They required we purchase total-headcount licenses that covered everyone at the institution. In the case of MATLAB you also have to pick about a dozen of the toolbox add-ons, so it becomes a difficult task of getting the faculty to rank-sort all of the packages.
We ultimately ended up purchasing the licenses for the institution but I can understand an institution saying they can’t afford it and passing it on to the students in the classes that need it.
That’s a good takeaway. AWS is the ultimate Swiss army knife, but it is easy to misconfigure. Personally, when you are first learning AWS, I wouldn’t put more data in than you are willing to pay for on the most expensive tier. AWS also gives you options to set price alerts, so if you do start playing with it, spend the time to set cost alerts so you know when something is going awry.
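As one concrete way to do that, here is a sketch of a CloudWatch billing alarm via boto3. I’m assuming billing alerts are already enabled on the account and that an SNS topic exists for notifications (the ARN below is a placeholder); AWS Budgets in the console is another, arguably simpler, route.

```python
import boto3

# Billing metrics live in us-east-1 regardless of where your resources run.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-10-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,               # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=10.0,             # alert once estimated charges pass $10
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder SNS topic
)
```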
Have a great day!