• 0 Posts
  • 34 Comments
Joined 1Y ago
Cake day: Jul 02, 2023


I too have forgotten to memset my structs in C++ TensorFlow after prototyping in Python.
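For anyone who hasn’t been bitten by this: a C++ struct you declare without an initializer just contains garbage, which Python never prepares you for. A minimal sketch (Options is a made-up struct here, not a TensorFlow type):

```cpp
#include <cstring>

struct Options {
    int num_threads;
    bool use_gpu;
    float dropout;
};

int main() {
    Options a;                       // members are uninitialized garbage
    Options b{};                     // value-initialization zeroes every member
    Options c;
    std::memset(&c, 0, sizeof(c));   // the classic fix; only safe for trivially copyable types
    (void)a; (void)b; (void)c;       // silence unused-variable warnings
}
```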


I don’t think either is actually true. I know many programmers who can fix a problem once the bug is identified, but who wouldn’t be able to find it themselves, nor would they be able to determine whether a bug is exploitable without significant coaching.

Exploit finding is a specific skill set that requires thinking about multiple levels of abstraction simultaneously (or intentionally methodically). I have found that most programmers simply don’t do this.

I think the definition of “good” comes into play here, because the vast majority of programmers need to dependably discover solutions to problems that other people find. Ingenuity and multilevel abstract thinking are not critically important and many of these engineers who reliably fix problems without hand holding are good engineers in my book.

I suppose that it could be argued that finding the source of a bug from a bug report requires detective skills, but even this is mostly guided inspection with modern tooling.



Sorry. I apologize.

It’s frustrating trying to explain the same thing over and over again…

The tokens are how DRM works. The process of DRM is token validation and the enforcement of the intellectual property rights granted by tokens.

I don’t know how else to explain it. It feels like I am back at my original post. I don’t know if you understand any better or if you still have misconceptions about what NFTs are or what DRM is or if you still think there is some magic in NFTs.


Again, all of this already existed and will continue to exist with or without blockchain. There is very little novel in the implementation details of the tokens. The people who came up with the idea for “NFTs” didn’t come up with a new idea. This isn’t some new math. The only portion of NFTs that is new is the cooperative signing… Which, again, isn’t a new concept either.

Right now, everything you described… Literally all of it… Ubisoft implements for their launcher and enforces with their DRM solution.


NFTs, digital tokens, already exist. Their use in the protection of copyright is called DRM. “NFTs” bring nothing new to the table of digital rights or copyright… And a whole host of stupidity.


Without knowing the exact model it’s difficult to know for certain, but you can buy off-brand refill kits with chips. The printer may intentionally degrade quality with the aftermarket chips (and may never reset itself even if you return to official toner)… HP is just a terrible company.


It depends on what you are trying to do… There are many tunnel / reverse proxy routing services like https://www.cloudflare.com/products/tunnel/

Here’s a list https://github.com/anderspitman/awesome-tunneling

You can also get a super cheap VPS, do some SSH reverse tunnel magic, and go about your day.


One of my favorites is the fast inverse square root solution.

It’s like Fermat’s little theorem: meh, this is easy, fuck you.

The rest of the world: what in the ever loving fuck is going on here? How in the… Jesus Christ… How did you?!? What is this black magic??? What part of your soul did you sell for this?
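For anyone who hasn’t seen it, this is roughly the code in question (the well-known Quake III version, reproduced from memory, with memcpy standing in for the original pointer cast):

```cpp
#include <cstdint>
#include <cstring>

// Approximates 1 / sqrt(number) without ever calling sqrt.
float fast_inverse_sqrt(float number) {
    float half = number * 0.5f;
    float y = number;

    std::uint32_t i;
    std::memcpy(&i, &y, sizeof(i));   // reinterpret the float's bits as an integer
    i = 0x5f3759df - (i >> 1);        // the "black magic": a bit-level first guess
    std::memcpy(&y, &i, sizeof(y));

    y = y * (1.5f - half * y * y);    // one Newton-Raphson step refines the guess
    return y;
}
```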


I’ll answer in a couple of different ways.

  1. If I am writing library code, my why is that you have an end use; I don’t care why you use it, and you don’t care why I wrote it. You only care about what my code does so you can achieve your why.

  2. If we are working on the same code, we have different whys but the same what. Then your comment as to why isn’t the same as mine, which makes the comment incorrect.

  3. We are looking at a piece of code and you want to know how it works, because the stated what is wrong (bugs). This might be the “why” you are looking for, but I call this a “how”. This is the case where self documenting code is most important. Code should tell a second programmer how the code achieves the what without needing an additional set of verbose comments. The great thing about code is that it is literally the instructions on the how. The problem is conveying the how to other programmers.

There are three kinds of how: the self-evident; complex hows that require multiple levels of abstraction and lots of code; and short, complex hows that are not apparent.

The third is where most people get into trouble. Almost all of these cases of complexity can be solved with only a single layer of abstraction and still achieve easily readable, self-documenting code. The problem in many cases is that they start as a one-off, and people are lousy at putting in the work on a one-off solution. Sometimes the added work of abstraction, and of building a performant abstraction, turns a small task into a large one. In these cases comments can make sense.

Sometimes these short, complex hows require specialists. Database queries, performant Perl/functional queries, algorithmic operations, complex compile-time-optimized templates (or other language-specific optimizations) and the like are some of the most common examples. This category of problem benefits most from a well-defined interface with examples for use (which might be comments). The how of these is not as valuable for the average developer, and often requires specialist knowledge to understand regardless of comments. In these cases what they do is far more valuable than how or why.
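To make that concrete, here’s a hypothetical sketch of what I mean by a well-defined interface: the name, parameters, and example of use carry the what, so most readers never have to dig into the specialist how behind it. (In practice the body would be the hand-tuned query or template machinery; a trivial loop stands in for it here.)

```cpp
#include <vector>

struct Order {
    int customer_id;
    int days_since_placed;
};

// Returns the customer ids of every order older than max_age_days.
// Example use:
//   std::vector<int> stale = FindStaleCustomers(orders, /*max_age_days=*/90);
std::vector<int> FindStaleCustomers(const std::vector<Order>& orders,
                                    int max_age_days) {
    std::vector<int> stale;
    for (const Order& order : orders) {
        if (order.days_since_placed > max_age_days) {
            stale.push_back(order.customer_id);
        }
    }
    return stale;
}
```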


See, I think length limits and readability are sometimes at odds. To say that you 100% believe in length limits means that you would prefer the length limit over a readable line of code in those situations.

I agree that shorter lines are often more readable. I also think artificial limits on length are crazy. Guidelines, fine. Verbosity for the sake of verbosity isn’t valuable… But to say never is a huge stretch. There are always those weird edge cases that everyone hates.


This is a pretty ridiculous position to take but if you believe it then I’m glad you write the comments you do.

There is an argument that commenting on the lack of expected code is valuable for this reason, but it certainly isn’t true in all situations.


The most important thing is comprehension. If something is too long and the length makes it less readable then it is too long.

But if having 3-4 files open at the same time makes it harder for you to comprehend a single file because you can’t get the full picture, that’s on you.


I understand the concern, but readability and comprehension are way more important than line length. If the length impairs readability, it’s too long. Explicit limits are terrible. Guidelines, fine.

Ultimately, you do you. I still think you’re crazy, and I think your argument is poor.


I too used to think generics were superior until I learned parameter packs, type traits and SFINAE.
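For anyone curious what that buys you, a minimal sketch: the overload only exists when every argument is arithmetic, and the parameter pack lets it take any number of them.

```cpp
#include <iostream>
#include <type_traits>

// sum() participates in overload resolution only when every argument is an
// arithmetic type (SFINAE via enable_if over a fold of type traits).
template <typename... Ts,
          typename = std::enable_if_t<(std::is_arithmetic_v<Ts> && ...)>>
auto sum(Ts... values) {
    return (values + ... + 0);   // C++17 fold expression over the parameter pack
}

int main() {
    std::cout << sum(1, 2.5, 3u) << '\n';   // prints 6.5
    // sum("not", "numbers");               // substitution fails: no such overload
}
```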


I, too, remember the days before ultra high definition ultra wide monitors.

I thought this argument was bogus in the 90s on a 21" CRT and the argument has gotten even less valid since then. There are so many solutions to these problems that increase productivity for paltry sums of money it’s insane to me that companies don’t immediately purchase these for all developers.


I used to be this way about C++ too… But C++17/20 is not the same language it was 10 years ago… And it definitely isn’t the language most firmware guys get to use it as.

There is some truly wild shit in the templating system.
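A taste of it, as a toy sketch (not firmware-grade code): branches and whole lookup tables evaluated by the compiler, with nothing left to do at runtime.

```cpp
#include <array>
#include <string_view>
#include <type_traits>

// if constexpr picks a branch at compile time based on type traits.
template <typename T>
constexpr std::string_view describe() {
    if constexpr (std::is_floating_point_v<T>) { return "floating point"; }
    else if constexpr (std::is_integral_v<T>)  { return "integral"; }
    else                                       { return "something else"; }
}

// The whole table is built at compile time by an immediately invoked
// constexpr lambda; nothing runs at program startup.
constexpr std::array<int, 5> squares = [] {
    std::array<int, 5> table{};
    for (int i = 0; i < 5; ++i) { table[i] = i * i; }
    return table;
}();

static_assert(describe<double>() == "floating point");
static_assert(squares[4] == 16);

int main() {}
```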


Not that you are wrong, but it was super weird to read that “Python can be emulated in C.”

I mean yes… But…


Your mistake was giving them an answer instead of asking how the scale was set up before giving them a number. Psychologically, by answering first you established that the question was valid as presented, and it anchored their expectations as the ones you had to live up to. By questioning it, you get to anchor your response to a different point.

Sometimes questions like this can be used to see how effective a person will be in certain lead roles. Recognizing, explaining and disambiguating the trap question is a valuable lead skill in some roles. Not all mind you… And maybe not ones most people would want.

But most likely you dodged a bullet.


I agree only when your job function is specifically geared around those tools… Otherwise, high-quality GUIs are more valuable.

Just because I can do everything in gdb that I can do in Visual Studio doesn’t mean 99% of debugging tasks aren’t easier and faster in Visual Studio. Now, if my job were specifically aimed at debugging/reverse engineering, there are certain things gdb does better on the CLI… But for most software devs, CLI gdb isn’t valuable.


Self-documenting code is infinitely more valuable than comments, because the code spreads with its use, whereas the comments stay behind.

I got roasted at my company when I first joined because my naming conventions are a little extra. That lasted for about 2 months before people started to see the difference in legibility as the code started to change.

One of the things I tell my juniors is, “this isn’t the 80s. There isn’t an 80-character line limit. The computer doesn’t benefit from your short variable names. I should be able to read most lines of code as a single, non-compound English sentence with only minor tweaks, and that English sentence should be what is actually happening in most of those lines of code.”
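A hypothetical before/after of what “a little extra” looks like in practice:

```cpp
// Before: the next reader has to reverse-engineer t, lim, and ok.
//   return t > lim && !ok;

// After: the line reads roughly as the English sentence
// "retry if the timeout has elapsed and the connection is not healthy."
bool ShouldRetryConnection(double elapsed_seconds,
                           double retry_timeout_seconds,
                           bool connection_is_healthy) {
    return elapsed_seconds > retry_timeout_seconds && !connection_is_healthy;
}
```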


Sadly only the compiler will know the true value of my constexpr forLove;


Lol. If you have access to scene releases that aren’t on a private tracker you will probably get an immediate VIP slot.


  1. Use your existing fork of ‘cooler-stuff’

Everything else is the same.

Edit: you should actually be able to make a new repo and just follow your three steps… Give it a try.


You can add even-cooler-stuff as another remote repo (like origin), grab those changes, and branch off of one of its branches; then you can make pull requests to even-cooler-stuff from those branches.

https://stackoverflow.com/questions/7244321/how-do-i-update-or-sync-a-forked-repository-on-github

I’m pretty confident the reason GitHub isn’t allowing you to fork the even-cooler-stuff repo is that technically they are the same repo… And multiple remotes should do the trick.


It’s called split tunneling. You can choose which apps go through the VPN and which ones don’t.


It obviously isn’t true that people motivated by money build inferior products… There may be a loophole here where you can claim that the absolute best of a category might be built by an individual driven only by the desire to create, but I feel like that is a shitty argument. I would argue that the vast majority of quality products are only produced by those who seek monetary compensation.


Most schools that require Matlab in the US provide it for free to their students via their student license servers… It’s practically free for the university if they have any sort of research program at all.

Although, this might have changed… During my time a rotating student license only cost the university like 15 bucks and the university only needed enough for one class at a time usually.


I think the confusion/difficulty is the mistaken assumption that the PDF rendering is happening client side. I don’t know this for certain since I haven’t spent any time trying to break it, but the solutions I have found online lead me to believe that these view-only PDFs are server-side rendered and that what is sent to your browser is only an image.

PDF is a weird file format… Sometimes it is just a bunch of JPEG images of pages (scanners that don’t do OCR generate PDFs this way), and the PDF isn’t anything more than a collection of JPEG images… Or it can be a fully text-based document using a page-description language that needs to be rendered to be viewed… Or it’s a series of printer commands that would tell a printer how to print it…

In any case, PDF viewers are super complex (basically they need to know how to render all of those different kinds of instructions into a standard document for viewing), and oftentimes they are implemented as image generators (because basically that’s what they are; it’s also why some PDF viewers don’t have text search or form filling, and it’s part of why PDF editors are so complex). The result is that it’s possible the Google view of the PDF isn’t a PDF document at all… Only the server-side rendering of it, which means that when the view-only option is enabled… There is no PDF to download. You aren’t looking at the PDF file. You are looking at the rendering result of the PDF viewer running on a Google server.

In this case you can’t download the PDFs… Your best option is to take screen captures of the pages and run OCR on them.

Basically, Google’s servers are printing the PDF to your screen. You dumb-scan it, which generates a PDF that is a collection of JPEG images, then you OCR it, which generates a text version of the PDF.

Those JS snippets are literally a dumb scanner for your screen… They make a PDF from a collection of JPEG images.

Kinda nuts.



People will want to download chunks or parts of the torrent, so you should leave them separate.



It’s possible that the ownership/group is wrong. Is there a reason you used cp -a instead of rsync -a? The rsync version is a much closer duplicate than the cp version.

Edit: also, if the base folder you are mounting into Docker has different permissions, this can happen.


Why doesn’t the program work? Because I wrote it that way. Why would you write it that way? Because I am a fucking moron, apparently.