• 16 Posts
  • 22 Comments
Joined 1Y ago
Cake day: Jul 28, 2023

> On Tuesday the co-founders announced that they have successfully raised $40 million in Series A funding and shared plans for their next two missions. AstroForge has now raised a total of $55 million to date.
...
> However, Gialich said AstroForge learned a lot from this mission and is working toward launching a second spacecraft named Odin. This will be a rideshare payload on the Intuitive Machines-2 mission, which is due to launch during the fourth quarter of this year. If successful, the Odin mission would be spectacular. About seven months after launching, Odin will attempt to fly by a near-Earth, metallic-rich asteroid while capturing images and taking data—truly visiting terra incognita. Odin would also be the first private mission to fly by a body in the Solar System beyond the Moon.
...
> On Tuesday, the company also announced plans for its third mission, Vestri (the company is naming its missions after Norse deities). This spacecraft will be about twice as large as Odin and is intended to return to the targeted metallic asteroid and dock with it. The docking mechanism is simple—since the asteroid is likely to be iron-rich, Vestri will use magnets to attach itself.

I agree that the amount of work for many students can get quite out of hand, and to be honest, when I first started teaching, I was pretty guilty of having very work-intensive courses.

That said, over the years, I’ve worked to streamline my courses to only what I believe is absolutely critical to learning, and I have added a lot of scaffolding and automated tests (for immediate results). In general, I try to have no busy work and make sure every assignment is meaningful (as much as it can be, anyway).

Additionally, because I understand that sometimes life happens, I have built-in facilities for automatic extensions on assignments and even have a system for dropping certain homeworks.

This is not to say that there isn’t work in my classes… it’s just that the work is intended to be relevant and reasonable, which most students seem to agree with these days.

> I think students should be expected to work less over a longer period of time.

I think this would be a great idea. Or rather, I think it would be great to allow students to learn at different rates… some may want to go faster, some may want or need to go slower.

I think the modern course-based education system is often too rigid to adequately accommodate the needs of students with different experience levels, resources, or constraints. Something like a Montessori model would be a lot better, IMHO.


> First off, 10 is an integer square root. Of 100.

Right, what I was trying to say is that 10 itself is not a perfect square. You cannot take the square root of 10 and get an integer (i.e., 10 is not among 1, 4, 9, 16, 25, etc.).

> I was told by multiple English teachers (including the head of the department) that I was a math student and should never attempt to write because I saw through the regurgitation assignments, didn’t agree with teacher assessments of what Dickens “was trying to do” and had zero interest in confirming their biases.

I think that is unfortunate and probably inappropriate. I try to avoid classifying students as particular types and generally encourage them, whenever possible, to pursue whatever their interests are (even if I disagree or don’t share those interests myself).

> College coursework on the whole is a waste of time reinventing wheels. I don’t need to spend a couple of weeks working up to “Hello, world!” in C and as such left CS as a major my first quarter at uni.

There is a reason for reinventing wheels; it is to understand why they are round and why they are so effective. To build the future, it helps to understand the past.

That said, perhaps the course was too slow for you, which is understandable… I frequently hear that about various classes (including ones I’ve taught).

> But teachers do this shit every day, year after year, and we blindly say they’re doing important work even as they discourage people from finding their path and voice, because god forbid a 16-year-old challenges someone in their 50s.

Again, I think you’ve had an unfortunate experience and I think it’s a good thing to challenge your teachers. I certainly did when I was a student and I appreciate it now when students do that with me. I recognize that I am not perfect nor do I know everything. I make mistakes and can be wrong.

I wish you had had a more supportive environment in secondary school, and I now have a better understanding of your perspective. Thanks for the dialogue.


> Sure, some people acquire the capability through repetition. But all that matters in the end is if you are capable or not.

I guess the question is: how do you develop that capability if you are cheating or using a tool to do things for you? If I use GrubHub to order food or pay someone else to cook for me, does it make sense to say I can cook? After all, I am capable of acquiring cooked food even though I didn’t actually do any of the work, nor do I understand how to, well, actually make food.

The how is relevant if you are trying to actually learn and develop skills, rather than simply getting something done.

> No, the point is to get an irrelevant piece of paper that in the end doesn’t actually indicate a person’s capabilities.

Perhaps the piece of paper doesn’t actually indicate a person’s capabilities in part because enough students cheat to the point where getting a degree is meaningless. I do not object to that assessment.

Look, I’m not arguing that schooling is perfect. It’s not. Far from it. All I am saying is that if your goal is to actually learn and grow in skill, development, and understanding, then there is no shortcut. You have to do the work.


Sure. If you do enough basic math, you start to see things like how 2/8 can be simplified to 1/4, or you recognize that 10 is not a perfect square, or how you could reorder some operations to make things easier, e.g., computing 17 + 28 + 3 as (17 + 3) + 28 = 48 (sorry, examples from my kids). Little things like that where you don’t even think about it… it becomes second nature to you, and that makes you a lot faster because you are not worrying about those basic ideas or mechanics. Instead, you can think about more complicated things such as which formulas to apply or the process to compute something.

As another example, since I teach computer science, a lot of novice students struggle with basic programming language syntax… How exactly do you declare a variable? What order do things go in? How does a for loop work? Do you need a semicolon or parentheses? If you do enough programming, however, these things become second nature and you stop thinking about them. You just seemingly, intuitively, know these things and do them naturally without thinking, even though when you first started, it was really complicated and daunting and you probably spent a lot of time constructing a single line of code.
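
To make that concrete, here is the sort of trivial fragment I mean (a generic C example, not from any particular assignment); every element of it (the declaration, the order of the loop header, the semicolons and braces) is a stumbling block at first and invisible later:

```c
#include <stdio.h>

int main(void)
{
    int sum = 0;                    /* declare and initialize a variable */
    for (int i = 0; i < 10; i++) {  /* init; condition; update, in that order */
        sum += i;
    }
    printf("%d\n", sum);            /* prints 45 */
    return 0;
}
```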

Once you develop a foundation, however, you don’t need to worry about these low-level things. Instead, you worry about high-level issues such as how to organize larger pieces of code into functions or how to utilize different paradigms, etc.

This is why a basketball player, for instance, will shoot thousands of shots in practice or why a piano player will play a piece over and over for many hours. It’s so they don’t have to think about the low-level mechanics. It becomes muscle memory and it’s just natural to them.

I hope that makes sense.


Thanks for the thoughtful response.

> Using AI to answer a question is not necessarily preventing yourself from learning and developing mastery and understanding. The use of AI is a skill in the same way that any ability to look up information is a skill. But blindly putting information into an AI and copy/pasting the results is very different from using AI as a resource in a similar way one might use a book or an article as a resource.

I generally agree. That’s why I’m no longer banning AI in my courses. I’m allowing students to use AI to explain concepts, help debug, or as a reference. As a resource or learning aid, it’s fine or possibly even great for students.

However, I am not allowing students to generate solutions, because that is harmful and doesn’t help with learning. They still need to do the work and go through the process, AI assisted or not.

> This is a particularly long-winded way of pointing out something that’s always been true - the idea that you should learn how to do math in your head because ‘you won’t always have a calculator’, or that you need to understand how to do the problem in your head or how the calculator works in order to understand the material, is a false one, and it’s one that erases the complexity of modern life. Practicing the process helps you learn a specific skill in a specific context, and people who make use of existing systems to bypass the need of having that skill are not better or worse - they are simply training a different skill.

I disagree with your specific example here. You should learn to do math in your head because it helps develop intuition about the relationships between numbers and the various mathematical operations. Without a foundational understanding of how to do the basics manually, it becomes very difficult to tackle more complicated problems or challenges even with a calculator. Eventually, you do want to graduate to using a calculator because it is more efficient (and probably more accurate), but you will be able to use it much more effectively if you have a strong understanding of numbers and how the various operations work.

Your overall point about how a tool is used being important is true and I agree that if used wisely, AI or any other tool can be a good thing. That said, from my experience, I find that many students will take the easy way out and do as you noted at the top: “blindly putting information into an AI and copy/pasting the results”.


> The how is irrelevant.

What I usually tell students is that homework and projects are learning opportunities. The point isn’t for them to produce a particular artifact; it’s to go through the process and develop skills along the way. For instance, I do not need a program that can sort numbers… I can do that myself and there are a gazillion instances of that. However, students should do that assignment to practice learning how to code, how to debug, how to think through problems, and much more. The point isn’t the sorting program… it’s the process and experience.

How do you get better at, say, gymnastics? You do a bunch of exercises and skills, over and over.

How do you get better at, say, playing the guitar? You play a lot of songs, over and over.

How do you get better at, say, writing? You write a lot, some good, some bad, over and over.

To get better at anything, you need to do the thing, a lot. You need to build intuition and muscle memory. Taking shortcuts prevents that and in the long run, hurts your learning and growth.

So viewing homework as just being about the artifact you submit is missing the point and short-sighted. Cheating, whether using AI or not, is preventing yourself from learning and developing mastery and understanding.


Maybe. It is true that people who would have cheated in the past are now just using AI in addition to the previous means. But from my experience teaching, the number of students cheating is also increasing because of how prevalent AI has become and how easy it is to use it.

AI has made cheating more frictionless, which means that a student who might not have, say, used Chegg (requires some effort) or copied a friend (requires social interaction) in the past can now just open a textbox and get a solution without much effort. LLMs have made cheating much easier, quicker, and safer (people regularly get caught using Chegg or copying other people, while AI cheating can be much harder to detect). It is a huge temptation where the [short-term] benefits can dwarf the risks.


Could be which communities you are subscribed to. I run a small instance with about 3 users, and here are my stats after about 3 months as well:

9.5G ./pictrs
12G  ./postgres
8.0K ./lemmy-ui

What version of Lemmy are you using? A recent update also introduced some space savings in the database (I think).


My personal C coding style as of late 2023
> This has been a ground-breaking year for my C skills, and paradigm shifts in my technique has provoked me to reconsider my habits and coding style. It’s been my largest personal style change in years, so I’ve decided to take a snapshot of its current state and my reasoning. These changes have produced significant productive and organizational benefits, so while most is certainly subjective, it likely includes a few objective improvements. I’m not saying everyone should write C this way, and when I contribute code to a project I follow their local style. This is about what works well for me.


It comes down to bridging. I use discord and slack via IRC bridges. I actually use slack a lot (for work), but primarily through irslackd. I do not use slack for anything outside of work and would prefer to keep it that way.

For discord, I primarily use it through bitlbee-discord. With this bridge/gateway, I can actually chat on different servers at the same time, so I wouldn’t mind this for different communities if I had to.

Matrix is last because I don’t really have a good bridging solution for it and it just seems clunkier than the other two for me.


I would be less willing to contribute/participate in discussions if newer platforms such as discord, slack, or matrix are used. Of those three, I would prefer discord, then slack, then matrix.

As it is, I only use Slack for work, and mostly avoid discord and matrix except for a few mostly dead channels/servers.

I understand that this is not the mainstream view and that most people prefer the newer platforms, but personally, I am not a fan of them nor do I use them.


I’m fine with IRC (actually prefer it as I use it all the time).

I agree with others that a mailing list is more intimidating and more of a hassle, but if there is a web archive, I can live with that. It wouldn’t be my preference, but it wouldn’t be an insurmountable barrier (I have contributed to Alpine Linux in the past via their mailing list workflow).


Arena Allocator Tricks and Tips
> Over the past year I’ve refined my approach to arena allocation. With practice, it’s effective, simple, and fast; typically as easy to use as garbage collection but without the costs. Depending on need, an allocator can weigh just 7–25 lines of code — perfect when lacking a runtime. With the core details of my own technique settled, now is a good time to document and share lessons learned. This is certainly not the only way to approach arena allocation, but these are practices I’ve worked out to simplify programs and reduce mistakes.
> An arena is a memory buffer and an offset into that buffer, initially zero. To allocate an object, grab a pointer at the offset, advance the offset by the size of the object, and return the pointer. There’s a little more to it, such as ensuring alignment and availability. We’ll get to that. Objects are not freed individually. Instead, groups of allocations are freed at once by restoring the offset to an earlier value. Without individual lifetimes, you don’t need to write destructors, nor do your programs need to walk data structures at run time to take them apart. You also no longer need to worry about memory leaks.
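
That description maps almost directly onto code. As a minimal sketch of the idea in C (my own illustration, not the author’s implementation; all names here are mine, and `align` must be a power of two):

```c
#include <stddef.h>
#include <stdint.h>

/* An arena is a memory buffer and an offset into it, initially zero. */
typedef struct {
    char  *buf;
    size_t cap;
    size_t off;   /* bytes used so far */
} Arena;

/* Grab a pointer at the offset (padded for alignment), advance the
   offset by the size of the object, and return the pointer. */
void *arena_alloc(Arena *a, size_t size, size_t align)
{
    size_t pad   = -(uintptr_t)(a->buf + a->off) & (align - 1);
    size_t avail = a->cap - a->off;
    if (pad > avail || size > avail - pad)
        return NULL;   /* out of space */
    void *p = a->buf + a->off + pad;
    a->off += pad + size;
    return p;
}
```

Freeing a whole group of allocations is then just saving `a.off` before them and restoring it afterward, which is what makes destructors and leak tracking unnecessary.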

[Archive](https://archive.ph/moqVT)
> Perhaps nothing has defined higher education over the past two decades more than the rise of computer science and STEM. Since 2016, enrollment in undergraduate computer-science programs has increased nearly 49 percent. Meanwhile, humanities enrollments across the United States have withered at a clip—in some cases, shrinking entire departments to nonexistence.
> But that was before the age of generative AI. ChatGPT and other chatbots can do more than compose full essays in an instant; they can also write lines of code in any number of programming languages. You can’t just type make me a video game into ChatGPT and get something that’s playable on the other end, but many programmers have now developed rudimentary smartphone apps coded by AI. In the ultimate irony, software engineers helped create AI, and now they are the American workers who think it will have the biggest impact on their livelihoods, according to a new survey from Pew Research Center. So much for learning to code.
> Fiddling with the computer-science curriculum still might not be enough to maintain coding’s spot at the top of the higher-education hierarchy. “Prompt engineering,” which entails feeding phrases to large language models to make their responses more human-sounding, has already surfaced as a lucrative job option—and one perhaps better suited to English majors than computer-science grads.
> The potential decline of “learn to code” doesn’t mean that the technologists are doomed to become the authors of their own obsolescence, nor that the English majors were right all along (I wish). Rather, the turmoil presented by AI could signal that exactly what students decide to major in is less important than an ability to think conceptually about the various problems that technology could help us solve.


I think this is the author being humble. jmmv is a long-time NetBSD and FreeBSD contributor (tmpfs, ATF, pkg_comp), has worked as an SRE at Google, and has been a developer on projects such as Bazel (build infrastructure). They probably know a thing or two about performance.

Regarding the overall point of the blog, I agree with jmmv. Big O is a measure of efficiency at scale, not a measure of performance.

As someone who teaches Data Structures and Systems Programming courses, I demonstrate this to students early on by showing them multiple solutions to a problem such as how to detect duplicates in a stream of input. After analyzing the time and space complexities of the different solutions, we run the programs and measure the time. It turns out that the O(n log n) version using sorting can beat the O(n) version due to cache locality and how memory actually works.
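
As a rough sketch of those two versions (my illustration here, with the input already in an array rather than a stream; all names are mine):

```c
#include <stdbool.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* O(n log n): sort, then scan adjacent pairs. The scan is sequential,
   so it plays nicely with the cache. */
bool has_dup_sort(int *v, size_t n)
{
    qsort(v, n, sizeof *v, cmp_int);
    for (size_t i = 1; i < n; i++)
        if (v[i] == v[i - 1])
            return true;
    return false;
}

/* O(n): open-addressed hash set (error checks elided). Fewer operations
   on paper, but the probing pattern is effectively random, so large
   inputs take cache misses on nearly every lookup. */
bool has_dup_hash(const int *v, size_t n)
{
    size_t cap = 1;
    while (cap < 2 * n)
        cap *= 2;                 /* power-of-two table, kept < 50% full */
    int  *slot = malloc(cap * sizeof *slot);
    char *used = calloc(cap, 1);
    bool  dup  = false;
    for (size_t i = 0; i < n && !dup; i++) {
        size_t h = (size_t)(unsigned)v[i] * 2654435761u & (cap - 1);
        while (used[h] && !dup) {
            if (slot[h] == v[i])
                dup = true;
            else
                h = (h + 1) & (cap - 1);
        }
        used[h] = 1;
        slot[h] = v[i];
    }
    free(slot);
    free(used);
    return dup;
}
```

Timing both on a few million random ints is usually enough to start the conversation about memory hierarchies.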

Big O is a useful tool, but it doesn’t directly translate to performance. Understanding how systems work is a lot more useful and important if you really care about optimization and performance.


> Having a fast and responsive app is orthogonal to “knowing your big Os”. Unfortunately, most tech companies over-emphasize algorithms in interviews and downplay systems knowledge, and I believe that’s one reason behind sluggish apps and bloated systems.
> I’ve seen this play out repeatedly. Interviewers ask a LeetCode-style coding question, which is then followed by the ritual of discussing time and memory complexity. Candidates ace the answers. But then… their “real” code suffers from subtle yet impactful performance problems.

Yeah, this is what I do… I host a couple of ergo IRC servers and an instance of thelounge for those who want that interface (and I also offer gamja). Personally, I use weechat to connect to the server.


AI crap - Why ML will make the world worse, not better
> There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.
> What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
> AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.

Bun v0.8.0
> Bun is an incredibly fast JavaScript runtime, bundler, transpiler, and package manager — all in one.
> Bun v0.8.0 adds debugger support, implements fetch streaming, and unblocks SvelteKit. ReadStream and WriteStream from node:tty are implemented, and .setRawMode() now works on process.stdin, unblocking several interactive CLI tools. Plus Node.js compatibility updates, bug fixes, stability improvements.

Introducing Code Llama, a state-of-the-art large language model for coding
> Today, we are releasing Code Llama, a large language model (LLM) that can use text prompts to generate code. Code Llama is state-of-the-art for publicly available LLMs on code tasks, and has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software.

I Don’t Use Exceptions in C++ Anymore
> Using exceptions in C++ desktop and server applications overall made sense to me.
> As I expanded my usage of C++ into other domains, specifically embedded domains, I began to experience more compelling reasons not to use exceptions first-hand...
From [lobste.rs](https://lobste.rs/s/m33j4d/i_don_t_use_exceptions_c_anymore)

C and C++ Prioritize Performance over Correctness
From [Russ Cox](https://swtch.com/~rsc/)
> Lumping both non-portable and buggy code into the same category was a mistake. As time has gone on, the way compilers treat undefined behavior has led to more and more unexpectedly broken programs, to the point where it is becoming difficult to tell whether any program will compile to the meaning in the original source. This post looks at a few examples and then tries to make some general observations. In particular, today's C and C++ prioritize performance to the clear detriment of correctness.
> I am not claiming that anything should change about C and C++. I just want people to recognize that the current versions of these sacrifice correctness for performance. To some extent, all languages do this: there is almost always a tradeoff between performance and slower, safer implementations. Go has data races in part for performance reasons: we could have done everything by message copying or with a single global lock instead, but the performance wins of shared memory were too large to pass up. For C and C++, though, it seems no performance win is too small to trade against correctness.
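
A classic illustration of the undefined-behavior phenomenon the post is about (my example, not necessarily one of the ones Cox uses):

```c
#include <limits.h>
#include <stdio.h>

/* Looks like a careful overflow check, but signed overflow is undefined
   behavior in C, so the compiler may assume x + 1 never wraps and fold
   the test to 0 at higher optimization levels. */
static int will_overflow(int x)
{
    return x + 1 < x;
}

int main(void)
{
    printf("%d\n", will_overflow(INT_MAX));  /* 1 at -O0, often 0 at -O2 */
    return 0;
}
```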

Familiarity (my client distro is Pop, which is based on Ubuntu), and I like the LTS life cycle (predictable).

I do uninstall snaps, though, and mostly just use Docker for things. I could use Debian, but again, for me it was about familiarity and support (a lot more Ubuntu specific documentation).


You can escape the `:`:

URLS  = https\://foo.example.com
URLS += https\://bar.example.com
URLS += https\://www.example.org

Do you have a searxng folder in the same folder as your docker-compose.yml? If so, perhaps it is not mounting inside the container properly.


I think this is missing an article link: https://www.phoronix.com/review/downfall

Downfall, or as Intel prefers to call it, GDS (Gather Data Sampling), affects the gather instruction on AVX2- and AVX-512-enabled processors. At least the latest-generation Intel CPUs are not affected, but Tiger Lake / Ice Lake back to Skylake are confirmed to be impacted. Microcode mitigation is available, but it will be costly for AVX2/AVX-512 workloads with GATHER instructions in hot code paths, and the software exposure is widespread, particularly for HPC and other compute-intensive workloads that have relied on AVX2/AVX-512 for better performance.
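
For context, the “gather” here is the AVX2/AVX-512 family of vector loads from computed indices. A minimal illustration of one such intrinsic (just to show what the affected instruction class does; compile with -mavx2):

```c
#include <immintrin.h>

/* One AVX2 gather: load 8 ints from table at 8 arbitrary indices with a
   single instruction (VPGATHERDD). GDS/Downfall leaks stale internal
   data through this instruction family on affected CPUs. */
__m256i gather8(const int *table, __m256i idx)
{
    return _mm256_i32gather_epi32(table, idx, 4);  /* scale = sizeof(int) */
}
```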

Rough day for CPU makers…

Update: Of course there is a dedicated page for it: https://downfall.page/


AMD “INCEPTION” CPU Vulnerability Disclosed
> [AMD-SB-7005 "Return Address Security Bulletin"](https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7005.html) outlines this new speculative side channel attack affecting recent EPYC and Ryzen processors.
> AMD has received an external report titled ‘INCEPTION’, describing a new speculative side channel attack. AMD believes ‘Inception’ is only potentially exploitable locally, such as via downloaded malware, and recommends customers employ security best practices, including running up-to-date software and malware detection tools. AMD is not aware of any exploit of ‘Inception’ outside the research environment, at this time.

Good RPC systems versus basic ‘RPC systems’
> In my entry on how HTTP has become the default, universal communication protocol, I mentioned that HTTP's conceptual model was simple enough that it was easy to view it (plus JSON) as an RPC (Remote Procedure Call) system. I saw some reactions that took issue with this (eg comments here), because HTTP (plus JSON) lacks a lot of features of real RPC systems. This is true, but I maintain that it's incomplete, because there's a difference between a good RPC system and something that people press into service to do RPC with.

Zig 0.11.0 Released
> Zig is a general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
> This release features 8 months of work: changes from 269 different contributors, spread among 4457 commits. It is the début of Package Management.

No, but basically jmp.chat takes over your phone number… it acts as your carrier for voice and SMS (similar to Google Voice). Maybe not exactly what you want.

From the FAQ:

> You can use JMP to communicate with your contacts without them changing anything on their end, just like with any other telephone provider. JMP works wherever you have an Internet connection. JMP can be used alongside, or instead of, a traditional wireless carrier subscription.

The benefit of this is that you can receive voice and text on anything that can serve as an XMPP client.


You could consider using something like jmp.chat. It delivers SMS via XMPP (aka Jabber), so you could self-host an XMPP server and receive SMS that way. It also has some support for MMS (group chat, media), but my experience with it was mixed (I used it for about 3-4 years).



This looks incredibly cool and fun. I’d be interested in trying to rewrite some of the games myself when I have some free time.


AWS: IPv4 addresses cost too much, so you’re going to pay
> Cloud giant AWS will start charging customers for public IPv4 addresses from next year, claiming it is forced to do this because of the increasing scarcity of these and to encourage the use of IPv6 instead.
> The update will come into effect on February 1, 2024, when AWS customers will see a charge of $0.005 (half a cent) per IP address per hour for all public IPv4 addresses. ... These charges will apply to all AWS services including EC2, Relational Database Service (RDS) database instances, Elastic Kubernetes Service (EKS) nodes, and will apply across all AWS regions, the company said.
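
(At that rate, a single always-on address works out to $0.005 × 24 × 365 ≈ $43.80 per year.)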

Also joined the club today :)