The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 0 Posts
  • 83 Comments
Joined 8M ago
Cake day: Jan 12, 2024


I didn’t know that I needed to know about Jevons’ paradox. Such simple but brilliant reasoning.

You’ll get less pollution and fewer crash deaths if, instead of trying to improve cars, society improved the transportation methods that compete with cars: walking, biking, public transport, and so on. They either don’t show those issues, or show them at a meaningfully lower level.


Really my point is there are enough things to criticize about LLMs and people’s use of them, this seems like a really silly one to try and push.

The comment that you’re replying to is fairly specifically criticising the usage of the word “hallucination” to misrepresent the nature of the undesirable LLM output, in the context of people selling you stuff by what it is not.

It is not “pushing” another “thing to criticise about LLMs”. OK? I have my fair share of criticism against LLMs themselves, but that is not what I’m doing right now.

Continuing (and torturing) that analogy, […] max_int or small buffers.

When we extend analogies they often break in the process. That’s the case here.

Originally the analogy works because it shows a phony selling a product by what it is not. By making the phony precompute 4*10¹² equations (a completely unrealistic situation), he stops being a phony and becomes a muppet doing things the hard way.

If it were the case that there had only been one case of a hallucination with LLMs, I think we could pretty safely call that a malfunction

If it happens 0.000001% of the time, I think we could still call it a malfunction and that it performs better than a lot of software.

Emphases mine. Those “ifs” represent a completely unrealistic situation, one that shows nothing useful about the real situation.

We know that LLMs output “hallucinations” way more often than just once, or 0.000001% of the time. They’re common enough to show you how LLMs work.


To make it worse, decision makers - regardless of country - are typically old and clueless about “this computer stuff”. As such, they literally don’t see the problem.


I did read the paper fully, but I’m going to comment mostly based on the challenges that the OP refers to.

My belief is that the article is accurate in highlighting that the Fediverse on its own is not enough to reclaim the internet. However, it’s still a step in the right direction and should be nurtured as such.

Discoverability as there is no central or unified index

Yes, discovery is harder within a federated platform than a centralised one. However the indices that we use don’t need to be “central” or “unified” - it’s completely fine if they’re decentralised and brought up by third parties, as long as people know about them.

Like Lemmy Explorer for example; it’s neither “central” nor “unified”, it’s simply a tool made by a third party, and yet it solves the issue nicely.

Complicated moderation efforts due to its decentralized nature

This implicit idea, that moderation efforts should be co-ordinated across a whole platform, quickly leads to unsatisfied people - either because they don’t feel safe or because they don’t feel like they can say what they think. Or both.

Let us not fool ourselves by falsely believing that moderation always boils down to “remove CSAM and Nazis” (i.e. “remove things that decent people universally consider bad”). Different communities want to be moderated in different, sometimes mutually exclusive, ways. And that leads to decentralised moderation efforts.

In other words: “this is not a bug, this is a feature.”

[Note: the above is not an endorsement of Lemmy’s blatant lack of mod tools.]

Interoperability between instances of different types (e.g., Lemmy and Funkwhale)

Because yeah, the interoperability between Twitter, YouTube and Reddit is certainly better. /s

I’m being cheeky to highlight that, as problematic as the interoperability between instances of different types might be in the Fediverse, it’s still something that you don’t typically see in traditional media.

Concentration on a small number of large instances

Yes, user concentration into a few instances is a problem, as it gives the instance admins too much power. However, there’s considerably less room for those admins to act in a user-hostile way, before users pack their stuff up and migrate - because the cost of switching federated instances is smaller than the cost of switching non-federated platforms.

The risk of commercial capture by Big Tech

Besides what I said above about the concentration of users, consider the fact that plenty of Fediverse instances defederated Threads. What is this, if not using the Fediverse’s features to resist commercial capture?


It gets worse when you remember that there’s no dividing line between harmful and healthy content. Some content is always harmful, some is healthy by default, but there’s a huge gradient of content that needs to be consumed in small amounts - consuming none leads to alienation, and consuming too much leads to a cruel worldview.

This is doubly true when dealing with kids and adolescents. They need to know about the world, and that includes the nasty bits; but their worldviews are so malleable that, if all you show them is nasty bits, they normalise it inside their heads.

It’s all about temperance. And yet temperance is exactly the opposite of what those self-reinforcing algorithms do. If you engage too much with content showing nasty shit, the algo won’t show you cats being derps to “balance things out”. No, it’ll show you even more nasty shit.

It gets worse due to profiling, mentioned in the text. Splitting people into groups to dictate what they’re supposed to see leads to the creation of extremism.


In the light of the above, I think that both Kaung and Cai are missing the point.

Kaung believes that children and teens would be better off if they stopped using smartphones; sorry, but that’s stupid - it’s proposing to throw the baby out with the dirty bathtub water.

Cai, on the other hand, is proposing nothing but a band-aid. We don’t need companies to listen to teens to decide what we should be seeing; we need them to stop deciding altogether what teens and everyone else should be seeing.

Ah, and about porn, mentioned in the text: porn is at best a small example of a bigger issue, if not a red herring distracting people from the issue altogether.


I wouldn’t call pasting verbatim training data hallucination when it fits the prompt. It’s not necessarily making stuff up.

I’ve seen it being called hallucination plenty of times. Because the output is undesirable - even if it satisfies the prompt, it is not something you’d want the end user to see, as it shows that the whole thing is built upon the unpaid labour of everyone who uses the internet.

What would you call it? Only by its specific issues? Or would you use a general term, like “error” or “wrong”?

Calling the output what it is (false, or immoral, or nonsensical) instead of using a catch-all would be progress, I think.


When it comes to the code itself you’re right, there’s no difference between “bug” and “not a bug”. The difference is how humans classify the behaviour.

And yet there’s a clear mismatch between what the developers of those large “language” models know that they’re able to do, versus what LLMs are being promoted for, and that difference is what is being called “hallucination”. They are not intelligent systems, the info that they output is not reliably accurate, it’s often useless rubbish. But instead of acknowledging it they label it “hallucination”.

Perhaps an example would be good here. Suppose that I made a text editor; it works nicely as a text editor and not much else. Then I make it automatically find and replace the string “=2+2” with “4”, and use that to showcase my text editor as if it were a calculator. “Look, it can do maths!”

Then the user types “=3+3”, expecting the “calculator” to output “6”, and it doesn’t. Can we really claim that the user found a “bug”? Not really. It’s just that I’m a phony who sold him a text editor as if it were a calculator.
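For illustration, here’s a minimal sketch of that phony “calculator” (the function name and strings are made up, of course):

```python
# A text editor whose only "maths" is one hardcoded find-and-replace,
# showcased as if it were a calculator.
def phony_calculator(text: str) -> str:
    return text.replace("=2+2", "4")  # the single trick it knows

print(phony_calculator("=2+2"))  # "4" - looks like it can do maths
print(phony_calculator("=3+3"))  # "=3+3" - the illusion breaks
```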

And yet that’s exactly what happens with LLMs.


This article shows rather well three reasons why I don’t like the term “hallucination”, when it comes to LLM output.

  1. It’s a catch-all term that describes neither the nature nor the gravity of the problematic output. Failure to address the prompt? False output, fake info? Immoral and/or harmful output? Pasting verbatim training data? Output that is supposed to be moderated against? It’s all “hallucination”.
  2. It implies that, under the hood, the LLM is “malfunctioning”. It is not - it’s doing exactly what it is supposed to do: chaining tokens through weighted probabilities. Contrary to the tech bros’ wishful belief, LLMs do not pick words based on the truth value or morality of the output. That’s why hallucinations won’t go away, at least not for the current architecture of text generators.
  3. It lumps those incorrect outputs together with what humans would generate in situations of poor reasoning. This “it works like a human” metaphor obscures what happens, instead of clarifying it.

On the main topic of the article: are LLMs useful? Sure! I use them myself. However, only a fool would try to shove LLMs everywhere, with no regard to how intrinsically [yes] unsafe they are. And yet that’s what big tech is doing, regardless of being Chinese or United-Statian or Russian or German or whatever.


That’s a good text. I’ve been comparing those “LLM smurt!” crowds with Christian evangelists, due to their common usage of fallacies like inversion of the burden of proof, moving goalposts, straw men, etc.

However it seems that people who believe in psychics might be a more accurate comparison.

That said LLMs are great tools to retrieve info when you aren’t too concerned about accuracy, or when you can check the accuracy yourself. For example the ChatGPT output of prompts like

  • “Give me a few [language] words that can be used to translate the [language] word [word]”
  • “[Decline|Conjugate] the [language] word [word]”
  • “Spell-proof the following sentence: [sentence]”

is really good. I’m still concerned about the sheer inefficiency of the process though, energy-wise.


Home pages? Can we please have the ⟨blink⟩ tag and the “UNDER CONSTRUCTION” .gif again? Those were of utmost importance!

Okay, serious now. I might be wrong, but I think that the whole internet is going full circle, and that the shift towards homepages that the link describes is part of a bigger process of re-decentralisation. It isn’t just about getting news from homepages instead of social media; it’s also about how we find content (again, through human recommendation) and who owns it (individuals or small groups, as the ad “industry” is going kaput). I don’t think it’ll be exactly the same as the 90s/00s internet, but similar in spirit.


Choosing weaker forms sounds sensible - my criticism is which ones.

Many people react that way but think about it a little more. It’s a fact. Multiple Black people have proven it repeatedly.

Yup, I know that it’s a fact. You aren’t being fallacious, but the way that you phrased it sounds like that fallacy, so it’s a matter of clarity.

It’s the same deal as the “post less”, you know? People are misunderstanding you.

[from the other comment] That’s a great point, can I quote you on having seen it on Lemmy quite a few times?

Feel free to do so! However, keep in mind that I didn’t really keep track of them, so if someone says “do you have proof?” I have no way to back it up.


In the context, the author isn’t saying “you should reduce your whole Fediverse activity”. It’s more like “when talking about this stuff, if you aren’t Black, think before you say something. And you probably don’t need to say it; it’s better to shut up”.

It’s sensible advice even if worded poorly.


[Replying to myself to avoid editing the above]

Another point that I’d like to highlight is that a lot of the racism in English is proxied through linguistic prejudice, due to the existence of racial varieties like the African-American Vernacular English ones. For example, picking on people who use the habitual “be”, or specific words/expressions common among AAVE speakers. It is racist and I’ve seen it here [on Lemmy] quite a few times.


What a wonderful user experience! No rice is fine if the connection is down, right?


Readers, beware

I don’t see myself as part of a racially marginalised group, and I’m no expert on racial issues. (I’m just a translator with some background in Linguistics.)

I’m also from LatAm. I expect most readers here to be from CA/USA; be aware that racial marginalisation works in different ways on each side.

Because of both, please take what I say with a grain of salt. I hope that I’m contributing.

I like where this text is going. As such, my criticism here is mostly on better ways to convey some points, plus additional info.

Title + Intro

Subbing “start making” for “make” highlights better that every little change matters, and is easier to read.

In this context “more welcoming” says the same as “less toxic”, but the former should be better for “selling” readers the idea that they can and should contribute. (Plus the word “toxic” is bound to make some people roll their eyes and ignore the message.)

  1. Listen more to more Black people

It would be great if your text addressed people who shut up marginalised groups while claiming to speak in their names; it sounds a lot like “I’m an ally so chrust me, you don’t need to listen to [group], lissen to ME! ME! ME! instead.” I’ve seen this too often in social media, including here. Black people probably have a lot more to talk about this than I do.

  2. Post less – and think before you post

Simply saying “post less” is bound to rub people the wrong way, especially when removed from context (plenty of people won’t read the section past that), as plenty of people are aware that the Fediverse needs more content.

Sadly, I’m not certain of a good way to rephrase this without erasing the message. (Perhaps merge it with #1? Just an idea.)

Stop asking Black people for evidence [… whole paragraph]

I believe that the conclusion within this bullet point is accurate and moral, but the whole package needs some serious rewording.

IMO a better approach here is to highlight that all those “excuuuuse me, where are the proofs that you’re subjected to racism in the Fediverse?” questions are a form of sealioning, regardless of the intention of the people asking. Black person be asked once, they provide the bloody proof; be asked twice, they roll their eyes but still do it; be asked for the 1000th time, they get pissed and leave.

I’m saying this because, the way this point is currently worded, it sounds fallacious (inversion of the burden of proof). And even if most people can’t quite identify fallacies, it still ticks a lot of them off; they know that “something” is wrong.

Stop telling Black people that they’ll experience less racism if they change instances […]

It’s actually worse: it’s a form of racial segregation. It’s like telling them “you won’t experience racism if you sit in the back of the bus”.

Black people should feel comfortable to use the same spaces as everyone else.

Stop saying the fediverse […]

I think that this bullet point is perfect as it is. Just commenting on the underlying issue:

A lot of people here confuse personal experiences with general statements. Even if the Fediverse, in general, was friendlier or nicer towards marginalised groups, it doesn’t really matter when someone is pissed and trying to vent their bad experience, you know?

Also ablism

Just highlighting a typo. No issue with the message.


I’m afraid of both, too, but if I had to choose:

  • evil but smart people only harm you when it benefits them
  • dumb but good people harm you all the time

As such, I see the latter as far more dangerous. Although there’s an even worse group, that Altman likely belongs to - the dumb but evil ones.


Pretty much. But instead of adjusting it like “cook it for less/more time”, you say “it’s raw/mushy”. Or at least that’s what I think, based on the product info, but I might be wrong.

And… yeah, it’s all pretend. Just like “smart” some years ago.


  • Advanced fuzzy logic technology with AI (Artificial Intelligence) “learns” and adjusts the cooking cycle to get perfect results
  • Superior induction heating (IH) technology generates high heat and makes fine heat adjustments resulting in fluffier rice
  • “My Rice (49 Ways)” menu setting – Just input how the rice turned out, the rice cooker will make small changes to the cooking flow until it gets to the way you like it

Based on the description, the so-called “AI” simply adjusts the cooking time based on user feedback. That would be hilarious if it weren’t so sad as a marketing device.
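If that reading is right, the whole “AI” could be a feedback loop of a few lines. A toy sketch, with every name and number made up:

```python
# Adjust the cooking time from coarse user feedback, instead of the
# user setting the time directly ("it's raw/mushy" -> cook more/less).
def adjust_cook_time(cook_time_min: float, feedback: str) -> float:
    if feedback == "raw":
        return cook_time_min * 1.05   # undercooked -> cook a bit longer
    if feedback == "mushy":
        return cook_time_min * 0.95   # overcooked -> cook a bit less
    return cook_time_min              # "fine" -> keep the current cycle

time = 40.0
for feedback in ["raw", "raw", "fine"]:
    time = adjust_cook_time(time, feedback)
print(f"{time:.1f} min")  # drifts towards the user's preference
```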


Not when it comes to the bans. They’re also issued on a whim, but there’s that implicit knowledge that you’re going to reset your IP, delete your cookies, and the ban will last exactly five minutes. Even if it’s supposed to last six years (like the time I was banned for asking “I’m 12 and what is this?”, and the newbie mod wasn’t aware of the meme).

(USER WAS BANNED FOR THIS POST)


*slow clap*

I’m glad that I never even touched Something Awful; I went straight for 4chan. A platform that allows moderators to disguise an effective permaban (100kh) as if it were a tempban (“probation”) deserves absolutely no respect.

Finally, Etherwind was right. Wraith’s 20th Anniversary Edition didn’t come out in 2016.

AND THEY HATED HIM BECAUSE HE TOLD THEM THE TRUTH.


I find it interesting that you’re using the bot for something else (coding aid) than I am (translation aid), and yet the conclusion is the same - you need previous knowledge of what you’re asking to sort out the bullshit generated by a system that doesn’t really know anything.


You’re describing the “plateau of productivity”, which is built into this theory:

Not quite - the plateau represents all the useful ways the tech can be used; what I’m describing is not even a fraction of that, it’s only the stuff that is too obvious to pass up.

Thank you for the link - I’m going to give it a check now!


It does seem to be the case for me - I’ve seen far fewer overexcited comments, and far more questioning of its present and future abilities to perform.

I think that at the bottom of the trough people will stop caring about it altogether, except for functionality that it has proven useful for - such as a translation aid, a coding aid, or quickly generating multiple images if you don’t mind the overall quality.

I just generated a fresh example highlighting it. I’ll put it inside spoilers to avoid clutter:

German and English, ChatGPT 3.5
  • [Input 1] translate the following German sentence into English: “der Hund ist schlecht”.
  • [Output 1] The dog is bad.
  • [Input 2] Which word is being translated as “bad”?
  • [Output 2] The word “schlecht” is being translated as “bad” in this context.
  • [Input 3] What are other meanings or connotations of the word “schlecht”?
  • [Output 3] In addition to “bad,” the word “schlecht” in German can also mean “poor,” “inferior,” “evil,” or “unwell,” depending on the context in which it is used.
  • [Input 4] Now, translate the sentence “the bad dog is sleeping” into German.
  • [Output 4] “Der schlechte Hund schläft.”
  • [Input 5] Why is it “schlechte” in this sentence, instead of “schlecht” as in the earlier sentence?
  • [Output 5] In German, adjectives change their form based on the gender, number, and case of the noun they are describing. In this case, “Hund” is a masculine noun in the nominative case, so the adjective “bad” (schlecht) needs to take the masculine nominative form, which is “schlechte.”

It’s working rather well as a translation aid as long as you know the language, as it could help you to phrase a sentence. However, once you ask for specific info - as I did in input #5 - it drops the ball; in this case outputting technically correct but misleading and incomplete info:

  • The form also depends on the article. For example, if I were to use “ein” (a) instead of “der” (the), the correct form would be “schlechter”.
  • The way that the output is phrased, it implies that the “schlecht” in the first sentence is not describing a masculine noun in the nominative. It is - and the actual reason why it takes the form “schlecht” is not mentioned in the output (predicatives take the base form).

[und nebenbei: alle Hunde sind gut. (And by the way: all dogs are good.)]

In the “overexcitement peak”, I think that a lot of people were focusing on the correct output, and assuming that the rest would be solved “with enough elbow grease”, playing whack-a-mole with hallucinations. Now they’re noticing that it won’t be.


I am, too. Sadly, I didn’t find any info on the specifics.

Perhaps it’s something extremely simple, like a device that generates a string that proves that you’re 18+, but the string being generated doesn’t lead back to you; it’s just some random number. But that’s just my conjecture.
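A toy sketch of that conjecture, just to make it concrete - the names are made up, and this is a plain signed attestation rather than a real zero-knowledge proof:

```python
# An issuer that has already verified your age signs an "over18" claim
# tied to a random nonce; the token proves the claim without naming you.
import hmac, hashlib, secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the trusted issuer

def issue_token() -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)  # random, not tied to any identity
    tag = hmac.new(ISSUER_KEY, b"over18:" + nonce, hashlib.sha256).digest()
    return nonce, tag

def verify_token(nonce: bytes, tag: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, b"over18:" + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

nonce, tag = issue_token()
print(verify_token(nonce, tag))  # True - and nothing here names the holder
```

A real deployment would use public-key signatures or an actual ZKP, so the verifier doesn’t need the issuer’s secret; but the gist - proving a claim without revealing who you are - is the same.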

(If anyone is interested in the concept on general grounds, Wikipedia has a nice article on that.)


That’s some damn great text. It avoids all that discussion about free will, as it focuses on autonomy instead; and it shuts down the “BuT I HaVe NoTHiNg tO HiDe” discourse right off the bat, by mentioning that being watched does change your behaviour.

(BTW, dunno if you guys noticed, but this “nothing to hide” discourse often carries an implicit accusation: “since you seek anonymity, you’re assumed to be a shitty person”.)

A. A Zero-Knowledge Proof system is being trialled at the BBC. Imagine that a minor wants to watch a programme for people over 18 years of age. Through this system, which provides a verified identity, the broadcaster will know whether the person is of legal age or not.

Frankly, I feel like the main benefit of such a system won’t be child protection, but shutting up abusive entities that babble shit like “think of the children!”.

A. Any decision that can significantly affect a person’s life. AI is not a moral agent, it cannot be responsible for harming someone or denying them an important opportunity. Nor should we delegate to AI jobs in which we value the empathy of a fellow citizen who can understand what we feel.

Emphasis mine. *slow clap*


Yeah, it’s actually good. People use it even for trivial stuff nowadays; and you don’t need a Pix key to send money, only to receive it. (And as long as your bank allows you to check the account through an actual computer, you don’t need a cell phone either.)

Perhaps the only flaw is shared with the Asian QR codes - scams are a bit of a problem; you could, for example, tell someone that the transaction will be one value and generate a code demanding a bigger one. But I feel like that’s less of an issue with the system and more with the customer, given that the system shows you who you’re sending money to, and how much, before confirmation.

I’m not informed on Tikkie and Klarna, besides one being Dutch and the other Swedish. How do they work?


Brazil ended up with a third system: Pix. It boils down to the following:

  • The money receiver sends the payer either a “key” or a QR code.
  • The payer opens their bank’s app and uses it to either paste the key or scan the QR code.
  • The payer defines the value, if the code is not dynamic (more on that later).
  • The payer confirms the transaction, and an electronic voucher is issued.

The “key” in question can be your cell phone number, physical/juridical person registry number, e-mail, or even a random number. You can have up to five of them.

Regarding dynamic codes: it’s also possible to generate a key or QR code that applies to a single transaction. In that case, the value to be paid is already included.
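A rough sketch of that flow, with all names and types made up:

```python
# Static keys let the payer set the value; dynamic codes carry it already.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PixCode:
    receiver: str           # who gets paid (shown before confirmation)
    value: Optional[float]  # fixed for dynamic codes, None for static

def pay(code: PixCode, value_entered: Optional[float] = None) -> str:
    value = code.value if code.value is not None else value_entered
    assert value is not None, "static code: the payer must enter a value"
    # The app shows receiver and value before confirming (see the scam
    # discussion above), then the voucher is issued.
    return f"voucher: paid {value:.2f} to {code.receiver}"

print(pay(PixCode("Maria", None), 25.0))  # static key, payer sets value
print(pay(PixCode("Loja X", 99.90)))      # dynamic code, value included
```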

Frankly the system surprised me. It’s actually good and practical; and that’s coming from someone who’s highly suspicious of anything coming from the federal government, and who hates cell phones. [insert old man screaming at clouds meme]


Do you mind if I address this comment alongside your other reply? Both are directly connected.

I was about to disagree, but that’s actually really interesting. Could you expand on that?

If you want to lie without getting caught, your public submission should have neither the hallucinations nor stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was people increasing their output by 900%, submitting ten really shitty pics or paragraphs that look a lot like someone else’s, instead of one decent and original piece. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree) - not proofreading their output.

Regarding code, from your other comment: note that some Linux and *BSD distributions, like Gentoo and NetBSD, banned AI submissions. I believe it to be the same deal as with news or art.


Sometimes. Sometimes it’s more accurate than anyone in the village.

So does the village idiot. Or a tarot reader. Or a coin toss. And you’d still be a fool if your writing relied on the output of those three. Or of an LLM bot.

And it’ll be reliably getting better.

You’re distorting the discussion from “now” to “the future”, and then vomiting certainty on future matters. Both things make me conclude that reading your comment further would be solely a waste of my time.


3. If you lie about it and get caught people will correctly call you a liar, ridicule you, and you lose trust. Trust is essential for content creators, so you’re spelling your doom. And if you find a way to lie without getting caught, you aren’t part of the problem anyway.


For writers, that “no AI” is not just the equivalent of “100% organic”; it’s also the equivalent of saying “we don’t let the village idiot write our texts when he’s drunk”.

Because, even if we shed all the paranoia surrounding A"I", those text generators still state things that are wrong, without a single shadow of doubt.


Think of the available e-books as a common pool, from the point of view of the people buying them: that pool is in perfect condition if all the books there are DRM-free, or ruined if all of them are infested with DRM.

When someone buys a book with DRM, they’re degrading that pool, as they’re telling sellers “we buy books with DRM just fine”. And yet people keep doing it, because:

  • They had an easier time finding the copy with DRM than a DRM-free one.
  • The copy with DRM might be cheaper.
  • The copy with DRM is bought through services that they’re already used to, and registering to another service is a bother.
  • If the copy with DRM stops working, that might be fine, if the buyer only needed the book in the short term.
  • Sharing is not a concern if the person isn’t willing to share on first place.
  • They might not even know what the deal is, so they don’t perceive the malus of DRM-infested books.

So in a lot of situations buyers beeline towards the copy with DRM, as it’s individually more convenient, even if it ruins the pool for everyone in the process. That’s why I said that it’s a tragedy of the commons.
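A toy model of that dynamic, with made-up numbers, just to show the mechanism:

```python
# Each buyer picks whichever copy is individually more convenient; each
# DRM purchase tells sellers that DRM sells fine, eroding the pool.
def buyer_choice(drm_convenience: float, drm_free_convenience: float) -> str:
    return "drm" if drm_convenience > drm_free_convenience else "drm-free"

drm_free_share = 0.5  # share of the pool that is DRM-free
for _ in range(10):
    # DRM copies are easier to find, so they're individually convenient
    choice = buyer_choice(0.8, drm_free_convenience=drm_free_share)
    if choice == "drm":
        drm_free_share *= 0.9  # sellers follow the demand
print(f"DRM-free share of the pool: {drm_free_share:.2f}")  # degraded for all
```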

As you correctly highlighted, that model relies on the idea that the buyer is selfish; as in, they won’t care about the overall impact of their actions on the others, only on themselves. That is a simplification and needs to be taken with a grain of salt; however, note that people are more prone to act selfishly if being selfless takes too much effort out of them. And those businesses selling you DRM-infested copies know it - that’s why they enclose you, because leaving that enclosure to support DRM-free publishers takes effort.

I guess in the end we are talking about the same

I also think so. I’m mostly trying to dig further into the subject.

So the problem is not really consumer choice, but rather that DRM is allowed in its current form. But I admit that this is a different discussion

Even being a different discussion, I think that one leads to another.

Legislating against DRM might be an option, but it’s easier said than done - governments are especially unruly, and they’d rather support corporations than populations.

Another option, as weird as it might sound, might be to promote that “if buying is not owning, pirating is not stealing” discourse. It tips the scale from the business’ PoV: if people would rather pirate than buy books with DRM, might as well offer them DRM-free to increase sales.


Does this mean that I need to wait until September to reply? /jk

I believe that the problem with the neolibs in this case is not the descriptive model (tragedy of the commons) that they’re using to predict a potential issue; it’s the “magical” solution that they prescribe for it, which “happens” to align with their economic ideology while failing to address that:

  • in plenty of cases privatisation worsens the erosion of the common resource, due to the introduction of competition;
  • the model applies especially well to businesses, which behave more like the mythical “rational agent” than individuals do;
  • what you need to solve the issue is simply “agreement”. Going from “agreement” to “privatise it!!!1one” is an insane jump of logic on their part.

And while all models break if you look too hard at them, I don’t think this one does here - it explains well why individuals keep buying DRM-stained e-books, even if this ultimately hurts them as a collective, by reducing the availability of DRM-free books.

(And it isn’t like you can privatise it, as the neolibs would eagerly propose; it is a private market already.)

I’m reading the book that you recommended (thanks for the rec, by the way!). At a quick glance, it seems to propose self-organisation as a way to solve issues concerning common pool resources; it might work in plenty of cases, but certainly not here, as there’s no way to self-organise people who buy e-books.

And frankly, I don’t know a solution either. Perhaps piracy might play an important and positive role? It increases the desirability of DRM-free books (you can’t share the DRM-stained ones), and puts a check on the amount of obnoxiousness and rug-pulling that corporations can submit you to.


This is going to be interesting. I’m already thinking about how it would impact my gameplay.

The main concern for me is sci packs spoiling. Ideally they should be consumed in situ, so I’d consider moving the research to Gleba and shipping the other sci packs to it. This way, if something does spoil, at least the spoilage is near where I can use it. Probably easier said than done - odds are that other planets have “perks” that would make centralising science there more convenient.

You’ll also probably want to make the machines run as fast as possible, since the products inherit spoilage from the ingredients. Direct insertion, speed modules, and upgrading machines ASAP will be essential there - you want to minimise the time between the fruit being harvested and it becoming something that doesn’t spoil (like plastic or science).

Fruits outputting pulp and seeds also hints at an oil-like problem, as you need to get rid of byproducts that you might not be using. Use only the seeds and you’re left with the pulp; use only the pulp and you’re left with the seeds. The FFF hints that you can burn stuff, but that feels wasteful.


I don’t think that mass production is doing it alone, but it’s a factor. It’s what prevents GameFreak from changing the game’s core gameplay; and without meaningful changes to the core gameplay, they need to attract players in other ways.

And one of those ways is making the mons of a newer gen stronger than the ones of the gen before. (Another is introducing “gimmick mechanics” that get forgotten in the next gen.)



I also apologise for the tone. That was a knee-jerk reaction from my part; my bad.

(In my own defence, I’ve been discussing this topic with tech bros, and they rather consistently invert the burden of proof. Often enough to make me evoke Brandolini’s Law. You probably know which “types” I’m talking about.)

On-topic: given that “smart” is still an internal attribute of the black box, perhaps we could better gauge whether those models are likely to become an existential threat by 1) what they output now, 2) what they might output in the future, and 3) what we [people] might do with it.

It’s also easier to work with your example productively this way. Here’s a counterpoint:


The prompt asks for eight legs, and only one pic got it right; two ignored the count, and one shows ten legs. That’s 25% accuracy.

I believe that the key difference between “your” unicorn and “my” eight-legged dragon is in the training data. Unicorns are fictitious but common in popular culture, so there are lots of unicorn pictures to feed the model with; while eight-legged dragons are something that I made up, so there’s no direct reference, even if you could logically combine other references (like a spider + a dragon).

So their output is strongly limited by the training data, and it doesn’t seem to follow any strong logic. What they might output in the future depends on what we feed in; their potential for decision-taking is rather weak, as they wouldn’t be able to deal with unpredictable situations. And so is their ability to go rogue.

[Note: I repeated the test with a horse instead of a dragon, within the same chat. The output was slightly less bad, confirming my hypothesis - pics of eight-legged horses exist, thanks to Sleipnir.]

Neural nets

Neural networks are a different can of worms for me, as I think that they’ll outlive LLMs by a huge margin, even if the current LLMs use them. However, how they’ll be used is likely to be considerably different.

For example, current state-of-the-art LLMs are coded with some “semantic” supplementation near the embedding, added almost like an afterthought. However, semantics should play a central role in the design of the transformer - because what matters is not the word itself, but what it conveys.

That would be considerably closer to a general intelligence than modern LLMs are - because you’d effectively demote language processing to input/output, which might as well be subbed with something else, like pictures. In this situation I believe that the output would be far more accurate, and it could theoretically handle novel situations better. Then we could have some concerns about AI being an existential threat - because people would use this AI for decision-taking, and it might output decisions that go terribly right, as in that “paperclip factory” thought experiment.

The fact that we don’t see developments in this direction yet shows me that it’s easier said than done, and that we’re really far from it.


They’re even more exploration-heavy than Emerald. Roughly, the earlier the game, the bigger the focus on exploration, as hardware limitations didn’t allow much storytelling.

Also, I recommend playing their remakes instead of the original games; the originals are extremely buggy and have huge balance issues. (For example, there’s a shore in Red/Blue that you can use to catch Safari Zone mons. And Psychic mons are crazy overpowered - the only Ghosts in the region are partially Poison, there are a lot of other Poison types, and since Gen 1 came before the special split, Psychic mons got huge offensive and defensive capabilities.)


The thing is that they’re complying with the court case by letter, but not by spirit. Sure, there is a system to report and remove copyright infringement; but the system is 100% automated, full of failures that would require manual review, and Google can’t be arsed to spend the money necessary to fix it.


In your case I wouldn’t recommend trumpets-and-water Emerald then, as it’s exploration-heavy - there are huge routes, and often what you want is in a specific place. You’ll probably have a great time with Gen 4 instead, especially Platinum.