• 2 Posts
  • 159 Comments
Joined 1Y ago
Cake day: Jun 11, 2023


It’s “whataboutism” in the sense we’re interrogating focus. Why do you think white ethnonationalists spend so much time asserting “white lives matter?” Because there’s only so much air in the room, and they know giving air to one cause deprives another.

I think it’s worth wondering why people spend so much time discussing Israel/Palestine and so little discussing other issues that are at least as large from a “people impacted” perspective. Obviously there’s also an African infantilization (that is to say, racist) double standard here — we simply don’t expect Africa to have human rights. But I would say there is certainly also an Israel double standard, and it is antisemitic in the same way saying “well of course Sierra Leone is a hellhole, there’s no news there” is racist.

You are not a news outlet. But you choose what you're spending your time and effort on. And it is this. I think many people don't interrogate why they get so involved and what their opinions actually mean in terms of what their focus accomplishes and what it broadcasts.

I apologize for choosing you as the vehicle for this message; I don’t mean to attack you personally. There are a ton of people doing this and your message was as good as any other to demonstrate my point.


Higher expectations are reasonable! Would you say two times higher? Ten times higher? A hundred times higher?

As a baseline, how much have you posted about Sierra Leone and the human rights abuses there in the last year?


Your second point is entirely correct; see also self-hating gays in the Log Cabin Republicans.

I think the shield for your first point is pretty narrow these days. About a decade ago that point held a lot more salience, but as my “new antisemitism” link discusses, the position has been adopted so vigorously by antisemites that I think it is indeed very close to antisemitic unless deployed extremely carefully.

Yes, criticism of Israel is not inherently antisemitic. But since this canard is so often invoked by idle and ignorant spectators, with no real understanding of Israeli or Palestinian politics, inserting themselves into a fraught and unhappy situation, usually specifically to criticize or delegitimize only Israel… it’s tough to see how that isn’t a special standard applied only to Israel. Or, worse, it’s invoked by real antisemites hoping to get bystanders on-side with actual antisemitism by cloaking it as criticism of Israel.

As a concrete example of this new antisemitism – in 2017, Hamas altered its charter, which was wildly and outright antisemitic, to specifically state that it doesn’t actually want to kill all Jews as previously stated, but only the occupiers of Palestine. Given their actions, the huge amount of specifically anti-Jewish sentiment in Gaza, and even the incredibly virulent language in their old charter, do you think they actually changed their minds about Jews? Or are they simply cloaking their antisemitism in a package that more people might agree with these days? A new kind of antisemitism?


How in the world did this person call anyone an antisemite? Are you responding to the right post?

Nothing they do will ever cause the international outrage any other country’s actions would cause.

Israel has been the target of 45.9% of all UNHRC condemnations ever passed. Do you believe Israel is committing 45.9% of all human rights atrocities on Earth right now?

You are right Israel is measured by double standards, but it’s not that its actions produce less outrage than other countries’ – they produce far more. This is new antisemitism.

It’s not actually necessary to rake Israel over the coals more than other countries. Doing so is a double standard. Sierra Leone has roughly the population of Israel; if you aren’t holding it to account for its human rights abuses as much as you are Israel, you are engaging in that double standard.


I was reading about new antisemitism the other day and I thought it was interesting.

Most canny antisemites have turned to the old (and formerly totally fine) canard “criticizing Israel is not antisemitic” to shield their actual antisemitic criticism. Not wanting to call Hamas a terrorist organization is a perfect example. They’re only terrorizing Israel, which isn’t inherently antisemitic! /s

But yeah it’s really everywhere now. Sometimes it’s mask off as in this incident. But frequently it’s mask on.


It was tulips all along, but stupider.

Your day is coming soon, cryptocurrency.


This title definitely makes it sound like this is a Democrat policy goal, or that Democrats are actually responsible for it, when in fact, as the article gradually makes clear, the people responsible are opposed to mainstream Democrat goals:

Democratic lawmakers and the Joe Biden administration have touted a wealth tax as a way to tackle record levels of inequality and fund programs that slash poverty and expand access to health care and education.

The people involved are not politicians. They are an advocacy group and apparently unaffiliated with the Democratic organization at large. The main guy seems as “Democrat” as Tulsi Gabbard, since he spent a lot of time and energy defending Trump and his policies on various talk shows.

Anyway, kind of a disingenuous framing.


It’s not irrelevant at all. Even if you create a program to evolve Pong, that program was also designed by a human. As a computer programmer you should know that no computer program will just become Pong; what an idiotic idea.

You just keep pivoting away from how you were using words to them meaning something entirely different; this entire argument is worthless. At least LLMs don’t change the definitions of the words they use as they use them.


I’m giving up here, but evolution did not “design” us. LLMs were designed and created with a purpose in mind, and they fulfill that purpose. Humans were not designed.


If you truly believe humans are simply autocompletion engines then I just don’t know what to tell you. I think most reasonable people would disagree with you.

Humans have actual thoughts and emotions; LLMs do not. The neural networks that LLMs use, while based conceptually in biological neural networks, are not biological neural networks. It is not a difference of complexity, but of kind.

Additionally, no matter how many statistics, how much CPU power, or how much data you give an LLM, it will not develop cognition, because it is not designed to mimic cognition. It is designed to link words together. It does that and nothing more.

A dog is more sentient than an LLM in the same way that a human is more sentient than a toaster.


LLMs do not “teach,” and that is why learning from them is dangerous. They synthesize words and return other words, but they do not understand the content presented to them in any sense. Because of this, there is the chance that they are simply spouting bullshit.

Learn from them if you like, but remember they are absolutely no substitute for a human, and basically everything they tell you must be checked for correctness.


Lol… come on. Your second source disagrees with your assertion:

Via all three analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.

You are wrong and it is quite settled. Read more, including the very sources you’re trying to recommend others read.


The two types of loops you’re equating are totally different; saying that a computer executing a program and an animal living are actually the same thing is very silly indeed. Like, air currents have a “core loop” of blowing around a lot, but no one says they’re intelligent or that they’re like computer programs or humans.

You’ve ignored my main complaint. I said that you treat LLMs and humans at different levels of abstraction:

No; you are analogizing them but losing sight of their differences in the process. I am not abstracting LLMs. That is all they do. That is what they were designed to do and what they accomplish.

You are drawing a comparison between a process humans have that generates consciousness, and literally the entirety of an LLM’s existence. There is nothing else to an LLM. Whereas if you say “well a human is basically just bouncing electro-chemical signals between neurons and moving muscles” people (like me) would rightly say you were missing the forest for the trees.

The “trees” for an LLM are their neural networks and word vectors. The forest is a word prediction algorithm. There is no higher level to what they do.


That’s a fair assessment but besides the point: A thermostat has an internal state it can affect (the valve), is under its control and not that of silly humans (that is, not directly) aka an internal world.

I apologize if I was unclear when I spoke of an internal world. I meant interior thoughts and feelings. I think most people would agree sentience is predicated on the idea that the sentient object has some combination of its own emotions, motivations, desires, and ability to experience the world.

LLMs have as much of that as a thermostat does; that is, zero. It is a word completion algorithm and nothing more.

Your paper doesn’t bother to define what these T-systems are, so I can’t speak to your categorization. But I think rating the mental abilities of thermostats versus computers versus ChatGPT versus human minds is totally absurd. They aren’t on the same scale; they’re different kinds of things. Human minds have actual sentience. Everything else in that list is a device, created by humans, to do a specific task and nothing more. None of them are anything more than that.


LLMs already do quite a few things they were not designed to do.

No; they do exactly what they were designed to do, which is convert words to vectors, do math with them, and convert it back again. That we’ve found more utility in this use does not change their design.
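If it helps to picture that loop, here is a toy sketch in Python. The vocabulary and the random matrices are invented purely for illustration; this is nothing like a production model’s actual code, just the same words-to-vectors-and-back shape.

```python
import numpy as np

# Toy sketch of the "words -> vectors -> math -> words" loop.
# The vocabulary and the random matrices below are made up for illustration;
# a real LLM does the same kind of thing with billions of trained parameters.
vocab = ["the", "cat", "sat", "on", "mat", "dog"]
word_to_id = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 8))  # words -> vectors
weights = rng.normal(size=(8, len(vocab)))     # the "math" in the middle

def next_word(prompt: str) -> str:
    ids = [word_to_id[w] for w in prompt.split()]
    context = embeddings[ids].mean(axis=0)      # squash the context into one vector
    scores = context @ weights                  # score every word in the vocabulary
    return vocab[int(np.argmax(scores))]        # vectors -> word again

print(next_word("the cat sat on"))              # prints whichever word scores highest
```

A real model swaps the random matrices for billions of trained parameters and a much deeper stack of math, but the shape of the operation is the same: text in, vectors, arithmetic, text out.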

What if “the internet” developed some form of self-awareness - would we know?

Uh what? Like how would it? This is just technomystical garbage. Enough data in one place and enough CPU in one place doesn’t magically make that place sentient. I love it as a book idea, but this is real life.

What about feedback and ability to self-modify?

This would be a significant design divergence from what LLMs are, so I’d call those things something different.

But in any event that still would not actually give LLMs anything approaching: thoughts, feelings, or rationality. Or even the capability to understand what they were operating on. Again, they have none of those things and they aren’t close to them. They are word completion algorithms.

Humans are not word completion algorithms. We have an internal existence and thought process that LLMs do not have and will never have.

Perhaps at some point we will have true artificial intelligence. But LLMs are not that, and they are not close.


They have no core loop. You are anthropomorphizing them. They are literally no more self-directed than a calculator, and have no more of a “core loop” than a calculator does.

Do you believe humans are simply very advanced and very complicated calculators? I think most people would say “no.” While humans can do mathematics, we are entirely different from calculators. We experience sentience: thoughts, feelings, emotions, rationality. None of the devices we’ve ever built, no matter how clever, has any of those things; and neither do LLMs.

If you do think humans are as deterministic as a calculator then I guess I don’t know what to tell you other than I disagree. Other people actually exist and have internal realities. LLMs don’t. That’s the difference.


By telling me you are.

If you ask ChatGPT if it is sentient, or has any thoughts, or experiences any feelings, what is its response?

But suppose it’s lying.

We also understand the math underlying it. Humans designed and constructed it; we know exactly what it is capable of and what it does. And there is nothing inside it that is capable of thought or feeling or even rationality.

It is a word generation algorithm. Nothing more.


we can be expressed as algorithms

Wow, do you have any proof of this wild assertion? Has this ever been done before or is this simply conjecture?

a thermostat also has an internal world

No. A thermostat is an unthinking device. It has no thoughts or feelings and no “self.” In this regard it is the same as LLMs, which also have no thoughts, feelings, or “self.”

A thermostat executes actions when a human acts upon it. But it has no agency and does not think in any sense; it simply does what it was designed to do. LLMs are to language as thermostats are to controlling HVAC systems, and nothing more than that.

There is as much chance of your thermostat gaining sentience from more computing power as there is of an LLM doing so.


I think it’s hilarious you aren’t listening to anyone telling you you’re wrong, even the bot itself. Must be nice to be so confident.


I would encourage you to ask ChatGPT itself if it is intelligent or performs reasoning.


I think I write well :) I am not an LLM though.


Even if they are a result of complexity, that still doesn’t change the fact that LLMs will never be complex in that manner.

Again, LLMs have no self-awareness. They are not designed to have self-awareness. They do not have feelings or emotions or thoughts; they cannot have those things because all they do is generate words in response to queries. Unless their design fundamentally changes, they are incompatible with consciousness. They are, as I’ve said before, complicated autosuggestion algorithms.

Suggesting that throwing enough hardware at them will change their design is absurd. It’s like saying if you throw enough hardware at a calculator, it will develop sentience. But a calculator will not do that because all it’s programmed to do is add numbers together. There’s no hidden ability to think or feel lurking in its design. So too LLMs.


Those two things can be true at the same time.

No, they can’t. The question is fundamentally: do humans have any internal thoughts or feelings, or are they algorithms? If you believe other people aren’t literally NPCs, then they are not LLMs.


May as well ask me to prove that we know enough about calculators to say they won’t develop sentience while I’m at it.


It is not hand-waving; it is the difference between an LLM, which, again, has no cognizance, no agency, and no thought – and humans, which do. Do you truly believe humans are simply mechanistic processes that when you ask them a question, a cascade of mathematics occurs and they spit out an output? People actually have an internal reality. For example, they could refuse to answer your question! Can an LLM do even something that simple?

I find it absolutely mystifying you claim you’ve studied this when you so confidently analogize humans and LLMs when they truly are nothing alike.



It is not a model of objects. It’s a model of words. It doesn’t know what those words themselves mean or what they refer to; it doesn’t know how they relate together, except that some words are more likely to follow other words. (It doesn’t even know what an object is!)

When we say “cat,” we think of a cat. If we then talk about a cat, it’s because we love cats, or hate them, or want to communicate something about them.

When an LLM says “cat,” it has done so because a tokenization process selected it from a chain of word weights.

That’s the difference. It doesn’t think or reason or feel at all, and that does actually matter.
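A toy sketch of that selection step, just to make the contrast concrete. The candidate words and weights below are made up; a real model scores a huge vocabulary with far more elaborate sampling, but the mechanism is the same.

```python
import random

# Made-up weights over candidate next tokens. The word comes out of a weighted
# pick over these numbers, not out of any idea of what a cat is.
candidates = {"cat": 0.62, "dog": 0.25, "mat": 0.13}

def pick_next(weights: dict[str, float]) -> str:
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

print(pick_next(candidates))  # usually "cat", with no concept of a cat anywhere
```

Whether the pick lands on a sensible word depends entirely on those weights, not on any understanding of the thing the word names.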


No one is saying there are problems with the bots (though I don’t understand why you’re being so defensive of them – they have no feelings, so describing their limitations doesn’t hurt them).

The problem is what humans expect from LLMs and how humans use them. Their purpose is to string words together in pretty ways. Sometimes those ways are also correct. Being aware of what they’re designed to do, and their limitations, seems important for using them properly.


A chatbot has no use for that, it’s just there to mush through lots of data and produce some, it doesn’t have or should worry about its own existence.

It literally can’t worry about its own existence; it can’t worry about anything because it has no thoughts or feelings. Adding computational power will not miraculously change that.

Add some long term memory, bigger prompts, bigger model, interaction with the Web, etc. and you can build a much more powerful bit of software than what we have today, without even any real breakthrough on the AI side.

I agree this would be a very useful chatbot. But it would still be no more sentient than a toaster. Nor would it be conscious.


No one is saying “they’re useless.” But they are indeed bullshit machines, for the reasons the author (and you yourself) acknowledged. Their purpose is to choose likely words. That likely and correct are frequently the same shouldn’t blind us to the fact that correctness is a coincidence.


Obviously you should do what you think is right, so I mean, I’m not telling you you’re living wrong. Do what you want.

The reason to not trust a human is different from the reasons not to trust an LLM. An LLM is not revealing to you knowledge it understands. Or even knowledge it doesn’t understand. It’s literally completing sentences based on word likelihood. It doesn’t understand any of what it’s saying, and none of it is rooted in any knowledge of the subject of any kind.

I find that concerning in terms of learning from it. But if it worked for you, then go for it.


People think they are actually intelligent and perform reasoning. This article discusses how and why that is not true.


It is your responsibility to prove your assertion that if we just throw enough hardware at LLMs they will suddenly become alive in any recognizable sense, not mine to prove you wrong.

You are anthropomorphizing LLMs. They do not reason and they are not lazy. The paper discusses a way to improve their predictive output, not a way to actually make them reason.

But don’t take my word for it. Go talk to ChatGPT. Ask it anything like this:

“If an LLM is provided enough processing power, would it eventually be conscious?”

“Are LLM neural networks like a human brain?”

“Do LLMs have thoughts?”

“Are LLMs similar in any way to human consciousness?”

Just always make sure to check the output of LLMs. Since they are complicated autosuggestion engines, they will sometimes confidently spout bullshit, so their output must be examined for correctness. (As my initial post discussed.)


No, it’s true, “luck” might be overstating it. There’s a good chance most of what it says is as accurate as the corpus it was trained on. That doesn’t personally make me very confident, but ymmv.


I’m not guessing. When I say it’s a difference of kind, I really did mean that. There is no cognition here; and we know enough about cognition to say that LLMs are not performing anything like it.

Believing LLMs will eventually perform cognition with enough hardware is like saying, “if we throw enough hardware at a calculator, it will eventually become alive.” Even if you throw all the hardware in the world at it, there is no emergent property of a calculator that would create sentience. So too LLMs, which really are just calculators that can speak English. But just like calculators they have no conception of what English is and they do not think in any way, and never will.


Basically the problem is point 3.

You obviously know some of what it’s telling you is inaccurate already. There is the possibility it’s all bullshit. Granted a lot of it probably isn’t, but it will tell you the bullshit with the exact same level of confidence as actual facts… because it doesn’t know Galois theory and it isn’t teaching it to you, it’s simply stringing sentences together in response to your queries.

If a human were doing this, we would rightly proclaim the human a bad teacher who didn’t know their subject, and say that you should go somewhere else to get your knowledge. That same critique should apply to the LLM as well.

That said it definitely can be a useful tool. I just would never fully trust knowledge I gained from an LLM. All of it needs to be reviewed for correctness by a human.


Yeah definitely not saying it’s not useful :) But it also doesn’t do what people widely believe it does, so I think articles like this are helpful.


LLMs are fundamentally different from human consciousness. It isn’t a problem of scale, but kind.

They are like your phone’s autocomplete, but very, very good. But there’s no level of “very good” for autocomplete that makes it human, gives it sentience, or allows it to understand the words it is suggesting. It simply returns the next most-likely word in a response.

If we want computerized intelligence, LLMs are a dead end. They might be a good way for that intelligence to speak pretty sentences to us, but they will never be that themselves.


And also it’s no replacement for actual research, either on the Internet or in real life.

People assume LLMs are like people, in that they won’t simply spout bullshit if they can avoid it. But as this article properly points out, they can and do. You can’t really trust anything they output. (At least not without verifying it all first.)


They only use words in context, which is exactly the problem. An LLM doesn’t know what the words mean or what the context means; it’s glorified autocomplete.

I guess it depends on what you mean by “information.” Since all of the words it uses are meaningless to it (it doesn’t understand anything of what it is asked or what it says), I would say it has no information and knows nothing. At least, nothing more than a calculator knows when it returns 7 + 8 = 15. It doesn’t know what those numbers mean or what the result represents; it’s simply returning the result of a computation.

So too LLMs responding to language.



Mastodon?
I'm pretty new to the whole Lemmy thing, but I figured I'd ask people's opinions on whether it's worth it to get into Mastodon too now that I'm officially a member of the Fediverse. Is it active? Is it worth it? Have you had good experiences there?