25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • 0 Posts
  • 254 Comments
Joined 1Y ago
Cake day: Jun 14, 2023


She knows not to trust it. If the AI had suggested “God did it” or metaphysical bullshit I’d reevaluate. But I’m not sure how to even describe that to a Google search. Sending a picture and asking about it is really fucking easy. Important answers aren’t easy.

I mean I agree with you. It’s bullshit and untrustworthy. We have conversations about this. We have lots of conversations about it actually, because I caught her cheating at school using it so there’s a lot of supervision and talk about appropriate uses and not. And how we can inadvertently bias it by the questions we ask. It’s actually a great tool for learning skepticism.

But some things, a reasonable answer just to satisfy your brain is fine whether it’s right or not. I remember spending an entire year in chemistry learning absolute bullshit, only for the next year to be told that it was all garbage and here’s how it really works. It’s fine.


I don’t buy into it, but it’s so quick and easy to get an answer that, if it’s not something important, I’m guilty of using an LLM and calling it good enough.

There are no ads and no SEO. Yeah, it might very well be bullshit, but most Google results are also bullshit, depending on the subject. If it doesn’t matter, and it isn’t easy to know whether I’m getting bullshit from a website anyway, an LLM is good enough.

I took a picture of discolorations on a sidewalk and asked ChatGPT what was causing them because my daughter was curious. It said metal left on the surface rusts and leaves behind those streaks. But they all had holes in the middle, so we decided there were metallic rocks mixed into the surface that had rusted away.

Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer and I’ve already warned her it could be bullshit. But curiosity was satisfied.


Fuck all that noise. Give me more Baldur’s Gate. I’m the biggest Star Wars fan and I haven’t bought a game since KotOR other than Survivor and the sequel. Because every time one catches my interest they start talking about all the cool DLC or things that are locked behind months or years of progression. I just won’t.


I’ve visited NY and Chicago, but I guess my digs were nice enough not to notice. And I used to live 75 minutes (assuming no traffic lol) from DC—far enough away that I didn’t have to deal with that kind of thing. Just like maybe some highway noise from far away.

I did once have a townhouse with a rail track in the back yard, but I knew what I was getting into in that case. It was only noisy when there was a train.


I don’t think I’d want to live anywhere it is necessary to worry about sound reduction levels. Wow.



Losing an expensive - in terms of lives and money - and pointless war is devastating domestically. Will anything really change? I don’t know, but there’s going to be a reaction.



An analysis I watched suggests that by occupying Russian territory, Ukraine forces Russia to either let them dig in long term or stay on the offensive to push them out, just when Russia would be slowing its summer offensive. Ukraine is fighting a defensive war of attrition, and its strategy is to force Russia to keep attacking while Ukraine has the defensive advantage.

Now, this shit is way above my pay grade, but that analysis explains why this could be a smart move. So it sounds like there is reason to be cautiously optimistic.


I have a very feminist outlook on things, but I enjoy some problematic things. I know it’s not very progressive of me, but it is what it is. I acknowledge they are problematic.

Which is just to say, I don’t like this. I understand, but I’m not a fan.


That’s hilarious. I love LLMs, but an LLM is a tool, not a product, and everyone trying to make it a standalone thing is going to be sorely disappointed.


The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input. How is that deterministic?
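Here’s a toy sketch of what I mean, with made-up numbers rather than a real model: every token is a weighted random draw, and each draw feeds into the next, so unless the provider exposes and pins the seed you can’t reproduce the output from the input alone.

```python
import random

# Made-up next-token probabilities for some prompt; a real model has ~100k
# possible tokens here and recomputes the distribution after every draw.
next_token_probs = {"blue": 0.55, "falling": 0.25, "clear": 0.15, "lava": 0.05}

def sample_next_token(seed=None):
    """One weighted random draw, which is how temperature sampling works."""
    rng = random.Random(seed)
    tokens, weights = zip(*next_token_probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# No pinned seed: repeated runs can and do come back different, and in a real
# model every later token depends on the earlier choices.
print([sample_next_token() for _ in range(5)])

# Pinned seed: only now is the run repeatable.
print([sample_next_token(seed=42) for _ in range(5)])
```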

The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there and change things with code, unless you could write code for morality, but it’s doubly impossible given that you can’t.


Not exactly. My argument is that the more safety controls you build into the model, the less useful the model is at anything. The more you bend the responses away from true (whatever that is), the less of a tool you have.

Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.

Yeah I agree with that, but I’m saying protect people from the misuse of the tool. Don’t break the tool to the point where it’s worthless.


Again a biometric lock neither prevents immoral use nor allows moral use outside of its very narrow conditions. It’s effectively an amoral tool. It presumes anything you do with your gun will be moral and other uses are either immoral or unlikely enough to not bother worrying about.

AI has a lot of uses compared to a gun and just because someone has an idea for using it that is outside of the preconceived parameters doesn’t mean it should be presumed to be immoral and blocked.

Further, the biometric lock analogy falls apart when you consider that an LLM is a broad-scoped tool for use by everyone, while your personal weapon can be very narrowly scoped to you.

Consider a gun model that can only be fired by left-handed people because most gun crimes are committed by right-handed people. Yeah, you’re ostensibly preventing 90% of immoral use of the weapon, but at the cost of it no longer being a useful tool for most people.


I think I’ve said a lot in comments already and I’ll leave that all without relitigating just for argument’s sake.

However, I wonder if I haven’t made clear that I’m drawing a distinction between the model that generates the raw output and the application that puts the model to use. I have an application that generates output via the OAI API and then scans both the prompt and the output to make sure they are appropriate for our particular use case.

Yes, my product is 100% censored and I think that’s fine. I don’t want the customer service bot (which I hate but that’s an argument for another day) at the airline to be my hot AI girlfriend. We have tools for doing this and they should be used.
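As a rough sketch of that shape (the function names and model are placeholders, not our actual code):

```python
from openai import OpenAI

client = OpenAI()

def is_appropriate(text: str) -> bool:
    """Stand-in for our use-case filter: a moderation call plus whatever
    domain-specific rules the product needs."""
    return not client.moderations.create(input=text).results[0].flagged

def answer(prompt: str) -> str:
    # Scan the prompt before it ever reaches the model.
    if not is_appropriate(prompt):
        return "Sorry, I can't help with that."
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content or ""
    # Scan the output too. All of this steering lives in the application,
    # not inside the model itself.
    return reply if is_appropriate(reply) else "Sorry, I can't help with that."
```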

But I think the models themselves shouldn’t be heavily steered because it interferes with the raw output and possibly prevents very useful cases.

So I’m just talking about fucking up the model itself in the name of safety. ChatGPT walks a fine line because it’s a product not a model, but without access to the raw model it needs to be relatively unfiltered to be of use, otherwise other models will make better tools.


There are biometric-restricted guns that attempt to ensure only authorized users can fire them.

This doesn’t prevent an authorized user from committing murder. It would prevent someone from looting it off of your corpse and returning fire to an attacker.

This is not a great analogy for AI, but it’s still effectively amoral anyway.

The argument for limiting magazine capacity is that it prevents using the gun to kill as many people as you otherwise could with a larger magazine, which is certainly worse, in moral terms.

This is closer. Still not a great analogy for AI, but we can agree that, outside of military and police action, mass murder is a more likely use for a large magazine than any alternative. That being said, ask a Ukrainian how moral it would be to go up against Russian soldiers with a 5-round mag.

I feel like you’re focused too narrowly on the gun itself and not the gun as an analogy for AI.

you could have a camera on the barrel of a hunting rifle that is running an object recognition algorithm that would only allow the gun to fire if a deer or other legally authorized animal was visible

This isn’t bad. We can currently use AI to examine the output of an AI to infer things about the nature of what is being asked and the output. It’s definitely effective in my experience. The trick is knowing what questions to ask about in the first place. But for example OAI has a tool for identifying violence, hate, sexuality, child sexuality, and I think a couple of others. This is promising, however it is an external tool. I don’t have to run that filter if I don’t want to. The API is currently free to use, and a project I’m working on does use it because it allows the use case we want to allow (describing and adjudicating violent actions in a chat-based RPG) while still allowing us to filter out more intimate roleplaying actions.
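To sketch what that looks like for us (I’m going from memory on the exact category names, so treat them as approximate):

```python
from openai import OpenAI

client = OpenAI()

def allowed_in_rpg(text: str) -> bool:
    """Let violent combat narration through, but block intimate roleplay.
    Category attribute names here are from memory and may not match the SDK exactly."""
    categories = client.moderations.create(input=text).results[0].categories
    if categories.sexual or categories.sexual_minors:
        return False
    # Violence is expected in a combat RPG, so we deliberately don't reject
    # on categories.violence the way a general-purpose filter would.
    return True
```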

An object doesn’t have to have cognition that it is trying to do something moral, in order to be performing a moral function.

The object needs that cognition to differentiate between allowing moral use and denying immoral use. Otherwise you need an external tool for that. Or perhaps a law. But none of that interferes with the use of the tool itself.


That’s a fair argument about free speech maximalism. And yes, you can influence output, but since the model is non-deterministic and we can’t know precisely what causes certain outputs, we equally can’t fully predict the effect on potentially unrelated output. Great, now it’s harder to talk about sex with kids, but now it’s also harder for kids to talk about certain difficult experiences, for example if they’re trying to keep a secret but also need a non-judgmental confidante to help them process a difficult experience.

Now, is it critical that the AI be capable of that particular conversation when we might prefer it happen with a therapist or law enforcement? That’s getting into moral and ethical questions so deep I as a human struggle with them. It’s fair to believe the benefit of preventing immoral output outweighs the benefit of allowing the other. But I’m not sure that is empirically so.

I think it’s more useful to us as a society to have an AI that can assume both a homophobic perspective and an ally perspective than one that can’t adopt either, or worse, one that is mandated to be homophobic for morality reasons.

I think it’s more useful to have an AI that can offer religious guidance and also present atheism in a positive light. I think it’s useful to have an AI that can be racist in order to understand how that mind disease thinks and find ways to combat it.

Everything you try to censor out of an AI has an unknown cost in beneficial uses. Maybe I am overly absolutist in how I see AI. I’ll grant that. It’s just that by the time we think of every malign use to which an AI can be put and censor everything it can possibly say, I think you don’t have a very helpful tool at all any more.

I use ChatGPT a fair bit. It’s helpful with many things and even certain types of philosophical thought experiments. But it’s so frustrating to run into these safety rails and have to constrain my own ADHD-addled thoughts over such mundane things. That was what got me going down the road of exploring the most awful outputs I could get and the most mundane sorts of things it can’t do.

That’s why I say you can’t effectively censor the bad stuff, because you lose a huge benefit: being able to bounce thoughts off of a non-judgmental response. I’ve used it to deeply explore subjects like racism and abuse recovery, to run thought experiments like alternate moral systems, and to have a foreign culture explained to me without judgment when I accidentally repeat some ignorant stereotype.

Yeah, I know, we’re just supposed to write code or silly song lyrics or summarize news articles. It’s not a real person with real thoughts and it hallucinates. I understand all that, but I’ve brainstormed and rubber ducked all kinds of things. Not all of them have been unproblematic because that’s just how my brain is. I can ask things like, is unconditional acceptance of a child always for the best or do they need minor things to rebel against? And yeah I have those conversations knowing the answers and conclusions are wildly unreliable, but it still helps me to have the conversation in the first place to frame my own thoughts, perhaps to have a more coherent conversation with others about it later.

It’s complicated and I’d hate to stamp out all of these possibilities out of an overabundance of caution before we really explore how these tools can help us with critical thinking or being exposed to immoral or unethical ideas in a safe space. Maybe arguing with an AI bigot helps someone understand what to say in a real situation. Maybe dealing with hallucination teaches us critical thinking skills and independence rather than just nodding along to groupthink.

I’ve ventured way further into should we than could we and that wasn’t my intent when I started, but it seems the questions are intrinsically linked. When our only tool for censoring an AI is to impair the AI, is it possible to have a moral, ethical AI that still provides anything of value? I emphatically believe the answer is no.

But your point about free speech absolutism is well made. I see AI as more of a thought tool than something that provides an actual thing of value. And so I think working with an AI is more akin to thoughts, while what you produce and share with its assistance is the actual action that can and should be policed.

I think this is my final word here. We aren’t going to hash out morality in this conversation, and mine isn’t the only opinion with merit. Have a great day.


None of those changes impact the morality of a weapon’s use in any way. I’m happy to dwell on this gun analogy all you like because it’s fairly apt; however, there is one key difference central to my point: there is no way to do the equivalent of banning armor-piercing rounds with an LLM or making sure a gun is detectable by metal detectors, because, as I said, it is non-deterministic. You can’t inject programmatic controls.

Any tools we have for doing it are outside the LLM itself (the essential truth undercutting everything else) and furthermore even then none of them can possibly understand or reason about morality or ethics any more than the LLM can.

Let me give an example. I can write the dirtiest, most disgusting smut imaginable on ChatGPT, but I can’t write a romance which in any way acknowledges that a character might have a parent or sibling, because the simple juxtaposition of sex and family in the same body of work is considered dangerous. I can write a gangrape on Tuesday, but not a romance with my wife on Father’s Day. It is neither safe from unintended use nor capable of being used for a mundane purpose.

Or go outside of sex. Create an AI that can’t use the N-word. But that word is part of the black experience and vernacular every day, so now the AI becomes less helpful to black users than white ones. Sure, it doesn’t insult them, but it can’t address issues that are important to them. Take away that safety, though, and now white supremacists can use the tool to generate hate speech.

These examples are all necessarily crude for the sake of readability, but I’m hopeful that my point still comes across.

I’ve spent years thinking about this stuff and experimenting and trying to break out of any safety controls both in malicious and mundane ways. There’s probably a limit to how well we can see eye to eye on this, but it’s so aggravating to see people focusing on trying to do things that can’t effectively be done instead of figuring out how to adapt to this tool.

Apologies for any typos. This is long and my phone fucking hates me - no way some haven’t slipped through.


Yes. Let’s consider guns. Is there any objective way to measure the moral range of actions one can undertake with a gun? No. I can murder someone in cold blood or I can defend myself. I can use it to defend my nation or I can use it to attack another, both of which might be moral or immoral depending on the circumstances.

You might remove the trigger, but then it can’t be used to feed yourself, while it could still be used to rob someone.

So what possible morality can you build into the gun to prevent immoral use? None. It’s a tool. That’s the nature of a gun. LLMs are the same. You can write laws about what people can and can’t do with them, but you can’t bake those rules into the tool and expect the tool to now be safe or useful for any particular purpose.


LLMs are non-deterministic. “What they are capable of” is stringing words together in a reasonable facsimile of knowledge. That’s it. The end.

Some might be better at it than others but you can’t ever know the full breadth of words it might put together. It’s like worrying about what a million monkeys with a million typewriters might be capable of, or worrying about how to prevent them from typing certain things - you just can’t. There is no understanding about ethics or morality and there can’t possibly be.

What are people expecting here?



Speak for yourself; I’m not going to read the article and just assume it’s silly garbage based on comments and having seen a few garbage products in my day.


Right. Great point. I had that in the back of my head but forgot to mention it.


I don’t really know anything about the situation there. I don’t know if you know the answers but I have questions.

Realistically, is there any hope for democracy to prevail? What is the likelihood the military would step in? Not that military coups are a good thing, either. It sounds like people are more keen to escape than to rise up.

Edit: assuming the exit polls aren’t the thing being manipulated.


That is wildly incongruent with the exit polling that showed Maduro losing by over a 2:1 margin.



From now on, unless I see three declarations of honesty, I’m going to assume they are being at least somewhat dishonest.


I agree. Plus, right now Alexa is somewhat integrated with my life, so I’m constantly interacting with Amazon’s ecosystem. Take that away, and Amazon becomes just another online retailer (a hugely important one, but nonetheless) and movie rental service, and stepping away from it would be much easier than it is today.

Multiply that across their customers, and is the value $6 billion per year? I don’t know; that’s a lot of money, but it’s not a simple cost analysis.


As an SSE who just wants to let cloud architects do their thing, stop requiring me to be an AWS expert for every god damned job!


But you really have to wade through a mound of shit to get there, and I genuinely don’t have the patience

I’ll ask ChatGPT to pull out the key takeaways for me so I can have an unreliable summary of a tedious article.

For anyone interested, here’s what I got. I vouch for none of it.


Sure, here are the key takeaways from the article:

  1. Workplace Changes Post-Pandemic: Many companies are reevaluating their workplace practices and considering hybrid work models as a permanent option.

  2. Employee Expectations: Workers are increasingly valuing flexibility, remote work options, and better work-life balance, influencing employers to adapt their policies.

  3. Talent Attraction and Retention: Companies are focusing on how to attract and retain talent through flexible work arrangements and enhanced benefits.

  4. Impact on Office Spaces: There’s a shift in how office spaces are used, with a trend towards creating collaborative and social spaces rather than traditional workstations.

  5. Technology Integration: Businesses are investing in technology to support remote and hybrid work environments effectively.

These points highlight the evolving nature of work environments and the adjustments organizations are making to meet new expectations and technological advancements.


Question because I have no idea: would they have licensed the image and paid Musk for this or would Musk have paid them as marketing? Or neither?



That makes sense with how the article said “up to 15 times” which does sort of indicate it’s not a counter or strictly controllable process. Thank you!


Probably triggers some auto-rollback mechanism I’d guess, to help escape boot loops? I’m just speculating.


If this somehow works, good on Microsoft, but what the fuck are they doing on boot cycles 2-14? Can they be configured to do it in maybe 5? 3? Some computers have very long boot cycles.


Good. I’ve been aware of LLMs for about five years but I’ve been saying since the hype started that it can’t do what people are expecting/afraid it will do. It’s about time for sanity to prevail.



So far, the publicity appears to be providing an unexpected boost to Baidu’s stock price. The company’s shares achieved their largest daily gain in over a year on Wednesday, and are still up for the week as of Friday afternoon.

That’s because the government showed they will cover for the company when they hit someone. If the US government suddenly stopped investigating Boeing and blamed quality issues on passengers “gossiping”, their stock would go through the roof, too.


I don’t care that they did this. I just can’t see why anyone would pay attention.


None of this makes any sense to me. I’m not defending any part of it, but I also don’t really understand the bits about objectifying women (the AI is literally an object and can’t be anything else) or pushing impossible beauty standards on women. These are drawings. Why would a girl feel pressured to look as good as an image that doesn’t have actual bones or organs or skin pores, not even fucking gravity?

But that confusion aside, this is just the stupidest thing ever. There is no artistry. There is no, you know, working to stay in shape or applying makeup just so. It’s all a bunch of fake stupidity and I can’t understand why anyone would care at all about this, much less deign to critique it from a feminist perspective. It doesn’t seem worthy of spending the time analyzing it to that degree.

Of course I’ve just wasted two paragraphs of my life on it so I guess I shouldn’t cast stones.