A regular adult human has around 600 trillion synapses (connections between neurons), so just recording an index of those edges takes about 4.3 PB (yep, petabytes), and that’s not even counting what they do, just the index (because a 32-bit int isn’t enough, each synapse needs a 64-bit one). And, in case you didn’t know, toddlers have an even higher connection count for faster learning, until the brain decides “oh, these connections aren’t really needed,” disconnects them, and saves the energy. Simulating a self-aware artificial creature is really not within our reach yet, because most animals we know to be self-aware have high synapse counts.
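
A quick back-of-the-envelope check of that figure (a minimal sketch in Python; the one-64-bit-index-per-synapse layout is my assumption, not a real storage format):

```python
# Storage needed just to index every synapse as an edge.
# Assumption: one 64-bit integer per synapse, since a 32-bit int
# tops out at ~4.3 billion, far below 600 trillion.
SYNAPSES = 600e12        # ~600 trillion synapses in an adult brain
BYTES_PER_INDEX = 8      # 64-bit integer

total_bytes = SYNAPSES * BYTES_PER_INDEX
print(f"{total_bytes / 1e15:.1f} PB")    # 4.8 PB (decimal petabytes)
print(f"{total_bytes / 2**50:.1f} PiB")  # 4.3 PiB (binary pebibytes)
```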

And yes, we are attempting this for various reasons:

https://www.humanbrainproject.eu/en/brain-simulation/

Incidentally, for anyone interested in this topic in a fictional setting, Peter F. Hamilton’s Commonwealth Saga trilogy has people uploading to a communal sentient “computer” called the ANA (Advanced Neural Activity).

If you’re into the space opera sci-fi genre, there is no better series of books (including The Void Trilogy, excluding Misspent Youth, both set in the same universe), IMHO.

foo

I knew there was a reason I saved all those IRC chat logs!

The short story MMAcevedo, written in the form of a wiki entry, seems apropos to this conversation, especially the fictional upload subject’s opinions on it:

Acevedo indicated that being uploaded had been the greatest mistake of his life

I hadn’t bookmarked a story in a LONG time, especially one I’d read through from start to finish.

The Baldness
creator

This is great. Thanks.

This sounds really fuckin cool.

It probably wouldn’t be able to replicate your beliefs/morals/mannerisms consistently enough for you not to question what it says, and/or to actually predict your complete internal thought processes.

That being said, if you were to train the AI to be a kind of automated therapist as well as train it to speak like you, it could be useful for getting some thoughts unstuck when you’re in a rut. Not sure if that’s something that’s possible yet or not.

A real therapist might be better though.

@Lumidaub@feddit.de

The AI would still need to understand feelings, at least in principle, in order to interpret your actions which are based on feelings. Even “I like having sex with her” is a feeling. A purely rational mind would probably reprimand you for using contraception because what is the point of sex if not making offspring?

The Baldness
creator

I would think that “I like having sex with her” would be objectively quantifiable based on how many times it was mentioned versus other mentions of the person in question.
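
If you wanted to try that literally, the counting itself is trivial (a toy sketch; the entries, the name, and the positive-marker list are all made up for illustration):

```python
import re

entries = [
    "Had a great time with Alex today.",
    "Argued with Alex about money again.",
    "Alex and I went hiking, it was wonderful.",
]
positive_markers = ("great", "wonderful", "love", "like")

# Mentions of the person overall vs. mentions in a positive context.
total = sum(len(re.findall(r"\bAlex\b", e)) for e in entries)
positive = sum(
    1 for e in entries
    if re.search(r"\bAlex\b", e) and any(m in e.lower() for m in positive_markers)
)
print(f"{positive}/{total} mentions of Alex are positive")  # 2/3
```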

@Lumidaub@feddit.de

At that point you could search your diary entries yourself to analyse the way you talk about her. Assuming of course you’re honest with your diary and yourself and not glossing over things you don’t want to realise - in which case do you really need an AI to tell you?

The Baldness
creator

Those were just generic examples. More specifically, I tend to write in my journal when I have a problem I’m trying to work out, not when things are moving along smoothly. So I would expect the chatbot to be heavily biased that way. It would still be good for recognizing patterns, assigning them a weight, and giving me responses based on that data. At least that’s my understanding of how a GPT works.

@Lumidaub@feddit.de

Yeh, I get that it’s just an example. But wouldn’t it be like that for anything you could ask it? It can only work with what you’re giving it and that data could be heavily influenced by you not wanting to see something. Or exaggerating. Or forgetting. A human looking at your diaries might be able to put themselves in your situation and understand, based on their own experience with the human condition, how you were probably feeling in a situation the diary entry is describing and interpret the entry accordingly, maybe even especially when considering other, seemingly conflicting entries. But they’re using “outside” information which an AI doesn’t have.

Don’t get me wrong, I’m not saying what you’re imagining is completely impossible - I’m trying to imagine how it might work and why it might not. Maybe one way to develop such an AI would be to feed it diaries of historical people whose entries we can interpret with decent confidence in hindsight (surely those must exist?). Ask the AI to create a characterisation of the person and see how well it matches the views of human historians.

I am so very much not an expert on AI, and I hate most of what has come of the recent surge. And remember that we’re not talking about actual intelligence; these are just very advanced text-parsing and phrase-correlating machines. But it does of course seem tempting to ask a machine with no ulterior motives to just fucking tell me the harsh truth, so now I’m thinking about it too.

I think there’s an (understandable) urge from the technically minded to strive for rationality not only above all, but to the exclusion of all else. There is nothing objectively better about strict objectivity without relying on circular logic (or, indeed, without arguing that subjective happiness is perfectible through objectivity).

I am by no means saying that you should not pursue your desire, but I would like to suggest that removing a fundamental human facet like emotions isn’t necessarily the utopian outlook you might think it is.

Mate, maybe you should just go to a therapist. That’s their job; you don’t need an AI for this.

Pretty much. This is far beyond what an LLM can do as well.

It might tell me that

IMHO an AI won’t be able to fix or cure all those feelings. You should see a therapist for this.

“I like having sex with her” would be objectively quantifiable

Again, I don’t think feelings are quantifiable; this is the main problem with AI.

Chat GPT can already be a pretty good tool for self-reflection. The way its model works, it tends to reflect you more than anything else, so it can be used as a reasonably effective “rubber duck” that can actually talk back. I wouldn’t recommend it as a general therapeutic tool though, it’s extremely difficult to get it to take initiative so the entire process has to be driven by you and your own motivation.

Also… Have you ever watched Black Mirror? This is pretty much the episode Be Right Back. It doesn’t end well.

It doesn’t end well.

Certainly true for the majority of Black Mirror episodes 😅

And the show is just phenomenal. I can’t think of any other show in recent years (off the top of my head) where I’m in near constant awe of the writers, apart from Bluey. Watching either, my wife and I will often turn to each other at the end of an episode and go: “It’s just so fucking good”.

I’ve found that learning about and practicing DBT has offered me more of a skill to do this myself. I know what you mean about wishing you could see outside the frame of your emotions and past. In DBT, we have something called the “emotion mind” and the “reasonable mind.” But we need both in order to make decisions. Rationality is great, but emotion provides direction, desire, goals, and a “why” for everything we do. The idea is that when you use emotion and reason together, you can use your “wise mind” which can help you see outside your experiences and gain perspective in new areas. I think I know what you mean because I also crave further neutral 3rd party understanding on my past too, and use ChatGPT a lot for that myself. Thought I would just throw in a couple more cents if you hadn’t heard of the concept. :)


jecxjo

Unfortunately this setup will only get you a very rudimentary match to your writing style, copying only from text you’ve already written. New subjects or topics you didn’t feed it won’t show up. What you’d get is a machine that would be a caricature of you. A mimic.

It’s not until the AI can actually identify the topics you prompt, and make decisions based on your views and how they relate to the topic, that you’ll have an interesting copy of yourself. For example, if you asked it for something new you should cook today, PrivateGPT would only list things you’d already stated you liked. It wouldn’t be able to understand the style of food and the flavors, and then make a guess as to something else that fits that same taste.

ivanafterall

Yeah, so the AI would STILL be very favorable about having sex with X, for example, because it’s trained on your writing/speaking/whatever.

“What do I feel about this?”

“Well, an average of what you’ve always felt about it, roughly…”

jecxjo

Well, sort of. If you never talked about dating, for instance, and you then started talking to the AI about dating, it may not put two and two together and realize that dating relates to sex. It wouldn’t be able to infer anything about the topic, as it only knows what the statistically most likely next word is.
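
“Statistically most likely next word” is easy to see in miniature (a toy bigram counter; the corpus is made up, and real LLMs use neural networks over vastly more data, but the objective is the same):

```python
from collections import Counter, defaultdict

# Build a bigram "model": for each word, count which words follow it.
corpus = "i like hiking . i like hiking . i like cooking".split()

nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

# "Prediction" is just picking the most frequent next word.
print(nexts["like"].most_common(1))  # [('hiking', 2)]
```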

That’s what I feel like most people don’t get. Even uploading years and years of your own text will only match your writing style and the very specific things you’ve said about specific topics. That’s why the writers’ strike is kind of dumb: this form of AI won’t invent new stories, just rehash old ones.

…oh…now I see why they are on strike.

…oh…now I see why they are on strike.

😆

It’s a very interesting thought, but it will always struggle to account for variables you can’t see.

It’s always going to be designed top-down to approximate your own bottom-up development as a human. I don’t doubt AI as a feasible possibility, but I don’t think we’re headed for digital clones. They’re always going to have some amount of the creator’s ghost or assumptions in the machine.

I can see this as a movie.

The Baldness
creator

You probably have. Someone else mentioned an episode of Black Mirror.
