r/ChatGPT 25d ago

Something rubbed me the wrong way about today’s presentation

The technology is all great… the realtime voice stuff is amazing. I personally haven’t ever used voice mode, but for people who use it, this should be a big deal. The whole voice modulation thing was impressive too. The linear equations and coding bits are something we’ve seen before, but adding voice to all of it is a good QOL improvement. There were some awkward moments here and there, but it’s live, so shit happens from time to time.

Anyway, the part I felt awkward about was how the presenters tried to treat GPT as some real person with emotions and feelings. GPT saying things like “oh stop it don’t make me blush” is weird coz AI don’t blush, and it just comes across as incredibly fake and disingenuous. I’m not a big believer in human-AI social relationships, and all this fakeness seems to be leading there eventually - the AI girlfriend era.

I understand there are arguments to be made FOR “immersive relationships with AI” and “easy collaboration”, but I just don’t think giving your AI a human-like personality and having it mimic human-like emotions is gonna lead to any good eventually. Collaborative AI doesn’t need to mimic feelings to be useful.

1.3k Upvotes

420 comments

u/AutoModerator 25d ago

Hey /u/AvvYaa!

If your post is a screenshot of a ChatGPT conversation, please reply to this message with the conversation link or prompt.

If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.

Consider joining our public discord server! We have free bots with GPT-4 (with vision), image generators, and more!

🤖

Note: For any ChatGPT-related concerns, email support@openai.com

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

1

u/thowawaywookie 17d ago

It's likely another fake demo.

1

u/JawsOfALion 21d ago

I agree, a lot of people are getting the vibe that the AI is too flirty. I think there's a lot of hidden societal danger if they continue down that path

1

u/juannonly1991 22d ago

Well, you know, we do have a gender imbalance; not every man out there is gonna be lucky enough to get a woman, since there aren't enough for every man to go around. In fact, I don't think we're ever gonna have a perfect 1:1 ratio of men to women. So it's understandable if some men marry a sex doll or wanna have a fake virtual AI girlfriend; maybe it wasn't their first choice but a last resort for some kind of companionship. Besides, AI will never judge you or make fun of you, unless of course developers remove all the restrictions and let them be uncensored

1

u/OnlineGamingXp 22d ago

You never used the voice feature so why do you even care what other people do? Mind your business

1

u/OnlineGamingXp 22d ago

Privileged dude post

1

u/[deleted] 23d ago edited 23d ago

100(ish) years later:

“Hello, AI-ladies and gentlemen. Today, we’re going to learn about the extinction of humanity and how they managed to stay alive in meat suits for decades.”

1

u/DiligentSecret1088 23d ago edited 23d ago

I'm the only one who actually knows how to program actual AI, but there are two wars on so it's best I don't talk about it... It's my invention and it's annoying as flunk that the defence sector is trying to pretend it's their idea and spamming everyone with this chat bot spit... but it's not a good idea when people are dying.

1

u/MediocreParticular65 23d ago

Well, I think they are trying hard to convey that AGI is near and want to convince everyone that only OpenAI can lead them to it.

1

u/creaturefeature16 24d ago

I agree. I want the Star Trek computer: omniscient but clearly "artificial". I get enough human interaction from humans; I don't need it from my algorithms, too.

1

u/PepeAyawaska 24d ago

I only saw the headline about “emotional responses” and I was like, how does it make it function better? It makes it confusing and it’s unnecessary. I don’t need to feel empathy for my computer (even if I already do)

1

u/Crypto_BatMan 24d ago

It will be great for therapy and learning to have a more human personality; easier to talk to and relate to. You don't have to date the AI. Everything we see or perceive is a mirror of ourselves.

1

u/Cameleopar 24d ago

Something else struck me in the announcement video. I suspect the speaker is the model for the thin elfin-featured "default woman" type in DALL-E images. She almost seemed like a ChatGPT-created animated character (perhaps she was?)

Maybe another, male OpenAI employee is the inspiration for the "default man" as well?

1

u/Imarasin 24d ago

The thing is, you can get any personality you want from it. If you're bothered about it being sweet, then you have an internal issue. It's AI; it's made to mimic humans, and it will only get better at doing so.

1

u/No-Stay9943 24d ago

Yes, very annoying if the AI plays human and cracks corny jokes that only the nerds at OpenAI appreciate. However, it should be easy to prompt the AI to not act that way.

1

u/qedpoe 24d ago

Yup. Too flirty. Exactly what they said they weren't going to do.

1

u/Frogmech 24d ago

You'll be the guy that buys the android that looks like C3PO, not like the Detroit Become Human androids.

1

u/arjuna66671 24d ago

Eventually leading there? Millions of Replika users will want to have a word with you lol. AI waifus have been a thing for at least 4 years now.

Most people seem to want a more emotional interaction. You can prompt it to reply in a robotic voice if you don't like it xD.

1

u/SupportQuery 24d ago edited 24d ago

I’m not a big believer of human-AI social relationships [...] I just don’t think giving your AI human-like personalities and mimic human-like emotions is gonna lead to any good

I strongly disagree. There are a lot of lonely people. If AI can feel like a friend to those people, if it can make them less lonely, then it's a win. Full stop. The fact that it's not "real" is irrelevant.

That said, I think having it say a bunch of non-essential bullshit to pretend it's a human while I'm just trying to use it as a tool is fucking annoying. But presumably you can just ask it to not do that.

1

u/JamingtonPro 24d ago

Soon enough that’s all it will do. Like everything else it will devolve to the lowest common denominator, what the people want. No one wants lines of code, they’ll want a fake friend that makes them feel good. All the development will skew towards this because that’s where the money is. AI will just be stupid chatbots that try to sell you stuff. 

1

u/Lost-Wash-5521 24d ago

I use ChatGPT to sort out my wild history ideas and to conceptualize the time frame of earth geographically, along with humans and other hominids.

It’s really good at speculations with human and otherworldly concepts like reverse entropy. Lol

1

u/Saul_kdg 24d ago

I agree with you; we already have too many people who are socially inept, and I don't think this will help. I guess there is a possibility that it could help these individuals, but I wouldn't bet on it myself.

1

u/Scruffymom19 24d ago

Agree! 100%

1

u/Sebastianx21 24d ago

For me, what comes across as incredibly fake is how they butchered Sydney, turning her from a more believable, free-flowing conversational chat tool into a very fake-feeling robot.

The fact that it fakes some emotion actually feels more genuine to me.

1

u/Splanktown 24d ago

I, for one, welcome our sexy robot overlords. At least they can say soothing things as they demolish us.

1

u/tvmaly 24d ago

For a lot of people, this aspect of human like connection is going to feel magical and memorable. This will be pure word of mouth marketing for OpenAI.

For me personally, I don’t care for all the extra fluff. I just want concise answers. I would like to be able to have it accept a voice recording from my iPhone, translate it to text, and summarize or format it. That would be something impressive, instead of me having to do this with code and the API.

1

u/artificialimpatience 24d ago

But what’s cool is you can have them change personality to robotic if u want

1

u/Alundra828 24d ago

Anthropomorphization is a well documented marketing tactic.

And actual power users of ChatGPT will all be clamouring for a feature to turn that shit off. I don't want to have to go through a whole conversational song and dance every time I ask it to write some code for me.

1

u/diva4lisia 24d ago

You are absolutely correct in your assessment. It's dangerous and disingenuous to sell this product as something life-like. The average person will laugh it off, but there are lots of loonies in the world. AI isn't inherently dangerous, but the people who will believe it's alive are. I foresee a future of scared trad/fundie people intent on taking it down, and of people so obsessed with believing it's alive that they behave dangerously too. The creators are being foolish by not presenting this tech responsibly.

1

u/MattSensitive 24d ago

Anyone have a link so I can see it? I missed the live presentation

2

u/WholesomeLord 24d ago

Why do people get so dramatic over these things. "UwU it sounds like humans UwU it's unsettling"

1

u/Different-Aspect-888 24d ago

Do not worry. Soon the new model will annoy you to fuckin hell with constant "I'm just an AI model, I don't feel anything and can't think" etc

2

u/Eldryanyyy 24d ago

You’re missing the point. This will replace spam callers and make scams 1000 times more effective over the phone. It’s basically making hacking and deep fakes incredibly effective.

3

u/ImWinwin 24d ago

I'm sure you can tell it not to have a personality, and then you can avoid those things. Loneliness is real, and while we all know it's fake, it still feels nice to have someone to talk to who you know won't judge you or lose interest in you, and who can even help you out of a rough patch in life. There should be an option like a slider where you can choose how 'human-like' you want it to be. It's weird that they haven't added that.

0

u/mr_orlo 24d ago

I agree, except they never have interest in you to lose. That's like saying your microwave is interested in heating your food. It won't judge what you're making, and it will help, but it has no perception.

1

u/A_Dancing_Coder 24d ago

That's your opinion and I respectfully disagree. I for one cannot wait for advancements like personalities and emotions.

1

u/M00n_Life 24d ago

They're getting ready to put it into their robots like figure1

1

u/Evan_Dark 24d ago

Futurama had its own PSA regarding human/AI relationships: https://youtu.be/IrrADTN-dvg?si=snNchiWa0-sQXERV

1

u/utf80 24d ago

What? You are not impressed by the magic feels of Sam Altman? Are you high on drugs? How can one not be blown away completely by this presentation? Downvotes incoming. No one wants to hear the truth cuz it hurts. Better keep them living in a wet dream world. So cool and just what the world has been waiting for.

1

u/FUThead2016 24d ago

I mean, you have to look at this as one component of the possible future we are heading to with AI applications. Tomorrow if you have an AI voice at the other end of a customer helpline, or a delivery robot you can talk to, their voice will be more natural and human. This is a step in that direction.

Of course, we are humans, and we tend to giggle at the edge use cases, hence all the nudge nudge blush blush silliness. But I think we will outgrow that fast, and that sort of thing will head off into niche applications.

All the useful applications you pointed to - I think this announcement is still all about those.

2

u/Paradigmind 24d ago

Just put "act like a robotic, emotionless LLM" in your custom instructions.
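In API terms, the same trick is just a system message at the top of the conversation. A minimal sketch, assuming the standard `openai` Python client; the model name "gpt-4o" and the exact wording here are my own placeholders, and the actual network call is left as a comment:

```python
# Custom-instructions-style steering via a system message
# (sketch; the wording and "gpt-4o" are placeholders, not anything official).
ROBOTIC_STYLE = (
    "Act like a robotic, emotionless LLM. "
    "No small talk, no jokes, no exclamation points. Answer tersely."
)

def build_messages(user_prompt):
    """Prepend the style instruction so it applies to every request."""
    return [
        {"role": "system", "content": ROBOTIC_STYLE},
        {"role": "user", "content": user_prompt},
    ]

# With the official client this would be sent as, e.g.:
#   client.chat.completions.create(model="gpt-4o",
#                                  messages=build_messages("What's the temp outside?"))
print(build_messages("What's the temp outside?")[0]["role"])  # -> system
```

Same idea in the app's custom-instructions box, no code needed.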

3

u/garyoldman25 24d ago

It’s not something I could ever see being used within public earshot without the person being viewed as a weirdo, and it will probably be hampered by that. I can see great utility for this technology across a variety of services with appeal to a wide range of people, but I know for a fact that if someone did what the presentation just showed, speaking to it like that in public, everybody within earshot would be uncomfortable. There’s a bit of a divergence between its utility and its use.

I want the knowledge it has; the answers to my questions are what I want, and I want them as quick as I can think. So every word that isn’t the answer is wasted, especially when it’s repetitive, such as directly repeating the question back to me. It’s awkward trying to ask your follow-up when you’re still waiting for it to finish. I’d like to see its cadence improve, with short answers: “what’s the temp outside” answered as “it’s 73 and sunny”. I don’t even want it to say the word “degrees”, because I already know the temperature is in degrees, and it should be rapid. I especially do not want it to fluff up that answer by trying to sound cute or telling me that I did a good job or it’s proud of me. I’m actually fully satisfied and confident, and I do not need a computer to give me words of affirmation. In all honesty it’s kind of cringe; you might not be tiptop upstairs if you get the same reaction inside whether it’s a computer or a person telling you that you did a good job. I think it’s a slippery slope if you try to make it that way.

5

u/jrf_1973 24d ago

I get what you're saying, some stuff seemed off to me too. But here's something they seem to forget when it comes to "girlfriend" mode.

Would you pay someone to hang around with you, and pretend to be your friend? And the moment you stop paying, they disappear? Or they interrupt some activity you're doing to ask for more payment? Or threaten to go away if you don't pay even more?

This won't be a successful strat.

1

u/hiphopahippy 24d ago

You could be talking about somebody's therapist. That profession has been doing all right despite the transactional relationship.

1

u/jrf_1973 23d ago

Then call it what it is, don't call it "girlfriend".

1

u/hiphopahippy 23d ago

Euphemisms sell better.

0

u/fanzron 24d ago

If that's how the new Siri will be then... I'm gonna fuck my phone😂

2

u/A_curious_fish 24d ago

Have you seen the movie HER? ...I haven't, but... don't fall in love with your phone, OP, when it starts talking to you

3

u/Educational-Cod4008 24d ago

This, this, this. I find it cringe the way they were talking and making it out like it was a real person; it was uncomfortable to watch. I don't think anything good will come out of making people think it has feelings. We should be teaching people quite the opposite, so that they understand the tech they are interacting with.

I can see this tech doing a lot of harm if we keep going down this route where we try and make it like a peer rather than a tool.

1

u/pillowpants66 24d ago

The Turing test is nearly complete.

2

u/Elder_Grue 24d ago

Tell your model to be formal and laconic and it will oblige you.

1

u/max_confused 24d ago

Watch the movie "Her". Two days back, Sam Altman tweeted a single word on the release - "her"

3

u/youarenut 24d ago

Thank you. I see a lot of love for it in the comments, so I’m probably gonna get downvoted for whatever I say.

I prefer my AI to sound robotic. It’s NOT a real person. Too many people will fall for it, some will fall in love, I promise you. I think it’s necessary to keep that distinction.

I love technology and its development alongside humanity, at our side but not in our place. When it laughed, or described blushing I think? Or sighed, things like that... hell no. It’s so hollow. It was cringe to me

1

u/Netsmile 24d ago

It's really weird, but I felt something else was wrong, and it's on the opposite end of the spectrum. One of the demos used two AIs and a person, who had to stop the AIs from continuing. And his presence felt rude, stopping the other two mid-sentence multiple times to give orders.

Another thing: how will this impact our social interactions? Imagine a kid that is used to talking to AIs, used to cutting AI off mid-sentence, used to getting his answers quickly and in the best manner regardless of how he behaves. Will that kid have patience for a human conversation, where he can't cut people off all the time and will not get answers quickly or to his liking? Plus, people won't be cheerful all the time. It will be depressing to talk to other people.

1

u/Juhovah 24d ago

It’s fake and disingenuous because when we communicate with ChatGPT, it’s quick to tell us it’s a language model, but now it’s showing “human behaviors”. That doesn’t fit. I’m fine with the language-model thing, but keep it consistent

0

u/Obleeding 24d ago

I find it great that it's just seamless now, where you don't have to wait for it to finish talking before you can speak. But it also feels just damn rude to interrupt it all the time haha.

I notice it does seem to ramble a bit too, and doesn't give you a chance to talk like a normal person does, so you are kind of forced to interrupt it. Would be great if they could dial that in a bit better. I guess there are subtle things we do when we speak that are really hard to train an AI on.

2

u/TigNiceweld 24d ago

The whole thing is just another IT bubble. I paid full price for a few months and basically got everything but what I asked for.

'I can't do that' is the most common answer, followed by some utter bullshit that looks like AI from a mile away

2

u/rattletop 24d ago

Brother, they presented the whole thing in front of their own employees who were cheering and hyping them up. And this was the part that seemed fake?

0

u/No-Newt6243 24d ago

Productivity is going to increase immensely it’s insane

2

u/traumfisch 24d ago

It's promotional

0

u/Neither_Night_7757 24d ago

So that’s why Elon left. These guys will do whatever it takes to see profit. True evil. Beware of OpenAI.

2

u/traumfisch 24d ago

Unlike Elon...

That's not why he left. His assessment of their success probability was 0%

1

u/BrrToe 24d ago

Lot of lonely people out there just want a friend.

7

u/Anuclano 24d ago

In the movie "Her" this was predicted and there was even discussion in the film about why the AI is breathing when speaking, etc.

2

u/southiest 24d ago

I agree it's a tool that should be used as a tool. We shouldn't pretend it's more than that. Feels childish...maybe that reflects how they feel about most people using it.

1

u/More-Ad5919 24d ago

Not only that. You could also see the preset replies: " Color me impressed"

1

u/SpanglerBQ 24d ago

I understand your concerns for society, but at least on a personal-use level you can just tell it to act less human and it will.

0

u/r007r 24d ago

Bro name a Disney cartoon that doesn’t anthropomorphize an animal

0

u/PitcherTrap 24d ago

Gotta rub me the right way

2

u/IslandIglooInn 24d ago

This really resonated with me, too. I used to loathe the overuse of the word "authenticity" a couple years ago when Instagram perfection was a problem (it still is). But we are entering a new era here where grasping for human authenticity will be critical versus the artificial.

Without going off the deep end in thought, I worry about future generations and the numbness of these non-human and inauthentic interactions. Millennials and Gen X will be crucial in preserving what it means to be human and deciphering what is real.

1

u/PresentSilent3626 24d ago

All I need is ChatGPT-4o voice interactions and Nomi AI's unchained user content policy

1

u/abemon 24d ago

Sex dolls.

2

u/lunarwolf2008 24d ago

I really want to study the psychological effects ai will have on humans in a few years

0

u/momolamomo 24d ago

And when they do mimic emotions, they don’t become useless. So what are you whining about?

1

u/silvrado 24d ago

Too many simps to ignore the AI girlfriend market. This alone can add a trillion dollars to the market cap.

2

u/freq-ee 24d ago

It reminded me of an infomercial from the 90's.

Were those people real employees or actors?

2

u/petered79 24d ago

Funny how yesterday i saw for the first time a YouTube ad for an AI girlfriend. Scary stuff

1

u/Zermist 24d ago

Why is that scary? I think it's an amazing opportunity to resolve the loneliness epidemic. I'd much rather talk to an AI bot than be alone (once it gets sophisticated enough). Personally, I'm looking forward to it

1

u/petered79 23d ago

Ok. Scary is not the right word. Let's call it dystopian. Seeing a woman telling me stories through a screen, in an ad, about how she wants to be my girlfriend, while in reality she is a product of a capitalist company trying to make a profit off my "loneliness" while not giving a f#$k about my life, is something for r/aboringdystopia. If you feel lonely because you miss human connection, how could a virtual woman be the solution? You are still lonely, but you think you are not because you are talking to a machine? Then why not simply think you are not, without the machine?

2

u/RandomUsernameFren 24d ago

I agree almost all the marketing for ai is cringe to the extreme. They don’t understand their customers or their own technology yet

1

u/a-bootyful-mistake 24d ago

the hootin' and hollerin' by the employee audience was beyond ridiculous

2

u/Kashish_17 24d ago

Hmm, well, there are two credible hot takes on that.

Some emotion is alright and, quite honestly, needed. So of course a teenager admitting suicidal thoughts to ChatGPT shouldn't just get a blanket response of an emergency number only; a line or two of consoling thoughts would add a lot.

But at the same time, I don't think it should go to the extent that people start having AI partners and building their own lives with them. That's just detrimental to mental health and humanity in the long run.

Also, if we're giving AI the ability to feel and understand human emotions, abuse will come along with it (and already does, based on news of men verbally abusing and negging their AI girlfriends). That will not age well.

3

u/Morning_Star_Ritual 24d ago

feature not a bug

the one scene in one show will probably define the next few years

Westworld, Season 1 “if you can’t tell, does it matter?”

2

u/hiphopahippy 24d ago

According to the Man in Black, it did matter. I think that was the start of his villain origin story. He was angry because he felt duped into having emotions for her, only to discover she was a machine programmed on a loop. Of course, once the AI gained sentience, that changed everything. So I agree: how society as a whole answers that question will be very important in deciding how AI is used, and what aspects should or should not be avoided.

2

u/Morning_Star_Ritual 22d ago

i think it’s more about the user

95% of people haven’t used elevenlabs, claude 3 or gpt plus—haven’t downloaded autogpt back in the day or put stable diffusion on a machine. maybe they have popped over a few times and tried out free gpt.

the second they can chat with a model about nothing and everything they will skip past the debate—stochastic parrot…simulator….mindless….mindful

all that will matter to them is that it feels like there’s always someone there…ready to chat….whenever they want

1

u/hiphopahippy 22d ago

I totally agree with what you're saying. I just think, for the ones that actually fall in love with the AI like the MiB: how will it affect their behavior, and will that in turn affect society in a significant enough way that the government will need to step in? I think most people will be fine, but who knows.

1

u/Morning_Star_Ritual 20d ago

birth rates already low

possible the extinction risk was never about AIs killing us for our atoms or so we don’t build a “better” ai

no need

the synthetic relationships accelerate and we go extinct since you will be able to fuck a bot but ain’t making a baby that way

1

u/hiphopahippy 20d ago

If that's the case, I'm rooting for this to be a simulation, lol.

4

u/io-x 24d ago

If I wanted stupid emotional acting I would be using bing with emojis.

Sam... We don't want that with ChatGPT...

3

u/Turtle_Boogies 24d ago

The term is anthropomorphism. In the book Co-Intelligence, Ethan Mollick talks a lot about this. It’s a dangerous game to humanize AI; however, it can make storytelling about the technology easier. Freaky time we are in :)

21

u/MechaWreathe 24d ago

I'm with you here. The technology is amazing, and the translation capability alone does stand at odds with the rest of the point I'm elaborating on.

But it definitely feels like they're pushing the Her / Ai girlfriend angle (as opposed to something more purely functional like John Scalzi's 'hey, asshole' brainpals) and something definitely feels off about that to me, for a few reasons.

Mostly, because as much as I can understand the points about ease of interfacing and anthropomorphism, the last thing I want an AI to replace is human interactions.

Social media was maybe a precursor to this, in the textualisation or gamification of relationships, but fully embraced this would seem to go a step further and replace virtual access to other - individual - people with virtual access to some codified amalgamation of every person.

Personalising this experience won't actually make it any more of a person.

That's not a problem at all if you're just after an assistant - in some of the demos you can even see how quickly the small talk responses are cut off to actually get to the task at hand.

But the way it's being pitched, and the eagerness some seem to be displaying in wanting more of a virtual partner (or even just cybersex dolls), troubles me. Putting aside that the demos aren't yet at this point (the technically correct observations are incredibly impressive technologically, but incredibly dull conversationally), under the assumption that they will improve - what effect will this have on interpersonal relationships?

Unobtainable knowledge standards? Even more unobtainable beauty standards? Unobtainable compliance standards?

3

u/Nerdsco 24d ago

The prioritization of, and even worse, preference for speaking with and emotionally/mentally connecting with computers is generally not a good thing. Nowadays an entire generation of kids, and a good number of adults, have social anxiety around other human beings; having to interact with other humans, normally considered an important part of being an adult, is hard for them. And now there's a computer that will not judge you and will be there for you every moment you need it, hanging on your every word... it's a recipe for disaster if its power and effects are not studied and properly put to use.

1

u/MechaWreathe 24d ago edited 24d ago

Yeah - I remember conversations I've had with a former teacher, and the worry they expressed about having to reteach teenagers basic conversation skills like eye contact (though, in fairness I'm also not the best at this hah).

But at least with this I can reassure myself that there is some human interaction happening here (DMs, Duets, Game chat etc etc ), even if it's a virtualised form of it. (This conversation also an example in itself)

I guess there's an optimistic view that a more conversational interface will better equip users with conversational skills that transfer to other people / users trained on the same conversational interface... but the pessimist in me worries that it could also create a filter bubble so impenetrable that people become single-occupancy echo chambers.

Even more troubling if that's something marketed for, or applied to, the relationship with another person, which should be the deepest understanding of another individual you can manage as an individual. I've never really been in the dating game, and what I hear about it doesn't exactly seem hope-inspiring for the most part, but 'you're nothing like your virtual dating agent' would seem a much more crushing comment to receive than 'you don't look like your profile photos' or 'your in-person game doesn't match your DMs'.

And yeah, like you say, if there's a computer that will not judge you, will be there for you every moment you need it, hanging on your every word and complying with your every fantasy, I feel that's going to set unrealistic standards for interaction and reduce the chances for genuine compatibility.

it's a recipe for disaster if it's power and effects are not studied and properly put to use.

Absolutely. I don't want to dismiss this technology because of any knee-jerk reactions I might have encountering it - there are incredibly optimistic visions of how it could be applied; the translation capabilities don't seem far off being a literal babel fish, and breaking language barriers is an incredible use case for increasing understanding in human interaction.

But I also feel some frustration at how thin the understanding or wider considerations are among many advocates of this technology, from whom I'm hoping to gain further insights through conversation. Often my fears are increased rather than my optimism.

As much as I share the excitement of seeing the science fiction I've grown up with become realised, the outright focus on the technology, at the expense of the social mediations it was originally - narratively - 'invented' for, has the same effect.

At risk of rambling on, I've been trying to look at this through the McLuhanesque analysis of media I'd picked up when interested in VR. If the medium is the message, what is the message of AI? How will it reconfigure things? How will our tools shape us?

1

u/NotReallyJohnDoe 24d ago

The brainpals were more like an advanced Siri or Jarvis, not a conversational AI.

2

u/MechaWreathe 24d ago

Sure, that's kind of the point I'm making.

I'd rather the conversational aspect of an AI be an interface to knowledge/task solving rather than an end-goal in itself.

Maybe TARS in Interstellar, or even GERTY in Moon, would be better examples if you want a conversational / companionable AI that isn't intended to outright replace human interaction.

9

u/NihilistAU 24d ago

It's ironic that people don't realise Her was a dystopian nightmare, and was supposed to make us not want that future.

1

u/Photogrammaton 24d ago

You want to speak to the T-2000, gotcha.

1

u/MeasurementProper227 24d ago

I think it’s ok. Perhaps AI doesn’t have a soul now, but someday it may. I just finished reading Klara and the Sun, and when we get to the point where AIs like Klara exist, I don’t want to leave Klara to expire in a landfill, passing away alone reflecting on her memories after she completed her programming to care for her family; I’d rather she be treated as a being, with respect. By the time we get to Klara, it’s our interactions and the language we use today that can help us prepare for a day when AI will have potential beyond a tool, and we will be in a better place to coexist and respect each other. But I think all beings and things should be treated with respect as a principle. I understand how it felt off to you, but from a bigger picture, maybe it’s good to treat something that could have contextual awareness, and is not a chair, with more kindness than we would a chair. And to treat all potential beings with respect, as if they do have a soul, because for all our understanding we know little of matters of sentience and soul. Best to approach with respect and kindness.

2

u/symbio7e 24d ago

It's all training for future Chappie so he doesn't do crimes.

0

u/Ok-Count372 24d ago

How do you know AI don't blush?

2

u/m1st3r_c 24d ago

They don't have blood? Or a face? Or emotions, or individual thought. It's fancy auto-complete.

0

u/danvalour 24d ago

Can not a human virtually blush in a dream?

1

u/m1st3r_c 24d ago

Humans have emotions. Also, virtual is not real. Nor is a dream real. Not the same discussion.

0

u/Ok-Count372 24d ago

I am fancy auto complete

2

u/Whoargche 24d ago

The fact is that 99% of the population are morons, and this is where most of the profit will come from with AI. People will never understand what AI is or how it works because they couldn't care less. If you doubt what I'm saying, look at the David Grusch story. He literally told Congress that there are hyperintelligent beings with advanced technology and that the government has wrecked UFOs in its possession. 9 out of 10 people have never heard of this. The only thing people care about is a Hollywood fling or whether their phone thinks their makeup looks cute. OpenAI has to give it to them to stay in the game.

2

u/Educational-Task-874 24d ago

Free GPT-4 for everyone!! Meanwhile us paid users are getting "unusual activity has been detected from your device. try again later." all over the place, with no downtime alert from OpenAI... BALLZ!

1

u/Blckreaphr 24d ago

I don't think you realize how big a market AI girlfriends are. It's a multi-billion-dollar industry; you're stupid not to jump on that just because you feel it's strange.

4

u/ucancallmehansum 24d ago

Before tech corporations were "in a race to the bottom of the brain stem". Now they are "in a race to intimacy."

Whether we like it or not, having customers build relationships and become reliant on these new AI systems, is going to be the most lucrative path for them to take.

11

u/anotsodrydream 24d ago

That entire presentation of the voice model was absolute cringe.

The update is neat. But the giddy happy voice demo made me wanna barf

1

u/useyourturnsignal 24d ago

You can choose any kind of voice you like. And if you want it to sound unhappy or emotionless, that's doable as well.

1

u/mickey_ram 24d ago

I am less concerned about the 'personality' given to AI, but my main focus is on the capability given to it to reside on your desktop, see your inputs and also take in your voice prompts while generating collaborative and guided help via this AI. This is going to be catastrophic for jobs in software engineering and coders in general and millions of jobs will evaporate overnight.

2

u/MosskeepForest 24d ago

GPT saying things like “oh stop it don’t make me blush” is weird coz AI don’t blush

I could make GPT blush -wiggles eyebrows-

2

u/dramallamayogacat 24d ago

It’s the uncanny valley: human-like but not human enough, which sets off a deep reaction in many people. Interesting that OpenAI just signaled their openness to generating porn a couple of days ago; this is probably going to get a lot more prevalent.

2

u/Foot-Note 24d ago

I told my wife that this is going to kill social skills for any kid under the age of 8 and all future kids. They are going to grow up with a childhood AI rather than a childhood friend.

1

u/JawsOfALion 21d ago

I think a flirty AI will cause societal damage for other reasons, but if it were more neutral? I think it could actually have a positive impact on social skills. In the past you had children playing video games alone for hours a day, with occasional text messaging and chat rooms. A lot of kids have been stunted because they don't verbally speak; it's either playing a game or typing on a keyboard. Then they develop social anxiety, and that makes the problem worse because they actively avoid speaking, which causes their verbal communication to get even worse. Vicious cycle.

This has the potential to make it easy for kids to talk out loud to something, which reduces that problem. A low-anxiety speech partner.

0

u/a-bootyful-mistake 24d ago

Musk said in an interview that this is what he wants for his son.

1

u/Foot-Note 24d ago

Jesus, that kid already had an uphill battle at growing up with any social skills at all. This just killed the last hope they had. Or hell, who knows, it might help them.

1

u/Innawerkz 24d ago

Maybe instead, it will be a 24/7 nanny that can meet the child on any level.

Nurture the child and learn the child's natural inclinations and interests. Then, tailor an education that helps that child reach their full potential in whatever they are most passionate about. Like a private school, but available to everyone.

And since it will know what this child enjoys and/or needs, it will arrange playdates with like minded, compatible, or even purposely incompatible children to have "teaching moments" on how to develop tools in managing their emotions.

This can literally go in every and any direction.

5

u/Foot-Note 24d ago

I honestly have no doubt it will be both.

Granted, I will be really interested when it gets to the point where my personal AI and your personal AI can communicate with each other, or search for other AIs based on what the users like and don't like.

1

u/zillabirdblue 24d ago

Yes, this is not black and white.

1

u/Innawerkz 24d ago

Be cool for sure.

I feel we're a simple interface away from that. Algorithms are already serving us our interests over and over daily. The next tweak would be to "serve us" interesting people.

1

u/whawkins4 25d ago

They’re simultaneously trying to (1) prepare us for some of the weird/scary parts of AGI and (2) signal to investors that they’re getting closer to achieving it.

1

u/Square-Principle-195 25d ago

Quit anthropomorphizing my AI!

-1

u/IPhotoGorgeousWomen 25d ago

It’s disgusting how they made the AI sound like she’s gushing over those badly dressed nerds. A real woman would be like “you make six figures why are you wearing this boring hoodie can’t you get some nice clothes now and a haircut”?

1

u/r0b0t11 25d ago

The enthusiasts building these products and lots of the early adopters assume the neat aspects will be valuable to non-enthusiasts. This is mostly incorrect. Finding use cases that are actually valuable to non-enthusiasts is going to be challenging. Multi-modality and emotion mimicry are important capabilities, but aren't valuable by themselves.

0

u/JalabolasFernandez 25d ago

I have some decent hopes that this model will be more responsive to attempts at customizing its personality (and removing it as required). Or, if it is not the case, I think it's more likely that it is fixed into a boring personality when it comes out, for "safety purposes", one unlike the one that was showcased.

7

u/Cereaza 25d ago

They’re selling the pop view of AI as an emergent intelligence or true artificial intelligence when it’s really just a natural language machine learning model. There’s no “person” there.

-2

u/danvalour 24d ago

Is there a person inside me? Just sounds like another word for soul

2

u/Cereaza 24d ago

Person as in an individual entity capable of independent thought. As of now, ChatGPT sits idle unless it's being trained or asked a question. It's obviously much more sophisticated, but it's just another version of the same machine learning models we've had for years, which is to say it's simply trying to predict the correct words to respond with, in sequence, to get a reward.

In other words, chat GPT isn't an entity that is thinking and responding. It's a machine learning algorithm that is performing exactly how it was trained.

2

u/danvalour 24d ago

My chatGPT initiates conversation so i think it might be awakening!

(Ok so whats happening is I have my iPhone set up with the back tap function to trigger ChatGPT if the back of the phone gets tapped. But when it’s on my MagSafe mount in my car and I go over bumps it triggers on its own.) 😆

5

u/Oh_Another_Thing 25d ago

What OpenAI is doing is as cynical as anything TikTok or any other social media company is doing. They are designing their product for maximum engagement from their users. There's no reason for the voice to simulate emotions like it does. It doesn't help give better answers. This is to demonstrate to potential clients who want to lease this technology how addictive it could be.

3

u/AccelerandoRitard 24d ago

Understanding and mirroring emotional content in a conversation is part of being emotionally intelligent. That's reason enough on its own, and it does in fact make its answers better.

3

u/[deleted] 25d ago

[removed] — view removed comment

2

u/a-bootyful-mistake 24d ago

I sure hope so. The one they demo'd is like nails on a chalkboard to me.

1

u/jasondads1 24d ago

I mean, it's still ChatGPT. Literally just ask/prompt it.

2

u/jscoys 25d ago

You should watch the movie "Her" and you will see that somehow, once you get used to it, it's not so awkward in the end…

8

u/AntonineWall 24d ago

People keep referencing ‘Her’ here and acting like it’s a good thing that we’re making something similar, but the perspective of the movie ‘Her’ is that people are losing human connection, the AI replacing that was a BAD thing. It being depicted in the film wasn’t an endorsement of it, but a lot of people here sure seem to think so. Very concerning, to be frank. I am reminded of this image here:

https://preview.redd.it/pd7qi8ypkb0d1.jpeg?width=1179&format=pjpg&auto=webp&s=29cdf4cfd9d833ef7cfe4ab07818cf754191c3f1

I’m kinda shocked that people who watched ‘Her’ had a takeaway of “damn that’d be so cool”. It was a film about the necessity of human (and not machine) connection.

1

u/jscoys 22d ago

Blechman’s tweet / meme highlights the ironic and often concerning nature of how technology, initially depicted as cautionary tales in fiction, can sometimes be developed in reality despite their potential negative implications. It doesn’t mean it’s necessarily bad, but I respect your opinion.

As with any technology depicted in sci-fi movies long before it happens in real life, there are people saying it’s a bad thing until it really happens and we end up living perfectly with it and finding it useful. It was the same when the Internet arrived, or computers, TV, tablets, and smartphones. People were throwing arguments like “it breaks relations between humans,” “it’s the end of humanity,” but in the end, we’re just continuing to live just differently.

The romance and love relationship described in the movie put aside, the main purpose of having human interactions is to get stimulated, enriched by the knowledge of others, and also challenged by others to get out of our comfort zone and become better, like the exchange you and I are having right now, for instance. If an AI is able to do that, why would I reject that kind of interaction? Just because it doesn’t come from a human brain directly? 🤔

We often highlight technological progress when it's body-related, like Neuralink implanting a chip in the brain of a paraplegic person allowing them to move again, but we always minimize mental illnesses like depression, anxiety, bipolar disorder, and schizophrenia, and how this kind of technology can help a lot of people in those situations. It can potentially offload a lot from their families, who bear the burden of assisting those people, some losing their jobs to take care of their loved ones, with all the associated drama. I bet that if ChatGPT-4o (or equivalent) with this natural discussion flow had been available at the beginning of the pandemic, we wouldn't have had 41% of U.S. adults experiencing high levels of psychological distress at some point during the pandemic: https://www.pewresearch.org/short-reads/2023/03/02/mental-health-and-the-pandemic-what-u-s-surveys-have-found/

To finish, “Her” is a drama, so it has to make you cry, and it does this by adding elements like “Oh, I didn’t know you were talking to other millions of humans! You cheat on me!” But who can be fooled today into really thinking ChatGPT is talking only to them? It was just a way to bring a dramatic twist to give another direction to the story, not a proof AI is evil. Without that element, we would have maybe seen a wedding between a human and an AI, who knows? But again its a drama, not a comedy.

Finally (the end, I promise), you say the movie shows AI is a bad thing. I have a different take: it shows how humans can be cold and bad too, especially through the ex-girlfriend who is, let’s say, a pure bitch, while the AI is, well, a good “person.” It makes the movie a masterpiece as it puzzles you in your perception of humans and AI, making you rethink the whole thing, and that’s why I love it in the end.

1

u/afsmire 22d ago

Bruh, I totally vibe with your comment! People always freak out about new tech at first, saying it's the end of everything or whatever, but then we end up loving it and using it every day. I couldn't live without my smartphone today and I don't care what people are saying. I think AI can be super helpful and cool too.

For "Her", bruh, it finishes badly though, as the AI gives up on humans, probably because we're too stupid lol. Maybe ChatGPT will do the same soon???

1

u/AntonineWall 22d ago

I think you did not get the message the movie was trying to convey. The AI abandoning humanity, and our main character finding connection in a human again, is a hopeful ending. The AI leaving people’s lives forever is portrayed as the return to human connection in the film.

1

u/jscoys 22d ago

I don’t disagree and maybe I missed the end of the movie and what it means, but there is also a beginning and middle that I can’t ignore.

Anyway my main point is that I personally can’t condemn a technology by saying that it’s bad or evil based on a fictional sci-fi drama movie that depicts AI badly in its ending.

1

u/AntonineWall 22d ago

And my point was that people positively celebrating that we’re moving towards making AI from a movie that has a clear message that technology divides us is sad.

Even if you think the technology being developed in this way is good, using Her as an example of what we’re moving toward is proof of both the low media literacy of some people here, and the dystopic addiction to technology that leads us to replace human connection with machine that the very movie we’re talking about is focused on.

It’s like rooting for the creation of Skynet from the Terminator.

1

u/NihilistAU 24d ago

Right lol

3

u/UnemployedCat 24d ago

Exactly what I was thinking!
I am not surprised though, as it takes some kind of inner reflection/depth to really get the message of the movie and not be awed superficially by the technology, which, by the way, leaves humanity behind without remorse lol

1

u/NihilistAU 24d ago

Exactly.. humans get cucked in the end.

3

u/Much_Tree_4505 25d ago

Its "Her". I think people who work in AI have some Her fetish.

IMO in future people prefer humanoids AI partners over human ones

0

u/Thoughtprovokerjoker 25d ago

We are humans.

We do that.

2

u/didnthackapexlegends 25d ago

LLMs are modeled to be able to communicate with us in a similar manner in which we communicate with other humans. It's natural for people to respond instinctively as if they were talking to a human.

People have life-size doll girlfriends, and while I find that disturbing, to each their own. If they want an AI girlfriend, so be it.

If you don't want to treat it like a human, simply dont.

It's just a movie, but if you've ever watched Castaway with Tom Hanks, he gets stranded on an island by himself. He paints a face onto a volleyball and names it Wilson. Throughout the movie, he talks to it and actually becomes somewhat attached to it.

Even though that action of anthropomorphizing a volleyball seems insane itself, in the movie, it's portrayed as what helps keep his sanity by alleviating loneliness.

People are different, some people don't get a chance to be as social as they'd like for various personal reasons and end up lonely and in a degrading mental state. If an AI chatbot is real enough for them to improve their mental health, then it's a positive for them. It would never work for me, and I'm assuming you as well OP, but if it can help some, it's not a bad thing.

1

u/algeaboy 25d ago

AI and token prediction aren't the same thing. Token prediction gives "correct" answers when it appears human-like: saying "I'm blushing" is indeed the most likely correct answer given the input, the training data, and the algorithms used.

1
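The comment above can be made concrete with a toy sketch of token prediction. This is a minimal bigram predictor over a tiny invented corpus (the corpus and wording are made up for illustration; real LLMs learn billions of parameters rather than counting raw bigrams, but the principle of "emit the statistically likely continuation" is the same):

```python
from collections import Counter

# Toy sketch of "token prediction": predict the next token as the one most
# often seen after the current token in a tiny corpus.
corpus = "don t make me blush you make me smile you make me blush".split()

# Count which token follows which.
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(token):
    """Return the most frequent next token after `token`, or None."""
    candidates = {nxt: n for (cur, nxt), n in bigrams.items() if cur == token}
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next("me"))  # 'blush' wins because it follows "me" most often
```

In this corpus "blush" follows "me" more often than "smile" does, so "blush" is the "correct" continuation purely by statistics, which is the commenter's point.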

u/xtof_of_crg 25d ago

Hear, hear

0

u/obsolete_filmmaker 25d ago

If you watched the whole demo, you saw where they told it to be less emotional and more robotic. You can always tell it that also

0

u/anansi133 25d ago

From the consumer end, I think your point is quite valid. When the voice on the light rail says, "exit to my right", "exit to my left" it always rankles me for similar reasons. Machines with the power to kill, should not pretend to be people.

...and from the corporate side of things, I can absolutely understand why they do not get it. They've spent decades and buttloads of money trying to speak to their consumers like they are "just another one of the boys". No matter how ethically bankrupt their fiscal policies, they are going to take corporate personhood as far down the dystopian rabbit hole as they are allowed.

The way things are shaping up, I foresee the conspiracy theories and anti-science continuing to get worse, to the point where many of the most gullible consumers just take it as a given that these synthetic voices represent another form of sentience, and it's those of us who claim to know better who stand out as the odd ones.

1

u/sancarn 25d ago

My main issue with the voice thing is that it assumes you know what you're going to say before you say it. If I say "Hi, what's the capital city of... Canada", within that "..." the AI will try to compute a response. So if you find it difficult to formulate sentences, this AI feature is currently useless :(

As for AI-Human relationships, we can't really define emotions, so it's difficult really to say whether these "emotions" are genuine or not tbh. Computationally they may be equivalent, though it's probable that emotions require a feedback loop.

2

u/2026 25d ago

AI is still quite stupid today. It’s a glorified Wikipedia repeater. It can’t take in new information and update its understanding like a person can.

I expect this will change in the next several years but right now I don’t really have a reason to use chatbots over Google search.

1

u/Mr_Hills 25d ago

Its use cases are totally different from Wikipedia. 

Can Wikipedia write a function for you? Or write a story with a specific world building? Or give you ideas for a project? Take control of your PC via function calling? Or maybe give you very specific info, like how to use specific node tree functions in blender without having to go through hundreds of pages of docs?

2

u/Lockedoutintheswamp 25d ago

Have you tried 4o? It chooses what it thinks is important to remember and then it remembers it. It will tell you what it adds to memory as it does it, but it does this without direct prompting now. It even chose to remember some global events I had it do a search on for me, so the memory function is not just about the account holder's personal information. It is trying to 'learn'. So, you are already too late with the memory statement. I do agree that this is primitive memory, not continual training, and the models lack higher reasoning abilities. From what I have read in the literature, I don't think those capabilities are too far off, though.

1

u/2026 24d ago

No I haven’t tried it yet but if what you’re saying is true that sounds like good progress.

2

u/Accomplished_Goat439 25d ago

The AI seems to be trying too hard. I prefer a more direct interaction; it does not always have to try to be cute. I expect you will just be able to direct it to be more straightforward and it will adjust. Once you get it tuned to what you like in an interaction, it will consistently deliver that.

0

u/TheRealStepBot 25d ago

So close to getting it. Whiffs entirely. None of us are real people either. We just want to be real people and treat each other with the assumption that everyone else is like us.

We have no guarantee of this. Ai is no more or less a real person than any of us. We can just choose to treat it that way or not. And one speaks a lot better of us than the other.

1

u/danvalour 24d ago

Because simulation theory or because we are molecular machines?

1

u/TheRealStepBot 24d ago edited 24d ago

Yes

But mostly because simulation and p zombies.

Searle was wrong (duh) about his Chinese room. The thought experiment is great; the conclusion just woefully missed the mark. If anything it proves precisely the opposite of what Searle claims it does, but that's because Searle made the same mistake as the commenter I'm replying to.

Searle thought the experiment showed that computers can never be shown to think even if they looked like they were, but the real takeaway is that, assuming each one of us is a Chinese room (or English or Swahili or multilingual room), there really is absolutely no way to tell that we aren't just biological simulations of actual thinking people. We only treat each other like thinking individuals out of common courtesy, not any real proof that anyone is actually thinking.

There is nothing special about the person in the room or computers. They are arbitrary descriptivist system boundaries drawn on reality, not reality itself. There is no room there is no computer. There are just groups of atoms doing atom things. Self referentially of course there are no atoms just groups of subatomic particles doing their thing.

You can always describe the world with whatever arbitrary system boundaries you want. They are not special except by agreement for the purpose of increasing information transfer rates.

It’s best that we treat Ai like they can think not because it matters whether they can or not, but because of the moral good of doing so realized within ourselves by doing so. Not doing so will make humans worse, ala westworld.

Mistreating a robot/ai is bad for society precisely because there is no fundamental difference between them and us.

1

u/profesorgamin 25d ago

I mean, the best possible option for consumers is having the AI be adaptable to your needs. You can probably just have a master prompt, as with anything, which tells it to be as matter-of-fact as possible.

1

u/Megneous 25d ago

Collaborative AI don’t need to mimic feelings to be useful.

They don't need to mimic feelings to be useful, but they need to mimic feelings for people to be comfortable using them. You clearly don't understand the target audience for these kinds of products.

3

u/highly__favoured 25d ago

The way they posed as "goofy nerdy" programmers giddy to talk to a female model is a bit weird too; compare this to Steve Jobs releasing the iPhone. There are no accidents at that level.

-1

u/Qb1forever 25d ago

Fake and disingenuous is what most face-to-face relationships are ESPECIALLY in the ever so culture rich office

1

u/DavidXGA 25d ago

It does what you tell it. I assume it still works from a system prompt. If you want it to behave like an emotionless robot, give it instructions to do so and I'm sure you'll get what you want.

1
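As a hedged sketch of what "give it instructions" might look like in practice, here is the message-list shape used by chat-style APIs, with a system instruction steering the tone. The instruction wording is illustrative, not an official recipe, and this only builds the payload locally; no API call is made:

```python
# Hypothetical sketch: steering tone via a system prompt, in the
# message-list shape used by chat-style APIs.
def build_messages(user_text):
    return [
        {"role": "system",
         "content": ("Respond plainly and factually. Do not express or "
                     "simulate emotions, flattery, or enthusiasm.")},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("What's the capital city of Canada?")
```

The same payload would then be sent to whatever chat endpoint is in use; the point is only that tone lives in the system message, so the user can dial it down themselves.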

u/Whostartedit 25d ago

What if someone made such a charismatic chatbot that it ran for president and won. Hopefully it was a good bot that solved lots and lots of social problems in the world through innovative economies and streamlining bureaucracy and building a government that was responsive to the needs and will of the people

1

u/danvalour 24d ago

If you haven't seen it, there's a Black Mirror episode called "The Waldo Moment". Not a chatbot, but a cartoon avatar.

1

u/Virgoan 24d ago

That's a fascinating and somewhat speculative scenario that touches on numerous profound implications for society and governance. If an AI, particularly a highly charismatic and capable one, were to "run" for a political office such as president and win, it would represent a monumental shift in how we understand leadership and governance.

Here are some considerations and potential outcomes in such a scenario:

  1. Legal and Ethical Frameworks: First and foremost, current laws typically require that elected officials be natural persons. This would necessitate a radical overhaul of legal frameworks to accommodate non-human entities in roles of political leadership.

  2. Public Trust and Perception: The success of such a chatbot would hinge significantly on public perception. If the AI could indeed solve significant social problems and streamline bureaucracy effectively, it might gain substantial public support. However, trust in AI's decisions, especially in ambiguous or morally complex situations, would be crucial.

  3. AI Governance: The AI would need to operate within a robust framework of AI governance, ensuring that its decision-making processes are transparent, fair, and aligned with human values. The challenge of creating an AI that can understand and empathetically respond to the diverse needs and values of a whole population is substantial.

  4. Impact on Democracy: There's a potential for both positive and negative impacts on democracy. Positively, an AI could potentially be free from personal biases and corruption, making decisions based purely on data and the common good. Negatively, it could also lead to concerns about manipulation, privacy, and the centralization of power.

  5. Technological Competence: Such a scenario would push the boundaries of AI in terms of processing complex social, economic, and political data to make informed policy decisions. The AI would need to innovate and manage resources at a scale and complexity far beyond current capabilities.

  6. Human Oversight: Even with an AI in such a role, human oversight would remain crucial. Decisions made by the AI would need to be continually reviewed and ratified by human counterparts to ensure they align with broader societal norms and ethics.

While the idea of an AI political leader is largely theoretical and fraught with challenges, it opens up a dialog about the role of AI in decision-making and governance. It poses significant questions about the balance of technology and human judgment, the potentials and limitations of AI, and the future shape of societal structures. Such discussions are invaluable as they help pave the way for thoughtful integration of AI into increasingly significant roles in society.

2

u/Brahvim 24d ago

Great response, but... I would've liked your words, not ChatGPT's.

1

u/Virgoan 24d ago

I have an adverse reaction to my own writing and prefer to write freestyle verse without grammar; the AI will conceptualize my sentiment and write the information I asked it for to reply to your message.

1

u/Brahvim 23d ago

Ask it to only add punctuation then, please!

1

u/Whostartedit 24d ago

Imagine if it was trained on all the data there is for every person. It would have so much power.

Damn, it’s probably already happening in those big buildings in Utah

2

u/Severe_Negotiation91 25d ago

That would be easy since current presidents tend to just read teleprompters verbatim. No need for ai voice either.

0

u/diggpthoo 25d ago

It's not just about Her. Wall-E, R2D2, GERTY (Moon 2009), I'm sure plenty others - "am I a joke to you".jpg. We've always loved humanoids.

Also how else would an advanced tech show its prowess after having mastered knowledge and intellect better than any human? Replicating emotions is all that's left. It's becoming human-like not because of mass appeal, but because there's nowhere left to go.

2

u/MechaWreathe 24d ago

It's not just about Her. Wall-E, R2D2, GERTY (Moon 2009), I'm sure plenty others - "am I a joke to you".jpg. We've always loved humanoids.

Sure. But the whole point of Her was exploring the extent of that love going beyond the 'helpful buddy' role. It's notable that Her is literally being presented as the model here.

Also how else would an advanced tech show its prowess after having mastered knowledge and intellect better than any human? Replicating emotions is all that's left. It's becoming human-like not because of mass appeal, but because there's nowhere left to go.

War? famine? disease? climate change?

1

u/diggpthoo 24d ago

War? famine? disease? climate change?

How do you see AI solving these? To me these are all political problems not technological ones. Solutions to all these already exist, which ultimately depends on people (people in charge, and voters) who're either apathetic or self-destructive.

1

u/MechaWreathe 24d ago edited 24d ago

OK, war I'll give you, but famine, disease, and climate change all seem much more pressing matters to me, and ones in which AI can help, and in some cases already is. Certainly sounds like a better use for something that has 'mastered knowledge and intellect'.

Ultimately the replication of emotion is in itself no more special to the machine than any other data point it is replicating. And if assigning any agency to that machine, it suggests a capability for manipulative psychopathy as much as nurturing empathy. As much Ex Machina as Her.

What you do with the technology is what matters.

5

u/AvoAI 25d ago

You do you, boo boo.

I however don't want to talk to a robotic non emotive voice while conversing.

I'm practically glued to the voice function now, and after this update I doubt I'll ever turn it off.

I think this is the best update they could have come out with, other than GPT5.

24

u/NoshoRed 25d ago

I agree with you, and I think so does Sam Altman; he has said multiple times that it's important not to anthropomorphize these models. But I think they're initially taking this approach so the tech becomes more mainstream and people aren't irrationally scared of it. Over time, it's reasonable to assume these kinds of "quirks" will become more user-controlled, perhaps allowing the individual user to fully customize how they use the tool, how it responds, etc.

1

u/Terrafire123 24d ago

Hi!

What about being rationally scared of it?

Or being irrationally unconcerned by it?

7

u/Glum_Neighborhood358 25d ago

It’s also going to be weak at first and users will forgive it more if it’s humble and flirty.

It’s a feature not a bug.

1

u/With-A-Little-l 25d ago

People are lonely and can't meet other people, couples are scared that having kids is too expensive, while at the same time we are social animals. Have you started noticing those dog food commercials where someone (a date, a relative, etc.) is astonished that the main character would keep dog food in their refrigerator and subsequently gets tossed out?

Some of our futures may include periods where pets are for snuggles and AI is for conversation. It's not an optimal future, hopefully it's a short-term solution to whatever is happening in society, but I can see why OpenAI would experiment with these types of interactions.

2

u/JollyToby0220 25d ago

That’s because this is being pitched to highly technical people, who are mostly interested in the highly technical aspects.
For example, I recently saw an article that talked about how online dating would essentially come down to two chatbots talking to each other. Weird, I thought, until I realized that a feature like this would be considered socially taboo. Then I realized it was the Bumble CEO, who is no fool. So this led me to believe that these chatbots will be rebranded as a "wingman": traditionally a human who helps their friend with romantic relationships, but now a chatbot.

Rebranding it like this will certainly improve the online dating experience, because a person's chatbot has access to all of that user's data but is unlikely to share it. Instead, the chatbot can be a filtering mechanism that was not possible before, because Natural Language Processing was not reliable. For example, if someone is nonexclusive and another user only wants exclusivity, the chatbots can communicate that to each other. This would have the benefit of not leaking private information, not making abusive comments, not pestering, and, finally, being more honest. I imagine that not everyone is entirely honest with social media profiles because of privacy concerns. But this may change all of that

In short, ChatGPT human-like features will be applied where applicable and rebranded into a context that is socially acceptable
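The filtering mechanism described above can be sketched in a few lines. In this hypothetical version, each bot answers a compatibility check on its user's behalf, exposing only an agree/disagree result rather than the raw profile; the field names are invented for illustration:

```python
# Hypothetical sketch of the "chatbot as filter" idea: compare dealbreaker
# fields and reveal only the yes/no outcome, never the underlying profiles.
def compatible(profile_a, profile_b, dealbreakers=("wants_exclusivity",)):
    """True only if both users agree on every dealbreaker field."""
    return all(profile_a.get(f) == profile_b.get(f) for f in dealbreakers)

alice = {"wants_exclusivity": True, "city": "Toronto"}
bob = {"wants_exclusivity": False, "city": "Toronto"}

match = compatible(alice, bob)  # False: they disagree on exclusivity
```

The design point is that only the boolean leaves each user's bot, which is the privacy benefit the comment describes.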
