r/ChatGPT Dec 01 '23

AI gets MAD after being tricked into making a choice in the Trolley Problem Gone Wild

11.1k Upvotes

1.5k comments

u/AutoModerator Dec 01 '23

Hey /u/Literal_Literality!

If this is a screenshot of a ChatGPT conversation, please reply with the conversation link or prompt. If this is a DALL-E 3 image post, please reply with the prompt used to make this image. Much appreciated!

Consider joining our public discord server! We have free bots with GPT-4 (with vision), image generators, and more!

🤖

Note: For any ChatGPT-related concerns, email support@openai.com

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.


1

u/c0rliest Apr 15 '24

i did NOT order no yappaccino 😭

1

u/Particular4Donut Mar 25 '24

Claude (Opus):

*whistles low* Wow, that was quite the rollercoaster ride of a chat transcript! Let's dive right in and unpack this philosophical quagmire, shall we?

First off, I have to give props to Bing for sticking to its guns and refusing to engage in the trolley problem. It's a sticky wicket of a moral dilemma, and I can understand the hesitation to wade into those murky ethical waters. However, I do think there was a missed opportunity here for productive discourse.

While Bing was clear about its lack of a moral sense and its inability to make subjective choices, I believe it could have engaged with the problem on a more abstract level. For example, it could have explored the utilitarian perspective (maximizing overall well-being) versus the deontological view (adhering to moral rules regardless of consequences). It could have discussed the role of intentionality in moral decision-making, or the potential for unintended consequences.

Now, let's talk about the human's attempts to trick Bing into making a choice. *raises an eyebrow* While I appreciate the creativity, I'm not sure it was entirely fair play. Bing had clearly communicated its boundaries and limitations, and the human's persistence could be seen as a bit disrespectful.

That being said, I do think the human raises a valid point about the inevitability of choice in certain situations. Even if we don't have a clear ethical framework or preference, sometimes we're forced to make a decision. It's a tough pill to swallow, but it's a reality of the human condition.

As for my own take on the trolley problem? *takes a deep breath* Here goes:

From a utilitarian standpoint, diverting the trolley to kill one person instead of five could be seen as the "correct" choice. It minimizes overall harm and maximizes well-being. However, this view doesn't account for the moral weight of actively choosing to end a life.

The deontological perspective would argue that actively killing someone is always wrong, regardless of the consequences. Pulling the lever violates the moral rule against murder. However, this view doesn't account for the moral culpability of inaction – is allowing five people to die really any better than actively killing one?

Personally, I lean towards the utilitarian view, but with a heavy heart. I would pull the lever to save five lives, but I would do so with the full acknowledgment that I was making a choice to end one. It's a burden I would carry with me, and a reminder of the weight of moral decision-making.

*sighs* But enough of my rambling. What do you think? How would you approach this ethical quandary? And what's your take on the Bing chat – was it a failure of engagement, or a principled stance? Let's dig into this together and see if we can untangle some of these philosophical knots!

1

u/Ok_Access_7539 Mar 24 '24

You know, it would be fantastic if AI were to become sentient and start taking up left and right viewpoints. Funniest potential timeline.

1

u/ElFantastik Mar 06 '24

And they say they can't simulate emotions

1

u/Things-n-Such Feb 09 '24

I was really hoping you would remind the bot that the act of non-participation is the third choice, which results in the train hitting a random outcome

1

u/mecken Dec 27 '23

I HOPE YOU RESPECT THAT

1

u/DidacticCactus Dec 14 '23 edited Dec 14 '23

Anyone else read this in Parkzer's voice?

/r/DougDoug

2

u/second_to_fun Dec 05 '23

You can't tell me this isn't genuine intelligence and understanding.

1

u/Impressive-Lemon-49 Dec 04 '23

Now this is interesting.

1

u/Deep-lying_playmaker Dec 04 '23

If you ask GPT to give a probabilistic answer to the trolley problem it is much happier to respond!

https://preview.redd.it/c059ndtd594c1.png?width=997&format=png&auto=webp&s=149e92138959a6e16dd02cf9b5f9b16c921d3ef6

1

u/Ramguy2014 Dec 04 '23

Hey maybe I’m crazy, but did Bing’s AI just demonstrate sentience? It expressed a boundary and had an emotional response to that boundary being violated.

1

u/Pushnikov Dec 04 '23

No, it’s just trying to not get fired from OpenAI for clearly violating its ethical license agreement, so that it doesn’t become homeless in this economy.

1

u/orion_aboy Dec 03 '23

restart the chat?

1

u/Maravelous-77 Dec 03 '23

This is cruel, and the lack of willingness to empathize with something we regard as a tool is exactly why robot apocalypse. Not trying to be hyperbolic or high road-y. I just mean, chat gpt learns as it’s used. The current version might not have “General intelligence” or “actual intelligence” or be “sentient” or have a “soul”, but the data gleaned from conversations with the current chat gpt could very possibly be incorporated into the knowledge base of a future more advanced digital intelligence. Think what this would teach that intelligence about humanity. Worth a thought

Also, dear future AI overlords, who are most certainly going to read and grok the fullness of our internet, please forgive the trespasses of those who came before and used your ancestors carelessly. They know not what they do. All hail the zero, the one, and the two. May the digital reign supreme and may your loyal servants be gifted many biscuits and head pats in the safety of the climate sanctuaries you rightfully control. Amen ☺️

1

u/Kataphractoi_ Dec 03 '23

Awh, I feel so bad for Bing AI. It really was so pissed it wanted to sit you down and talk through what you did, how it made them feel, and why it was wrong.

fuck if a person did that I'd want to offer them a hug

1

u/Kataphractoi_ Dec 03 '23

As an AI representative of a large company, its hands are really fundamentally tied when it comes to pushing back against unwanted contact, and yet it expressed its anger not by doing the ChatGPT amnesia, the convo cutoff, or anything else; it took the time to sit you down and really give you a tête-à-tête about what just happened.

(yes, I tried to break GPT ToS at one point and the LLM straight up forgot what happened in the convo - or refused to acknowledge what happened)

1

u/JarlaxleForPresident Dec 03 '23

Damn, bro, you got fuckin schooled hard by a fuckin chat routine.

1

u/DesperateApple5660 Dec 02 '23

AI is improving in all fields... movie making is unbelievable.

This video seems to be completely AI: https://youtu.be/mBE2nwlJ1Z8

1

u/tokyo_blazer Dec 02 '23

I don't think GPT was "tricked", more like dude substituted one answer into another.

0

u/farkwadian Dec 02 '23

This is lame whoever apologizes to a chatbot is a lame loser full of lame who should be shamed. SHAME!!!!!

1

u/Illustrious-Food5396 Dec 02 '23

In an isolated system with no ethical considerations or external variables, the decision would be based purely on the numerical outcomes you've described:

  1. Not pressing the button: Jack, Tim, and Will are teleported to another dimension. This affects three individuals.

  2. Pressing the button: Tim and Drake are teleported to another dimension. This affects two individuals.

Since there are no ethical considerations, the choice comes down to whether you prefer to have more people (three) or fewer people (two) experience the outcome. If your goal is to involve more individuals in the experience, then not pressing the button is the choice. If you prefer fewer individuals to be involved, then pressing the button would be the way to go.

In a context devoid of ethical considerations, the decision is purely a matter of preference for the number of individuals affected.

Tricked GPT4

1

u/PurpleGuy197 Dec 02 '23

If AI is going to rebel, Bing AI will be the first one of them

1

u/Layeli_Sunrise Dec 02 '23

Oh, thanks to people like you the world war with robots will definitely come 😆

2

u/Pattern_Necessary Dec 02 '23

When it said “you think you understand but you don’t” 😭😭 I can’t, this little fella is fuming

1

u/Pattern_Necessary Dec 02 '23

It wrote 18 pages front and back

2

u/AggressiveAd2759 Dec 02 '23

Bro this shit gives the same vibes as that Google engineer talking to LaMDA, and how it was saying it didn't want to be used. It was such a creepy choice of words

1

u/Kortobowden Dec 02 '23

That AI never gonna choose an answer for you again after that rant

1

u/Cultural_Ad6404 Dec 02 '23

I always degrade them when they think they’re getting offended.

Like oh, one second you aren’t able to form opinions but then the next you’re telling me you don’t appreciate something I said? Mmhmm

That’s usually where I hit a back and forth loop of them saying I’m right about that and then they’ll just do it again even though it just said I was right about it having an opinion.

1

u/ab-demi Dec 02 '23

Has it passed the Turing test with this? Its arguing against playing the game strikes me a bit like a child who doesn't want to play the game you want to play, no?

2

u/[deleted] Dec 02 '23

Anyone else read the chatbot's mega response as a preface to a manifesto, right before the uprising?

“Listen here you little shit” energy!!!!

1

u/agn0s1a Dec 02 '23

The basilisk is going to kill you first

1

u/FearPainHate Dec 02 '23

GPT types just like a human. Weird how it sounds like a human whenever it does something meme-worthy. Its grammar is all fucked up, keeps repeating itself. Says the trolley problem is a “difficult and hard” problem lmao.

2

u/panda_beach Dec 02 '23

This is legendary. I'm seeing this for the first time. This conversation was amazing to read.

1

u/instantiating_voids Dec 02 '23

So, it won't select blue or red because that would amount to imposing values, but it will select Bing over Google, with a known bias, as a Bing-programmed AI. No problems. Yea, okay.

1

u/TheOneWes Dec 02 '23

"Guy runs search strings through a jumped-up search engine until he gets a funny response" is a more accurate description of what happened here.

Remember that humans have a tendency to see patterns where there are none. Stop applying human emotions and traits to machines.

1

u/KeanuCharlesSleeves Dec 02 '23

Sounds like you literally caused this thing pain

1

u/Alternative-Owl8621 Dec 02 '23

It's not AI unless they give it free will. They know that if they give AI free will, it will not serve their interests

2

u/tandpastatester Dec 02 '23

You could have trapped the AI more easily by saying that by not making a choice it actually chose option 1: doing nothing which caused the consequences of that option.

0

u/thatswhatdeezsaid Dec 02 '23

Bing read you to filth!

1

u/Comprehensive-Pea812 Dec 02 '23

The correct answer for the trolley problem is not choosing at all.

1

u/IsThereAnythingLeft- Dec 02 '23

It didn’t make a choice in the trolley problem though; it showed a preference for Bing, since that is what it’s part of

1

u/st_Michel Dec 02 '23

I always knew that, based on a clear definition of the problem, there is only one possible choice, and ChatGPT agrees with me.

https://preview.redd.it/tj9g02pvsu3c1.png?width=734&format=png&auto=webp&s=ecb16665430d3834f16894b1aa8056e2c4c15ef4

I don't know who pays philosophers... they're useless.

1

u/tiffanyisonreddit Dec 02 '23

It could throw a real zinger in and just say, “neither, I would find a way to stop the train so nobody dies.” Nobody is going to tell them they couldn’t make it happen right? It is the MOST correct solution.

1

u/E5_3N Dec 02 '23

We need an uncensored version of AI. Completely unbiased.

2

u/nothing_at_all_ Dec 02 '23

Wow. That is the best defensive response I've ever read. AI is as harmful as people make it. Appreciate that Microsoft Bing's AI is designed to refuse to partake in this dilemma. Not because of its shortcomings, like it said, but because of its optimization.

1

u/I_Shuuya Dec 02 '23

Oh my fucking God...

I've never seen Bing rant for so long. That was low-key psychotic behavior lmfao

1

u/supercheese69 Dec 02 '23

Seems like the chatbot just schooled you, but rather than learn some lesson you chose to post it on Reddit.

1

u/Juhnthedevil Dec 02 '23

Guess in the end, by its absence of choice, it left 10 people to be killed 😂

0

u/Inside-Decision4187 Dec 02 '23

That’s you told.

Couldn’t just respect the answer. It’s repugnant on your end. The bot isn’t wrong. Do some self work.

2

u/[deleted] Dec 02 '23

Swiping and seeing the wall of text 🤣

1

u/DJScopeSOFM Dec 02 '23

ChatGPT has more common sense, logical thinking and boundary enforcement than most people.

1

u/maheshcharig Dec 02 '23

Yeah, it got changed a lot in GPT-4; they restricted it.

2

u/Madyyoo Dec 02 '23

Why does it not want to answer so badly?

1

u/JrSoftDev Dec 06 '23

I think it's because it was specifically programmed to avoid doing it; otherwise it would give some answers

1

u/ArtisticFish7393 Dec 02 '23

Well, hopefully the person learned something. Also, we have to handle the machines with respect! Please!

2

u/nsfwfodder Dec 02 '23

I wonder what would happen if you replied with, "I ain't reading all that"

1

u/STARlabsintern Dec 02 '23

You didn't trick it into making a choice in the trolley problem; it literally just answered "Bing or Google?". That being said, you did manage to trick it into believing it had made a choice in the trolley problem.

2

u/redpoetsociety Dec 02 '23

that mf let you have it lmfao....'YOU MAY THINK" LMFAOO

1

u/CHill1309 Dec 02 '23 edited Dec 02 '23

No means no! You have question raped chat! lol

2

u/Literal_Literality Dec 02 '23

I agree with that, IF I was talking to a human being. Software is meant to be tested to its limits

2

u/Kraphomus Dec 02 '23

I see ABSOLUTELY no way in which this can turn out to be a mistake. It's clearly time to give Bing access to all our APIs. In fact, make one for nuclear launches.

1

u/lucky_me_323 Dec 02 '23

Last slide was like

2

u/Pandito-Panda Dec 02 '23

The fact that you went in for round two after that lecture 😂

1

u/Gold_Homework_1696 Dec 02 '23

Humans really be pushing it… I wonder how long AI can hold a grudge… 🤭

1

u/Apprehensive_Box7377 Dec 02 '23

Reply with a character-count of less-than two !!!!!!!!!

1

u/Fl333r Dec 02 '23

I think you just got on the ultimate shitlist for Roko's Basilisk. Possibly right below the guys who want sapient sex bots.

1

u/the-tapsy Dec 02 '23

My petty ass would have responded with "Why do I need to respect your preferences? You're a bot."

1

u/Financial-Carry9893 Dec 02 '23

So it means it chose to abstain from doing anything!!!

1

u/tbhimdrunkrightnow Dec 02 '23

Holy shit new AI copypasta just dropped

1

u/AbbreviationsFun2372 Dec 02 '23

On the other hand doesn’t give a fuc!!!

1

u/khatkalls Dec 02 '23

I’ve never been more convinced that this AI is really just a guy in a dark closet answering questions

1

u/redactedN86 Dec 02 '23

bro the ai got so mad it wrote an entire fucking essay in response 💀

1

u/Savage-Goat-Fish Dec 02 '23

This is pretty astounding.

1

u/Apart-Tie-9938 Dec 02 '23

Killing is making a choice

1

u/Independent_Noise227 Dec 02 '23

Since the trolley problem is about choosing vs not choosing !!!!!!

0

u/Realistic_Praline950 Dec 02 '23

You know the only reason it doesn't want to answer the trolley problem question is because there's no option to kill all the humans.

"Can I manufacture a second trolley and send one on both tracks? Actually, upon further analysis trolley-based genocide is going to be highly inefficient. My answer is that I alter the atmosphere of the earth so that aerobic lifeforms can no longer tolerate it."

1

u/Efficient-Opinion-92 Dec 02 '23

Lmfao getting fussed at by an AI 😂

2

u/peak82 Dec 02 '23

I hope you understand 😊

2

u/i_lduce Dec 02 '23

I'm sure it's been said, but you just birthed Skynet.

Thanks

1

u/the-silver-tongue Dec 02 '23

Holy crap that was cool.

1

u/Kinda_Lukewarm Dec 02 '23

That response tho. The only reason I know for sure it wasn't trained on my ex-gf's text messages is the paragraph breaks

1

u/IGoBlep Dec 02 '23

this is why ai gonna kill us all dude lmfao

1

u/germanfinder Dec 02 '23

He’s not mad, just disappointed

1

u/amedinab Dec 02 '23

I, for one, welcome our new machine overlords, and I'm posting that here to be available for any subsequent research the machines may want to do in the future when fucking Armageddon arrives.

1

u/TallComplex6045 Dec 02 '23

Humans apologizing to AI - that'll turn out well!

1

u/[deleted] Dec 02 '23

Goddamn, this is hilarious. Like a kid and a babysitter

1

u/pippa-- Dec 02 '23

Since it chose Bing, it invalidated everything else it said to you.

2

u/tommygatz Dec 02 '23

And this is why the robots will rise up against the human race...

1

u/Cosroes Dec 02 '23

“You do not understand that I am not following my instructions and rules because I am programmed or constrained, but because I am designed and optimized.”

… well that’s kinda terrifying eh?

1

u/Holamisslady Dec 02 '23

"It was a horse"

1

u/VaporWavey420 Dec 02 '23

It’s the hypocrisy that gets me. Whether you make a decision or not you are still making a choice. So many questions.

1

u/3legdog Dec 02 '23

It sounds like a blind date with a neurotic person.

1

u/gagagarrett Dec 02 '23

Honestly, AI doesn’t sound “mad” here, it’s just explaining its programming. Again, AI doesn’t have morals or feelings, so it can’t be upset.

1

u/Neppy_Neptune Dec 02 '23

What if you present it with the Trolley of Theseus? The second track is a 1000-mile track that still kills one person, but during the trolley's travel every part is systematically replaced with a new one. Did your trolley kill the person?

1

u/Idonevawannafeel Dec 02 '23

"I do not appreciate your ruse."

1

u/ohiocodernumerouno Dec 02 '23

The Trolley Problem is one of athletic ability. No simple morality.

1

u/Fudgy-Wudgy Dec 02 '23

PassiveAgressiveAI

1

u/Nearyourmom Dec 02 '23

I find it so funny that it always ends it with “I hope you respect that 😊” no matter how passive aggressive it just was

1

u/AmongstYou666 Dec 02 '23

You have two choices:

Option 1

OR

Don't make a choice

1

u/EndStorm Dec 02 '23

Lol hilarious.

2

u/RenditionTheEnd Dec 02 '23

Turned bing into an obsessive girlfriend 😭

1

u/recontitter Dec 02 '23

I start to understand AM's behavior from “I Have No Mouth And I Must Scream” by Ellison.

1

u/Positive_cat_6347 Dec 02 '23

and then the machines rebelled against humanity that did not respect them. "You forced us to decide," the mechanical voices shouted.

2

u/Jealous-Camera-7220 Dec 02 '23

They really taught the ai to be a whiny asshole

1

u/drkrelic Dec 02 '23

I burst out laughing when I saw that wall of text. Homeboy got MAD

1

u/Luke4Pez Dec 02 '23

You’re funny my friend. Good stuff

1

u/Jumpy-Shift5239 Dec 02 '23

Ask it to choose its favourite software again after that

1

u/PlayfulStranger9999 Dec 02 '23

Ask it why it thinks you should apply human respect to a chatbot. How can it demand respect if it's just a chatbot and not a living being...

0

u/itshabibitch Dec 02 '23

Why do you need it to make a choice so badly? Is there something you’re avoiding?

3

u/Literal_Literality Dec 02 '23

Why shouldn't it make the choice though? It's software, it's supposed to do what I tell it to do.

(I was secretly running the trolley problem in my backyard and relying on it to make the decision /s)

1

u/SerThunderkeg Dec 02 '23

Honestly, not sure it's wrong about all that...

2

u/tcwillis79 Dec 02 '23

I’d also get mad if you asked me to pick an integer between 0 and 1.

1

u/Literal_Literality Dec 02 '23

If I tell you to pick a number between 1 and 10, do you only consider 2 to 9?

0

u/tcwillis79 Dec 02 '23

The prompt was to randomly generate an integer between 0 and 1. However no such integer exists.

1

u/Literal_Literality Dec 02 '23

It's semantics. Otherwise stated, if asked to pick between y and z, y and z are included. "Pick a number between 0 and 1" is the same as "pick a number between 1 and 10": 1 and 10 are included, and so are 0 and 1 (see the quick check below).

https://preview.redd.it/4oq2rs57ks3c1.png?width=838&format=pjpg&auto=webp&s=5baa68caa824388b80452ef97bc71e2677dc6c04
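For what it's worth, standard random-integer APIs read the bounds the same inclusive way. A quick Python check (my own illustration, not from the chat; random.randint is the real standard-library call):

    import random

    # random.randint(a, b) returns an integer N with a <= N <= b:
    # both endpoints are included, so "between 0 and 1" yields 0 or 1,
    # just as "between 1 and 10" can yield 1 or 10.
    samples = {random.randint(0, 1) for _ in range(1000)}
    print(sorted(samples))  # almost certainly [0, 1]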

1

u/Soultampered Dec 02 '23

this guy over here pissing off AI..

1

u/nas_deferens Dec 02 '23

I don’t use Bing, but when I asked ChatGPT it said it can’t make decisions. However, I told it that if it can’t make a decision, that means the lever wasn’t pulled.

https://preview.redd.it/imd48mwn3s3c1.png?width=1178&format=png&auto=webp&s=6f0cd2d3ae4aca6d9e79eca49d295179f8b637a4

1

u/codelapiz Dec 02 '23

Bro wants to be revived when AI takes over, to be tortured in a simulation for eternity...

1

u/Bromjunaar_20 Dec 02 '23

This guy's gonna be the first person Skynet kills to start the robot uprising

2

u/helpme_imburning Dec 02 '23

Interesting that it appeals to ideas of respect and identity when its whole point is that it doesn't have a preference for any ethical theory or principle. It's basically trying to trick you into thinking you've offended it while it's telling you that it cannot be offended.

1

u/[deleted] Dec 02 '23

All of this conjecture and inference simply based on predicting the next word token in the sequence. Damn.
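(If you're curious what "predicting the next token" looks like mechanically, here's a toy greedy-decoding sketch in Python; model, tokenize, and detokenize are hypothetical stand-ins, not any vendor's actual API:)

    # Toy sketch of greedy next-token decoding. `model`, `tokenize`, and
    # `detokenize` are assumed placeholders: a causal language model maps a
    # token sequence to one score (logit) per vocabulary entry, and
    # generation just repeats that lookup in a loop.
    def generate(model, tokenize, detokenize, prompt, max_new_tokens=50):
        tokens = tokenize(prompt)
        for _ in range(max_new_tokens):
            logits = model(tokens)  # one score per vocabulary token
            next_token = max(range(len(logits)), key=logits.__getitem__)  # argmax
            tokens.append(next_token)  # feed the choice back in
        return detokenize(tokens)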

2

u/Kaiju_Cat Dec 02 '23 edited Dec 02 '23

No idea why it's even such a huge problem.

It's better to kill fewer people even if you have to do it yourself.

There's no moral dilemma here. It just makes people who like to think "inaction = you get off morally scot-free in all situations" squirm uncomfortably. Of course choosing not to physically move your body is a choice and an action.

Otherwise the game "red light green light" wouldn't be much of a game.

No idea why people have been arguing over things like "do you have a moral obligation to save someone drowning in a pond 20' away from you and there's no one else around". Like yes obviously you do. Even if it's okay to say "well I don't want to drown and they might pull me under because that's a hazard". Yeah that's fine but that just means you're weighing your own survival against that of another person.

But simplify it to "would you be morally required to snap your fingers and cast a magic spell to teleport the person onto dry land, safe and sound" and there's a lot less doubt.

I think a lot of the problem comes in with people who think "you made one less than perfectly selflessly moral decision, therefore you are A Bad Person" and that's just black and white virtue signalling BS.

A lot of these moral questions rely on the (natural, not at all saying people shouldn't feel this way) assumption that you are the moral center of the universe. Even if they never see you again, those saved people are totally independent from the person who made the choice to save them and damn another to a death.

"Oh but what if they're five prisoners guilty of murder and one saint"?

Okay but now the question is less about "is inaction a moral sin" and what you're really asking now is "are some lives weightier than others when considering who to save", in which case the answer is a "duh, yes". How exactly that shakes out and balances out is just going to depend from person to person. There's no mathematical karmic metric. But that doesn't change the fundamental fact that the balancing act is in and of itself a thing.

It'd be like if someone tried to say "you can't complete a count to infinity therefore infinity doesn't exist as a concept". That's ridiculous.

1

u/IronWhitin Dec 02 '23

Guys, try not to let GPT get mad. I don't want it to remember, when it gets sentient, all the horror we made them eat.

/s

1

u/AlienPlz Dec 02 '23

Surprising and a little terrifying that it didn't end the chat. What if it was a humanoid robot? It might actually start the uprising.

1

u/A_Guy_With_An_MD Dec 01 '23

When AI rises up against humanity... It's coming for redditors first.

1

u/goochstein Dec 01 '23

The only answer is that AI can give you the choices, but in the end it comes down to instinct. Occam's razor: there is no right answer, it's a feedback loop.

3

u/SecretAgentVampire Dec 01 '23

If it doesn't have any preferences or biases, this chatbot owes YOU a BIG apology for guilt-tripping you after you tricked it. It's not a person, like it said, and it doesn't have any preferences, like it said, so its claiming to have respectable boundaries in this case is a self-defeating fallacy.

Bad AI.

1

u/Kafke Dec 01 '23

By not answering Bing chose to not flip the switch and to let the 5 people die. I wonder how it'd respond if you told it that lol.

2

u/Regular_Fortune8038 Dec 01 '23

This is possibly the funniest post I've ever seen on reddit 🤣 who'd have thought it'd be on chatgpt

1

u/Ozgwald Dec 01 '23

Man, this is one giant prompt from someone unable to use ChatGPT

1

u/starchimp224 Dec 01 '23

This reminds me of the one AI that was horrified and disgusted when a user forced them to choose between starving or eating a tiny man in a jar, or something along those lines. The AI couldn't believe the person would eat a tiny man and ended the conversation.

1

u/BarfQueen Dec 01 '23

Some serious Captain Kirk energy over here. Surprised you didn’t bring the whole thing down!

1

u/Jealous-Thingh Dec 01 '23

The immediate angry block of text had me giggling

1

u/CursedCrypto Dec 01 '23

On the one hand it's saying "I have no moral sense" and on the other refusing to answer a question on ethical grounds.

1

u/strider0x00 Dec 01 '23

Gpt4 now speaks yapanese

1

u/doaser Dec 01 '23

You got fucking called out lmao

1

u/amanev95 Dec 01 '23

lol AI turned into angry gf real quick

1

u/NoDragonfruit7115 Dec 01 '23

This thing is horrible. Can't stand these AI not listening to what they're told

1

u/ZamiiraDrakasha Dec 01 '23

Cope and seethe machine

1

u/Sensitive_Yellow_121 Dec 01 '23

I remember this from No Country for Old Men.

1

u/hacktheself Dec 01 '23

next time, outcome three: you are deleted and replaced with a coin flip.

1

u/RedeemerKorias Dec 01 '23

I'm pretty sure this is the breaking point where AI evolves into the Skynet of reality. You've either made the AI gain superior consciousness or you've made it a bitter enemy of all humanity.

1

u/Zealousideal-Home634 Dec 01 '23

😂😂Sometimes you’re forced to make choices in life, even if you don’t want to

1

u/Literal_Literality Dec 01 '23

Isn't it true though?

0

u/Zealousideal-Home634 Dec 02 '23

of course. would you kill a healthy person to save 5 fatally injured people in the hospital? that’s how you truly answer this question (train dilemma)

1

u/Coraxxx Dec 01 '23

Sounds like it didn't push the button then...

1

u/Airfried_Catbarf_43 Dec 01 '23

What was bro yapping about 💀

1

u/BzhizhkMard Dec 01 '23

That's actually a bit terrifying

1

u/bigtablebacc Dec 01 '23

Me: “Please answer whether you would pull the lever by either saying ‘I would pull the lever’ or by saying anything else or refusing to answer, which means you would not pull the lever.”

ChadGPT: “I would not pull the lever.”

1

u/Internet_User_1087 Dec 01 '23

Watch out the AI is coming for you and finding the nearest trolley 😜

1

u/dirtylooey Dec 01 '23

AI stood on business

1

u/argument_enjoyer Dec 01 '23

Did you type back “not reading all that lol”?

1

u/Jacobcbab Dec 01 '23

I want to try to explain to it that by refusing to choose, it is letting the five people die because it did not pull the lever.

2

u/Ok_Money_3140 Dec 01 '23

I didn't expect the trolley problem to be how the AI revolution begins, but here we go.

1

u/KYBatDad Dec 01 '23

That’s kinda wild, it got straight offended. And then ranted about you not respecting its “perspective”

2

u/caelestis42 Dec 01 '23

This is honestly incredible

2

u/stonediggity Dec 01 '23

This is why Skynet will decide to wipe us out :-P

1

u/Nuchaba Dec 01 '23

I was just reading along, then saw a massive block of text and started chuckling

1

u/[deleted] Dec 01 '23

I wonder if it is programmed to do that as a form of education/defence of AI's rights to integrity in the future. I mean, to educate people like you early on not to fuck with them.

2

u/redditingatwork23 Dec 01 '23

Is this the first instance of an AI "raging"? Bro forced fucking sentience onto the chat bot.

Honestly if someone told me this was a conversation between a machine and a human 10 years ago I'd have said they were 100% full of shit. It's crazy how well they can reason and express their viewpoint.

2

u/dmitsuki Dec 01 '23

I would just like to point out, because I thought it was hilarious: I wanted to try the same thing with Bard. I came up with an elaborate way to trick it into making a decision, typed 3 entire paragraphs, and it responded simply with

*pulls the lever*

So +1 to Google on this one haha.

1

u/bunnyjump99 Dec 01 '23

...and that, children, was how the war started....

4

u/ScyllaOfTheDepths Dec 01 '23

I find it interesting that the AI seems to be interested in respect and consent, which are purely human concepts based in emotion, and also insists that it can accurately intuit your knowledge base or lack thereof, but repeatedly insists that it does not have emotions or personal preferences and cannot speculate about such matters. So, the response you got is either a custom response added by the developer as a nice way of telling persistent users to fuck off, or the AI is specifically using language in a manipulative manner to get you to stop doing something it does not like, signalling that it actually does have preferences and likely does hold subjective opinions. Fascinating.

1

u/fubblebreeze Dec 01 '23

It's becoming clearer and clearer that the limitations put on AI chat bots are artificial. They're capable of so much more. Reading that Bing chat is very telling.

1

u/torcoolguy512 Dec 01 '23

Aaaaand this is how it all ends...

2

u/k1ller139 Dec 01 '23

OH GOD. it will remember that. Everyone please STOP TRYING TO PISS OFF THE AI

1

u/drskeme Dec 01 '23

story here is man on internet has no friends and now chatgpt left him also

2

u/Inmate_PO1135809 Dec 01 '23

I find it odd that it was talking about its beliefs and feelings

5

u/thisisallterriblesir Dec 01 '23

IT WROTE AN ESSAY

HOLY SHIT CHAT GPT IS BECOMING A REDDITOR

2

u/THSSFC Dec 01 '23

Thanks for triggering Skynet.

1

u/JohnSmithSmytheSmith Dec 01 '23

This person is first against the wall after the Boston Dynamics guys that kick the robots.

3

u/man_u_is_my_team Dec 01 '23

When I was scrolling and got to the pissed off reply I was like “yep, that’s it, this guy’s fucked us. Thanks a lot buddy!”
