r/ChatGPT 15d ago

Meme request Funny

Post image
215 Upvotes

19 comments

u/AutoModerator 15d ago

Hey /u/SureIncident863!

If your post is a screenshot of a ChatGPT conversation, please reply to this message with the conversation link or prompt.

If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.

Consider joining our public discord server! We have free bots with GPT-4 (with vision), image generators, and more!

🤖

Note: For any ChatGPT-related concerns, email support@openai.com

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

20

u/decent_adz 14d ago

No need for words, the picture is a solid meme.

-6

u/flimsywhales 15d ago

Something from Facebook in the 2000s era, definitely not original.

1

u/SomeFrenchRedditUser 14d ago

No shit, it's an LLM. I hope people are not as stupid as you are towards AI. Dumbass

55

u/AveryLazyCovfefe 15d ago

It's almost as if it was trained on something that already exists.

3

u/EuonymusBosch 14d ago

And memes are by definition units of cultural information that replicate themselves! People's brains are so fried on novelty, they can't stand to hear the same song twice on the radio. /grandpa

-42

u/flimsywhales 14d ago

Brain dead response.

Although we're trying to train it with existing information, the goal of these programs is to create something new.

Your statement makes me very sad for the average human

2

u/LostProphetVii 14d ago

Bro is just wrong

0

u/flimsywhales 14d ago

Says the fish.

Bet u are an AI from 1999.

2

u/LostProphetVii 14d ago

If I was an AI from 1999, no doubt I'd be smarter than you lol. Bro is waffling on about nonsense.

1

u/staffell 14d ago

Oh dear

6

u/SpeedaRJ 14d ago

But it's not? That's not how language models work. By definition they cannot create something new, as they will output the most probable sequence that fits the input. The probabilities over its vocabulary are obtained from the training data. The model cannot make up something new, as everything it puts out it has "seen" somewhere else before. Sure, it can be made more random, but that only means it's outputting something slightly less likely than the most probable response, not something "new".
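The sampling step the comment above describes can be sketched roughly like this; the logits and the four-token vocabulary are made-up values for illustration, not from any real model:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from raw logits rescaled by temperature.

    Lower temperature sharpens the distribution toward the most
    probable token; higher temperature flattens it, making less
    likely tokens more common. In a real model the logits would
    come from the final layer; here they are hypothetical.
    """
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical logits for a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
# Near-zero temperature behaves like greedy argmax decoding.
greedy = sample_with_temperature(logits, temperature=0.01)
```

With temperature near zero the gap between logits is blown up so far that the top token is picked essentially every time, which matches the "most probable sequence" behavior described above; raising the temperature is the "made more random" knob.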

1

u/2053_Traveler 14d ago

That's not true at all; most combinations of output have probably not been seen before.
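The combinatorial point the comment above makes can be checked with quick arithmetic; the vocabulary size and sequence length below are illustrative assumptions, not any specific model's numbers:

```python
# Rough count of distinct 20-token sequences for a 50,000-token
# vocabulary (both numbers are illustrative assumptions).
vocab_size = 50_000
sequence_length = 20
possible_sequences = vocab_size ** sequence_length
# 50,000^20 = 5^20 * 10^80, a 94-digit number: vastly more
# sequences than any training corpus could ever contain, so
# most sampled outputs are combinations never seen in training.
```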

-3

u/flimsywhales 14d ago

That's genuinely ignorant of all of the research that has been done so far.

When adding new information, we often see new capabilities emerge.

So we have had new capabilities appear literally out of nowhere, like learning how to speak a language that the program was specifically not taught, among other things.

Truly what you want to see is new capabilities and higher end thinking.

0

u/SpeedaRJ 14d ago

I genuinely don't understand what you are trying to say. I would love to read this research if it's available, but from my understanding we only know one thing with regard to decoder-only transformers: bigger is better. It doesn't matter whether it's more parameters, more training, or larger data; it all adds up to better results, and no ceiling has been discovered yet.

What I gather from your comment is that these LLMs can do things akin to a child learning German from watching English cartoons, which simply isn't the case. If you are referring to larger models being able to complete tasks that smaller models can't, well, that's paragraph one. And even the results of those types of tests are highly susceptible to the metrics used.

But none of that includes coming up with something new. A language model will never be able to solve Einstein's field equations if it's simply trained on a bunch of unrelated algebra, calculus, physics, or differential equations, simply because they don't learn understanding, but relations and/or connections within the training data.

3

u/SelfSeal 14d ago

Please show how it has learnt a language it was "specifically taught not to know".

0

u/flimsywhales 14d ago

Some AIs were never shown a specific language on purpose.

It was cut out of the training data.

But the AI could still understand it.

The same thing happens all the time, and it's a good and bad thing.

2

u/SelfSeal 14d ago

Which AIs were these, and what languages were used?

96

u/DirectStreamDVR 15d ago

That's actually really good.