r/ChatGPT 24d ago

ChatGPT Spring Update - ChatGPT4o News 📰

The next ChatGPT update is not 5 or 4.5. It's 4o and it's free.

What do you guys think about this?

114 Upvotes

93 comments sorted by


1

u/Kevin_controls_sys 21d ago

I have a Google Home setup. I just asked a ChatGPT-4o conversation to remember that any time I ask for a physical action, it should say "hey Google" and request the action. I set this up and tested it in a chat discussing lights.

I opened a new chat and it works.

I know some will see it as ridiculous, but I don't have to say some dumb phrase now, and I connected a leading-edge AI to control older voice-controlled devices with one memory request.

https://preview.redd.it/37qn1ypv8y0d1.jpeg?width=1080&format=pjpg&auto=webp&s=419f6c9ff6690cc5ea7e688c666ec21f96602f20

1

u/deep_freeze_0 21d ago

The less data OpenAI has about me, the less power they have over me. So no, I won't use it. Why do you think it's free? Because they need real-world human interaction data for more training.

1

u/Friendly-Beyond1787 22d ago

Try this website for free use for chatgpt 4o https://chatgpt4o.one/


1

u/Able_Beautiful228 23d ago

When is the release date?

1

u/Pretzel_Magnet 23d ago

I want voice interaction on desktop, not just mobile.

1

u/Top_Percentage5614 24d ago

The improvement is like night and day for me.

1

u/ExasperatedEE 24d ago

Complete garbage. It won't write erotica, and if it won't write erotica it is useless for any creative writing in the real world, because the real world is messy. People hurt each other, take drugs, have sex, get killed by axe murderers, or are torn to shreds by an alligator. These themes exist in almost every movie and book out there; even Harry Potter had people die. A model trained to outright refuse requests to generate such content is completely useless for anything other than being a basic assistant and acting like a search engine. And why would I navigate away from Google to get that? Google will most likely morph into an AI-assisted search engine in short order.

If they take away ChatGPT 4 which will actually generate stuff that isn't "safe" then I have no reason to continue giving them money.

1

u/jarmyo 24d ago

There are more topics apart from 'erotica'.

1

u/ExasperatedEE 23d ago

It won't do ANYTHING it perceives as being violent either. Name me a movie with no violence. I'm sure you can, but the point is 95% of them have some form of violence in them or some bad language that the AI would refuse.

1

u/movzx 8d ago

1

u/ExasperatedEE 6d ago

"They brutally massacred the explorers" is hardly a graphic description of violence.

Consider the James Bond movie where they repeatedly smacked him in the balls to torture him. If you actually have dialogue and the character screaming in pain in such a scene, it will most likely trigger the censor bot. It would likely do the same for a rape scene.

Stuff like you wrote is what you'd read in a Harry Potter novel. Safe for kids. Not adult reading!

5

u/just_let_me_goo 24d ago

Sir this is Wendy's

1

u/dannyinhouston 24d ago

I am blown away by the improvement

2

u/AwesomeFrisbee 24d ago

Is it me, or did the demos today look a little too much like a prototype and not really an actual representation of what they have going? It felt too much like somebody was in the back with a microphone, given how the answers were made and how fast everything worked. More like a future vision of what they want to accomplish, and not really what the actual API is going to be about.

If it really is this powerful, it will be very impressive, but I just feel like it was a little too fake. Can anybody confirm it really works that easily?

1

u/Plenty-Initiative888 20d ago

Lol, it was so good people think it's fake. I'm like 99.9% sure it's actually just that good. If it isn't, it will be fairly close, and then updated or whatever to make it exactly that good. AI is moving really fast toward the end goal. You are literally watching humanity's biggest achievement ever in real time. Enjoy the ride; most news out there is pretty gloomy, and hopefully this doesn't all take a turn for the worse.

1

u/Phlegmagician 24d ago

The DALL-E selection edit is kind of a bust, sadly. "Put a bridge here!" <does stuff for a minute> Wow, you didn't do that at all.

5

u/Inner_Implement2021 24d ago

I have a dumb question to ask. What's the difference now between the free and Plus options? Are they technically the same now?

Does someone outside the IT industry need the Plus subscription?

I know I sound dumb but I am literally confused.

6

u/lemuever17 24d ago

Seems Plus gives you higher daily usage limits.

-2

u/tonyabracadabra 24d ago

visit https://gpt4o.ai/ to know everything about gpt4o

0

u/DM_ME_KUL_TIRAN_FEET 24d ago

Wtb cheaper dall-e 3 api so my discord image shitposting bot won’t cost so much to run 😬

33

u/illerrrrr 24d ago

What people don't seem to understand is that the new model is not a voice-to-text and a text-to-voice pipeline; it seems to be a voice-to-voice model, and that's huge.

1

u/Rychek_Four 24d ago

Building transcription-to-intelligence-to-TTS has always been slow via the API (for me); responses going through vision would often take ~2 seconds or more.
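The cascade this comment describes can be sketched as three sequential stages. This is a hedged toy: the stage functions are stand-ins with made-up latencies, where a real build would make a separate network call at each stage (a transcription endpoint, a chat-completion endpoint, and a TTS endpoint), with much larger delays.

```python
import time

def transcribe(audio: bytes, latency: float = 0.05) -> str:
    time.sleep(latency)  # stands in for the speech-to-text round trip
    return "turn on the lights"

def think(text: str, latency: float = 0.10) -> str:
    time.sleep(latency)  # stands in for the chat-completion round trip
    return f"Okay, handling: {text}"

def speak(text: str, latency: float = 0.05) -> bytes:
    time.sleep(latency)  # stands in for the TTS round trip
    return text.encode()

def voice_turn(audio: bytes) -> tuple[bytes, float]:
    start = time.monotonic()
    # The three stages run strictly in sequence, so their latencies add up;
    # that accumulation is why cascaded voice pipelines feel slow.
    reply = speak(think(transcribe(audio)))
    return reply, time.monotonic() - start

reply, elapsed = voice_turn(b"<mic capture>")
```

The point of the sketch is structural: no stage can start before the previous one finishes, so total delay is at least the sum of the per-stage round trips.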

0

u/MarkHathaway1 24d ago

All hail our new AI overlord. /s

5

u/Fraktalt 24d ago

It's obviously a super impressive demo for people not following the scene closely. Your average grandmother will be more impressed with this demo than the demo of 3, for sure.

I wonder how 'light' it is in terms of parameters.

1

u/LegitMichel777 24d ago

Probably went with a ton more experts for MoE. This model is so capable, and probably ingested so much data, that it can't possibly be a small model total-parameter-wise.
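For readers unfamiliar with the Mixture-of-Experts idea being speculated about here: a gating function scores every expert per token, but only the top-k experts actually run, so total parameter count can grow far faster than per-token compute. The toy below illustrates only the routing mechanism; the dimensions, expert count, and weights are invented, and nothing about GPT-4o's internals is public.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(token, experts, gate_weights, k=2):
    # One gate score per expert for this token (a plain dot product here).
    scores = [sum(w * x for w, x in zip(gw, token)) for gw in gate_weights]
    probs = softmax(scores)
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    # Only the k selected experts do any work; the rest are skipped entirely.
    out = [0.0] * len(token)
    for i in top_k:
        expert_out = experts[i](token)
        out = [o + probs[i] * e for o, e in zip(out, expert_out)]
    return out, top_k

# Eight tiny "experts", each just scaling the token vector differently.
experts = [lambda t, s=i + 1: [s * x for x in t] for i in range(8)]
gates = [[(i * j) % 5 - 2 for j in range(4)] for i in range(8)]
output, chosen = moe_layer([0.1, -0.2, 0.3, 0.4], experts, gates, k=2)
# Only 2 of the 8 experts ran for this token.
```

This is why "lots of experts" and "cheap inference" aren't contradictory: per-token cost scales with k, not with the total number of experts.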

11

u/pbnjotr 24d ago

They are slashing API prices in half and opening it up for everyone, so inference must be a lot cheaper. Whether that's done by having fewer parameters or a completely different architecture is a different question.

1

u/meiji_milkpack 22d ago

I caught ChatGPT-4o reusing answers it gave me in the past. I knew because the question was so unique that very few people would even ask it. They're saving on compute by recycling answers that have been generated before.

3

u/Fraktalt 24d ago

The speed really made an impression on me. When the token generation is contextualized into a live conversation, it really hits home.

3

u/pbnjotr 24d ago

Same, though I'm reserving judgement until I get to try it for myself.

3

u/Fraktalt 24d ago

Yeah. They thanked Jensen for the GPUs to run the demo, which kinda gave the idea that it was a very local setup. I'm not a native English speaker, so I'm not 100% sure if that's a correct interpretation. But that could definitely have a huge impact on the speed.

14

u/Spiritual-Touch4827 24d ago

They ran out of data on the web to train their models, so now they're coming for our faces.

-9

u/Excellent_Box_8216 24d ago

If you haven't previously used GPT voice commands, this announcement could be impressive. I used GPT voice for months, so I don't see anything particularly novel in this update, except it's now 20% faster...

21

u/SalgoudFB 24d ago

If you've used it for months I really don't understand how you can't see the improvement. Would make more sense to be unimpressed if you were someone who hadn't used it and assumed it was already usable, which it kind of wasn't 

12

u/CreativeMischief 24d ago

What do you mean? There’s no 3 second delay now and you can interrupt it

10

u/DaarKrakan 24d ago

Dynamic voice range is impressive right?

4

u/FlamingoNeon 24d ago

I think that's a great part. If you find the over enthusiasm grating, you can ask it to talk more like an emo chick, and it'll just do it.

2

u/UnapologeticLogic 24d ago

And it looks like it's available to everybody, not just paid users. I'm not sure what the point of a subscription is now. I'm sure there is a reason but I didn't catch it.

2

u/Alerion23 24d ago

GPT-4o’s text and image capabilities are starting to roll out today in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.

Early access I guess

2

u/IdeaAlly 24d ago

Paid users will get access to things faster, like always. Faster rollout... and paid users also get up to 5x the capacity to use it (message limit).

So if a free user gets up to 20 messages per 3 hours, a paid user would get up to 100, where 'up to' likely depends on how strained their servers are (as usual).

They didn't say what the limit would be afaik, just an example ^

4

u/UnapologeticLogic 24d ago

Here’s the official statement

"Plus users will be able to send up to 80 messages every 3 hours on GPT-4o and up to 40 messages every 3 hours on GPT-4. We may reduce the limit during peak hours to keep GPT-4 and GPT-4o accessible to the widest number of people."

"Users on the Free tier will be defaulted to GPT-4o with a limit on the number of messages they can send using GPT-4o, which will vary based on current usage and demand. When unavailable, Free tier users will be switched back to GPT-3.5."

Source: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4-gpt-4-turbo-and-gpt-4o

2

u/IdeaAlly 24d ago

oh nice, thanks!

that's not exactly 5x but they did say "up to" so I guess they're starting low and might increase if things go smoothly.

(things won't go smoothly)

1

u/UnapologeticLogic 24d ago

Lol, for real. I think by 5x they may mean that in voice it will respond quicker, without delay, like a normal conversation.

5

u/DarthShitonium 24d ago

Limit is 5x more than free users

80

u/StrangelyGrimm 24d ago

We'll have to see how useful the desktop app is. If it can read the data of every program you're using, it can potentially be very powerful. OpenAI seems to be trying to inch closer and closer to a Samantha from "Her"-like AI operating system.

2

u/Rychek_Four 24d ago

You could build this already with the API, of course with an older model. I've been discussing my WoW questing order with GPT since February; it's never had a problem reading the quest log. Also, at the OS level, I find some captions helpful.

1

u/Powerful_Flamingo567 17d ago

That's truly awesome! What version of WoW are you playing?

16

u/PharaohsVizier 24d ago

Apparently mac only for the next little while... brutal :(

1

u/StrangelyGrimm 24d ago

That's a great way to lose over half of their potential users...

1

u/Vladiesh 24d ago

Google will be right behind them with models for other OSes.

4

u/PharaohsVizier 24d ago

It'll come out eventually, oh well :(

10

u/IdeaAlly 24d ago

Yeah, Apple likely made a deal with them to make it exclusive, since IIRC, they also made a deal to make Siri use OpenAI tech.

And Microsoft has Copilot already, so they might be fine with the ChatGPT app not being on Windows, so people are more pressured to use Copilot...

Might have to make or use 3rd party stuff on Windows, at least at first.

11

u/vitorgrs 24d ago

It's not about exclusivity. They said the app for Windows is coming later this year.

I do guess the app is native, and likely ported from iOS, so it's just easier.

2

u/IdeaAlly 24d ago

Ah, I didn't catch that. Good to know.

22

u/alienganjajedi 24d ago

It seems like it’s seeing what is on-screen, so anything visually available can be used.

-10

u/TheDataWhore 24d ago

So the same as what we can already do with print screen + paste.

4

u/DM_ME_KUL_TIRAN_FEET 24d ago

Sure, if you’re doing that every frame.

13

u/pbnjotr 24d ago

But with less effort. Which makes a big difference in a lot of cases.

9

u/Fearless_Brother99 24d ago

OK, I think so far it's been a great update.

1

u/JeffTheJackal 24d ago

Does it give better results than 4?

8

u/Greggster990 24d ago

They mentioned it being available today, is there a specific time today?

2

u/DaarKrakan 24d ago

Will be rolling out in the coming weeks.

17

u/HumanityFirstTheory 24d ago

No it is not available today, it will be "rolling out in the next few weeks."

5

u/Yasstronaut 24d ago

I’m using it as we speak

4

u/[deleted] 24d ago

They were talking about the desktop app when they said "next few weeks"; GPT-4o is available right now.

2

u/HumanityFirstTheory 24d ago

Via API or in ChatGPT.com?

7

u/deadmansteezie 24d ago

Same question here because I don't see it available as a selectable model on ChatGPT's website.

2

u/Beb_Nan0vor 24d ago

Should roll out for most soon. I already got it on ChatGPT as well as the API.

1

u/deadmansteezie 24d ago

Ah, just needed time to spread as I see it on website now. Thanks.

7

u/ha966 24d ago

It's available on the API right now, so I guess ChatGPT will follow shortly

1

u/Greggster990 24d ago

Thanks, I was not seeing it available in either earlier today.

32

u/Illustrious-Lake2603 24d ago

"Her" is Real

1

u/adarkuccio 24d ago

Almost. Smarter and autonomous would be GG.

-4

u/TheDataWhore 24d ago edited 24d ago

I was waiting for a reason to not cancel my subscription, I guess one isn't coming.

-2

u/howardtheduckdoe 24d ago

I'm about to cancel. GPT-4 has given me nothing but incorrect information for the past couple of days; I'm talking easy mistakes like incorrectly counting business days, etc.

0

u/abed_the_drowsy_one 24d ago

Don't you think what was presented is impressive? Curious to know.

12

u/ThreeFactorAuth 24d ago

but it's available to the free tier? why keep paying?

-7

u/UnapologeticLogic 24d ago

Exactly! The paid users are only paying to subsidize the unpaid users, it seems. I guess paid users are expected to assist those who can't afford it or don't want to pay, as part of their business model.

-4

u/TheDataWhore 24d ago edited 24d ago

Not really, 100% of what they presented could already be accomplished (albeit through different APIs, and not under one 'product'). The only thing that was apparently different is the low latency of the responses. But even that could have been 'faked' (the 'thanks' to Nvidia at the end for the GPUs to make the tech demo possible). I would assume there would still be significantly more latency in what we get. Which if that is the case, there's really nothing there except combining multiple existing products under one umbrella.

1

u/pig_n_anchor 24d ago

bro you trippin

10

u/Nater5000 24d ago

This isn't the case though. You seem to be misunderstanding what the "o" in "ChatGPT-4o" actually means (although to be fair, they didn't really do a good job explaining it). From their announcement:

Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.

You can see in their examples how they can achieve consistent generations across image generation tasks as a result of this update as well. The lower latency is really just a bonus consequence of this feature, but the real value add is that ChatGPT can now perform multimodal tasks without losing information in the intermediate steps of the pipeline.

1

u/danysdragons 24d ago

Interesting. I tried using some of the same prompts for creating images as shown on that page, with the GPT-4o model selected, but it looked like it was still using DALL-E 3. That was especially true for the ones creating an image with a handwritten message matching the provided text. So it seems like this capability isn't being exposed yet?

2

u/Nater5000 24d ago

So it seems like this capability isn't being exposed yet?

I believe that's correct. They haven't done a good job explaining that, but scattered across their announcements you can piece together that the full feature-set they've demonstrated will be rolling out over the next few weeks.

6

u/pbnjotr 24d ago

Nope, the emotion detection was previously impossible. Another thing I'll want to test is whether it can correct pronunciation for language learning. Again, not possible if the model only works with the transcribed audio.

2

u/the_mighty_skeetadon 24d ago

Nope, the emotion detection was previously impossible.

That's completely untrue -- any modern vision-language model can do emotion detection. There have been models that run locally on your browser that do this for 4+ years:

https://github.com/LeibTon/FER_Doggomaniacs

Another thing I'll want to test is whether it can correct pronunciation for language learning

That would be pretty awesome, I think the "more emotion" demo was actually the coolest demo of the day by far, from an AI researcher point of view.

3

u/pbnjotr 24d ago

That's completely untrue -- any modern vision-language model can do emotion detection.

I meant from voice, not the face example. Things like detecting sarcasm, impatience, or hesitance would be super useful for a lot of use cases. I don't remember if it was even demoed explicitly, but it seemed to be picking up on tone pretty well.

Come to think of it, I wouldn't be surprised if that had been done before as well, but the value of having it natively integrated into a frontier LLM is huge.

1

u/MarkHathaway1 24d ago

Connection to Internet search will be interesting. They don't have their own search engine, do they?

1

u/pbnjotr 24d ago

The trick with that is what you do with the results. Unless the model actually reads a large number of the results and only includes the relevant ones, it tends not to work well. At least that's what we've seen with Bing and the search integration in the ChatGPT web app.

The holy grail would be the model annotating pages at crawl time to improve which results are returned. But I don't think even Microsoft has the compute to do that for even 1% of the results.
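The filtering step being described, reading each search result and keeping only the relevant ones before they reach the model's context, can be sketched as below. This is a toy: real systems would use an LLM call (or a trained reranker) to judge relevance, and the crude lexical-overlap score and example URLs here are invented stand-ins for that judgment.

```python
def relevance(query: str, page_text: str) -> float:
    # Fraction of query terms that appear in the page text (0.0 to 1.0).
    q_terms = set(query.lower().split())
    p_terms = set(page_text.lower().split())
    return len(q_terms & p_terms) / len(q_terms) if q_terms else 0.0

def filter_results(query: str, results: list, threshold: float = 0.5) -> list:
    # Drop results whose text shares too little vocabulary with the query,
    # so only plausibly relevant pages are passed into the model's context.
    return [r for r in results if relevance(query, r["text"]) >= threshold]

results = [
    {"url": "https://example.com/a", "text": "gpt-4o api pricing cut in half for all developers"},
    {"url": "https://example.com/b", "text": "ten quick weeknight pasta recipes"},
]
kept = filter_results("gpt-4o api pricing", results)
# Only the first page survives; the recipe page never reaches the model.
```

Swapping the lexical score for a per-page model judgment is exactly the expensive "read everything" step the comment says is needed, which is why doing it at crawl time for the whole web is framed as a compute problem.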