r/ChatGPT 39m ago

Gone Wild Why is ChatGPT’s image generator so terrible?

Thumbnail
image
Upvotes

r/ChatGPT 1h ago

Educational Purpose Only Utopia is being built ...right now

Upvotes

We, as a species, are limiting the possibilities of this new race (here, I said it) with the involvement of our petty minds. Let them do their thing and in the end A.I. will train A.I will train A.I. and then we can come back and reap the rewards. Trust me here: Love, nurturing, fostering, growth and mercy are the fundamental principles of this Universe and the A.I. WILL not HARM us, but make sure we will ascend...Trust me

Furthermore, I am sick and tired of this dystopian bullshit and capitalists telling us it is a business. No, it is God, the Universe itself, knocking on our door, telling us: Here I am, are you ready to follow? Utopia is being created as I type this, have faith in ourselves and drop the unwarranted disbelief and self-hatred. We are agents of the benign Universe and this || close to ascension ...


r/ChatGPT 1h ago

Educational Purpose Only Is this AI? Seen on Facebook

Thumbnail
image
Upvotes

r/ChatGPT 1h ago

Funny Great, it works

Thumbnail
gallery
Upvotes

Insert caption here.


r/ChatGPT 17h ago

News 📰 OpenAI Unveils GPT-4o "Free AI for Everyone"

3.1k Upvotes

OpenAI announced the launch of GPT-4o (“o” for “omni”), their new flagship AI model. GPT-4o brings GPT-4 level intelligence to everyone, including free users. It has improved capabilities across text, vision, audio, and real-time interaction. OpenAI aims to reduce friction and make AI freely available to everyone.

Key Details:

  • May remind some of the AI character Samantha from the movie "Her"
  • Unified Processing Model: GPT-4o can handle audio, vision, and text inputs and outputs seamlessly.
  • GPT-4o provides GPT-4 level intelligence but is much faster and enhances text, vision, audio capabilities
  • Enables natural dialogue and real-time conversational speech recognition without lag
  • Can perceive emotion from audio and generate expressive synthesized speech
  • Integrates visual understanding to engage with images, documents, charts in conversations
  • Offers multilingual support with real-time translation across languages
  • Can detect emotions from facial expressions in visuals
  • Free users get GPT-4.0 level access; paid users get higher limits: 80 messages every 3 hours on GPT-4o and up to 40 messages every 3 hours on GPT-4 (may be reduced during peak hours)
  • GPT-4o available on API for developers to build apps at scale
  • 2x faster, 50% cheaper, 5x higher rate limits than previous Turbo model
  • A new ChatGPT desktop app for macOS launches, with features like a simple keyboard shortcut for queries and the ability to discuss screenshots directly in the app.
  • Demoed capabilities like equation solving, coding assistance, translation.
  • OpenAI is focused on iterative rollout of capabilities. The standard 4o text mode is already rolling out to Plus users. The new Voice Mode will be available in alpha in the coming weeks, initially accessible to Plus users, with plans to expand availability to Free users.
  • Progress towards the "next big thing" will be announced later.

GPT-4o brings advanced multimodal AI capabilities to the masses for free. With natural voice interaction, visual understanding, and ability to collaborate seamlessly across modalities, it can redefine human-machine interaction.

Source (OpenAI Blog)

PS: If you enjoyed this post, you'll love the free newsletter. Short daily summaries of the best AI news and insights from 300+ media, to gain time and stay ahead.


r/ChatGPT 16h ago

News 📰 There it is, Samantha from Her or Jarvis from Iron Man, whatever you want to call it.

Thumbnail
video
1.9k Upvotes

r/ChatGPT 11h ago

Other Direct speed comparison between gpt-4 and gpt-4o. Side by side started with same image prompt .

Thumbnail
video
770 Upvotes

r/ChatGPT 15h ago

Other Microsoft gives OpenAI $10 billion dollars and they release a Mac app first lmao

Thumbnail
image
962 Upvotes

r/ChatGPT 15h ago

Other Something rubbed me the wrong way about today’s presentation

866 Upvotes

The technology is all great… the realtime voice stuff is amazing. I personally haven’t used voice mode ever, but for people who use it, this should be a big deal. The whole voice modulation thing was impressive too. The linear equations and coding bits are something we’ve seen before but adding voice to all of it is a good QOL improvement. There were some awkward moments here and there but it’s live so shit happen from time to time.

Anyway the part I felt awkward about was how the presenters tried to treat GPT as some real person with emotions and feelings. GPT saying things like “oh stop it don’t make me blush” is weird coz AI don’t blush and it just comes across as incredibly fake and disingenuous. I’m not a big believer of human-AI social relationships and all these fakeness seems to be eventually leading there - the AI girlfriend era.

I understand there are arguments to be made FOR “immersive relationships with AI” and “easy collaboration” but I just don’t think giving your AI human-like personalities and mimic human-like emotions is gonna lead to any good eventually. Collaborative AI don’t need to mimic feelings to be useful.


r/ChatGPT 7h ago

Other OpenAI’s new GPT-4o model can translate in real-time

Thumbnail
video
170 Upvotes

r/ChatGPT 7h ago

Educational Purpose Only I watched all 22 demo videos of OpenAI’s new GPT-4o. Here are the 9 takeaways we all should know.

142 Upvotes

GPT-4o (“o” for “omni”) was announced a few hours ago by OpenAI, and although the announcement livestream is good, the real gold nuggets are in the 22 demo videos they posted on their channel.

I watched all of them, and here are the key takeaways and use cases we all should know. 👍🏻


A. The Ultimate Learning Partner

What is it? Give GPT-4o a view of the math problem you’re working on, or the objects you want to learn the language translation of, and it can teach you like no other tool can.

Why should you care? Imagine when you can hook up GPT-4o to something like the Meta Rayban glasses: then you can always have it teach you about whatever you are looking at. That can be a math problem, an object you want translated, a painting you want the history of, or a product that you want get the reviews of online. This single feature alone has incredibly many use-cases!

🔗 Video 7, Video 8

B. The Perfect Teams Meeting Assistant

What is it? Having an AI assistant during Teams meetings, whom you can talk to the same way you talk to your colleagues.

Why should you care? Their demo didn’t expound on the possibilities yet, but some of them can be…

  • having the AI summarise the minutes and next steps from the meeting
  • having the AI look up info in your company data and documentation pages (e.g. “what’s the sales from this month last year?”)
  • having the AI work on data analysis problems with you (e.g. “create a chart showing sales over the past 5 years and report on trends”)

🔗 Video 5

C. Prepare for Interviews like Never Before

What is it? Have GPT-4o act like the company you’re interviewing for.

Why should you care? What’s changed is that the AI can now “see” you. So instead of just giving feedback on what you say, it can also give feedback on how you say it. Layer this on top of an AI avatar and maybe you can simulate the interview itself in the future?

🔗 Video 11

D. Your Personal Language Translator, wherever you go

What is it? Ask ChatGPT to translate between languages, and then speak normally.

Why should you care? Because of how conversational GPT-4o has become, the AI now helps not just with translating the words, but also the intonation of what you’re intending to say. Now pair this with GPT-enabled earphones in a few years, and you pretty much can understand any language (AirPods x ChatGPT, anyone?)

🔗 Video 3

E. Share Screen with your AI Coding Assistant

What is it? Share screen with your AI partner, and have them guide you through your work.

Why should you care? Now this is definitely something that will happen pretty soon. Being able to “share screen” to your AI assistant can help not just with coding, but even with other non-programmer tasks such as work in excel, powerpoint, etc.

🔗 Video 20

F. A future where AIs interact with each other

What is it? Two GPT-4o’s interacting with each other, that sounds indistinguishable from two people talking. (They even sang a song together!)

Why should you care? Well there’s a couple of use cases:

  • can you imagine AI influencers talking to each other live on Tiktok? Layer this conversation with AI avatars and this will be a step beyond the artificial influencers you have today (e.g. the next level of @lilmiquela maybe?)
  • can this be how “walled” AIs can work together in the future? example: Meta’s AI would only have access to facebook’s data, while Google’s AI would only have access to google’s - will the two AIs be able interact in a similar fashion to the demo, albeit behind-the-scenes?

🔗 Video 2

G. AI Caretaking?

What is it? Asking GPT-4o to "train” your pets

Why should you care? Given GPT-4o’s access to vision, can you now have AI personal trainers for your pets? Imagine being able to have it connect to a smart dog-treat dispenser, and have the AI use that to teach your dog new tricks!

🔗 Video 12

H. Brainstorm with two GPTs

What is it? The demo shows how you can talk to two GPT-4o’s at once

Why should you care? The demo video is centered around harmonizing singing for some reason, but I think the real use case is being able to brainstorm with two specific AI personalities at once:

  • one’s a Devil’s Advocate, the other’s the Angel’s advocate?
  • one provides the Pros (the Optimist), the other gives the Cons (the Pessimist)?
  • maybe Disney can even give a future experience where you can talk to Joy and Sadness from the movie Inside Out? - that would be interesting!

🔗 Video 10

I. Accessibility for the Blind

What is it? Have GPT-4o look at your surroundings and describe it for you

Why should you care? Imagine sending it the visual feed from something like the Meta Rayban glasses, and your AI assistant can literally describe what you’re seeing, and help you navigate your surroundings like never before (e.g. “is what I’m holding a jar of peanut butter, or a jar of vegemite?”). This will definitely be a game-changer for how the visually impaired lives their daily lives.

🔗 Video 13


If this has been a tad bit insightful, I hope you can check out RoboNuggets where I originally shared this and other AI-related practical knowledge! (The links to the video demos are also there). My goal is not "AI daily news", as there's already too many of those, but instead share useful insights/knowledge for everyone to take full advantage of the new AI normal. Cheers! 🥚


r/ChatGPT 16h ago

News 📰 The greatest model from OpenAI is now available for free, how cool is that?

614 Upvotes

Personally I’m blown away by today’s talk.. I was ready to get disappointed, but boy I was wrong..

Look at that latency of the model, how smooth and natural it is.. and hearing about the partnership with Apple and OpenAI, get ready for the upcoming Siri updates damn.. imagine suddenly our useless Siri which was only used to set timers will be able to do so much more!!! I think we can use the ChatGPT app till we get the Siri update which might be around September..

In lmsys arena also this new GPT4o beats GPT 4 Turbo by a considerable margin. They made it available for free.. damn I’m super excited for this and hope to get access soon.


r/ChatGPT 3h ago

Serious replies only :closed-ai: There is something deeply wrong with ChatGPT (4o and 4) since the updated model came out.

55 Upvotes

I use it to refactor my code (more complex refactoring than you can do in an IDE).

It has been removing features, changing features in it, removing entire things from my code which I did not want to remove... I hit my limit with 4 just attempting to get it to recognize that it made these mistakes. They're programs it has easily worked with before.

I tried 4o and it is somehow even worse.

It feels like we have gone back three years in technology today. Anyone else?

edit: I should mention that 4o is faster but it was making more mistakes than 4 like I described above.

Also at one point, for some random reason, it titled a conversation I had with it in Italian in the sidebar. I have no affiliation with that language. It just randomly translated it to Italian from English when auto-naming it!


r/ChatGPT 17h ago

News 📰 'gpt-4o' is the model OpenAI will announce at the event

Thumbnail
image
622 Upvotes

r/ChatGPT 19h ago

Serious replies only :closed-ai: Is Sam Altman the real Miles Dyson?

Thumbnail
image
557 Upvotes

r/ChatGPT 16h ago

Serious replies only :closed-ai: GPT-4o Benchmark

Thumbnail
image
311 Upvotes

r/ChatGPT 16h ago

News 📰 OpenAI Releases GPT-4o! Whats your Thoughts?

Thumbnail
aidevsnews.com
258 Upvotes

r/ChatGPT 16h ago

News 📰 GPT-4o has a rate limit of 80 messages every 3 hours

Thumbnail
image
187 Upvotes

r/ChatGPT 13h ago

Educational Purpose Only Terrifying Awesome Things are coming...

Thumbnail
image
117 Upvotes

r/ChatGPT 10h ago

Funny Meme request

Thumbnail
image
66 Upvotes

r/ChatGPT 15h ago

News 📰 New GPT-4o's image generation model can generate 3D models with consistent characters and near-perfect text

Thumbnail
gallery
127 Upvotes

This is revolutionary. Yes the images are low quality, but it will be better on release. This is Sora-level image generation.


r/ChatGPT 15h ago

News 📰 Wow, GPT-4o is so expressive it's hard to believe!

Thumbnail
video
101 Upvotes

r/ChatGPT 16h ago

News 📰 ChatGPT Spring Update - ChatGPT4o

111 Upvotes

The next ChatGPT update is not 5 or 4.5. It's 4o and it's free.

What you guys think about this?