
OpenAI unveils GPT-4 in live demo

Catt0s

Summary

Today, OpenAI plans to unveil GPT-4 in a live demo at 1 PM PT (8 PM UTC). It will support not only text but also images. OpenAI will pick suggestions from the stream chat to demo. ChatGPT Plus subscribers will be able to preview it on ChatGPT's website. It has knowledge up to September 2021.

 

Quotes

Quote

Despite its capabilities, GPT-4 has similar limitations as earlier GPT models ... it still is not fully reliable (it “hallucinates” facts and makes reasoning errors). 

Quote

- gpt-4 with an 8K context window (about 13 pages of text) will cost $0.03 per 1K prompt tokens, and $0.06 per 1K completion tokens.

- gpt-4-32k with a 32K context window (about 52 pages of text) will cost $0.06 per 1K prompt tokens, and $0.12 per 1K completion tokens.
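A quick sketch of what those quoted rates work out to per request. This is just back-of-the-envelope arithmetic from the numbers above; the `estimate_cost` helper and the example token counts are made up for illustration, not part of any official API.

```python
# Per-1K-token rates quoted above: (prompt, completion) in USD.
RATES = {
    "gpt-4": (0.03, 0.06),
    "gpt-4-32k": (0.06, 0.12),
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimate the USD cost of one request at the quoted rates."""
    prompt_rate, completion_rate = RATES[model]
    return (prompt_tokens / 1000) * prompt_rate \
         + (completion_tokens / 1000) * completion_rate

# Filling the full 8K window with a prompt and getting a 1K-token reply:
print(estimate_cost("gpt-4", 8000, 1000))      # ~$0.30
# Same exercise against the full 32K window:
print(estimate_cost("gpt-4-32k", 32000, 1000)) # ~$2.04
```

So maxing out the 32K context on every call adds up quickly compared to the base model.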

Quote

GPT-4 access will gradually rollout to all ChatGPT Plus subscribers within the next few hours.

My thoughts

This is going to be incredible. With visual inputs, the capabilities will be immense. 

 

Sources

https://openai.com/product/gpt-4

https://openai.com/research/gpt-4

OpenAI developer updates

https://cdn.openai.com/papers/gpt-4.pdf

 


I hope it's better with math. The current model, GPT-3, is bad with complex maths related to electricity. But at least it can admit when it's wrong.

While Bing chat is just a cocky piece of trash that refuses to admit it's wrong. Then when you call it out on it, it just shuts down the conversation entirely instead of trying to learn.

Spoiler

Cos(64) is 0.438, yet that dumb as a rock AI kept saying it's wrong and it was -0.933.
Started by asking it to find the current, reactance and resistance of a circuit, then for the real, apparent and reactive power.
The 3 phase real power formula is P = √3 * VL * IL * Cos(fi), fi in this case was 64˚ (complex angle of the current).
I wasn't getting the same result as it when using the same numbers it gave as part of the formula. Asked it what was up since I was getting a different answer: "I'm correct, check your math or use a different calculator". Bruh...

Then I figured it was because it kept using -0.933 for cos(64) because it interpreted the degree symbol ˚ as pi.
As such, it was doing cos(64*pi).
Anyone who has used complex polar equations knows that pi has no place in there; it's just the number of the angle, straight up. That explained why I wasn't getting the same result as it.
Then it had the audacity to essentially say something like "I don't feel comfortable continuing this discussion" before shutting down, when I just told it "you used -0.933 for cos(64), you should've used 0.438". Tried sending feedback, but the page just sent me back to regular search and erased everything.
I did manage to make it admit that cos(64) is indeed 0.438 before it shut down, but it just kept refusing to accept it was wrong for using that as part of the equation and saying it was right for using -0.933.

As much as I was hyped for Bing chat when Linus showed it, my experience with it for my needs has been mildly infuriating, to say the least.
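For anyone who wants to sanity-check the trig dispute above, the degrees-vs-radians difference is easy to reproduce in Python. The line voltage and current below (VL = 400 V, IL = 10 A) are made-up illustrative values, not numbers from the thread; only the 64° angle comes from the post.

```python
import math

# cos of 64 *degrees* -- the value the poster expected:
print(round(math.cos(math.radians(64)), 3))  # 0.438

# cos of 64 *radians* -- what you get if the degree sign is ignored:
print(round(math.cos(64), 3))                # 0.392, a different wrong value

# Worked three-phase real-power example, P = sqrt(3) * VL * IL * cos(phi),
# with illustrative VL = 400 V, IL = 10 A, phi = 64 degrees:
VL, IL, phi_deg = 400.0, 10.0, 64.0
P = math.sqrt(3) * VL * IL * math.cos(math.radians(phi_deg))
print(round(P))  # ~3037 W
```

The lesson either way: always convert degrees with `math.radians()` before feeding an angle to `math.cos()`, since Python's trig functions take radians.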

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


28 minutes ago, TetraSky said:

I hope it's better with math. The current model, GPT-3, is bad with complex maths related to electricity. But at least it can admit when it's wrong.

While Bing chat is just a cocky piece of trash that refuses to admit it's wrong. Then when you call it out on it, it just shuts down the conversation entirely instead of trying to learn.


I hope so too, although I think they are mainly focused on improving safety, the visual inputs, and longer context. 


updates from the live stream:

- visual input will not be available at release

- long messages will be even more limited at release

- demoed creating a Discord bot from scratch

- demoed sending an image of a hand-drawn sketch of a website idea and turning it into a working website

- demoed sending it tax code and having it do taxes


34 minutes ago, Catt0s said:

- demoed sending it tax code and having it do taxes

That's going to be fun. I now REALLY hope they fixed the maths issues.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


just go to the more complex math models. jk. so will there be a gta... gpt 5 and no 6th?


Yeah let Twitch chat control a new AI demo. What could possibly go wrong?

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


Already made up clips n shit.. this will be fun hah.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


19 hours ago, TetraSky said:

I hope it's better with math. The current model, GPT-3, is bad with complex maths related to electricity. But at least it can admit when it's wrong.

Agreed.  I found it to be strangely good with algebra or calculus but unable to deal with numbers.   

 

What does this possibly do to Bing, though? The thing about Bing is that the price is right. On the other hand, if I can put 52 pages of plain text into a GPT-4 prompt on ChatGPT Plus... I could, say, feed it all of the LaTeX for a paper I am writing and see what it gives me back.


I bought ChatGPT Plus for a month just to try GPT-4 out.

 

I haven't used it much yet, but one of the tests I gave it was writing a D&D campaign that can be finished in 4 hours, followed by some follow-up questions to flesh the campaign out. So far I am really impressed. I was impressed with ChatGPT 3.5 as well, but this got into more detail and came up with, in my opinion, a much more interesting story. It also broke down each segment of the campaign into roughly how long it thought each part would take, which 3.5 did not.

 

I think using images as input will be a game changer though. It wasn't that long ago that they opened up the APIs and some apps like Duolingo are already using it to power really cool features. The image aspect will open up even more cool stuff.


On 3/14/2023 at 3:05 PM, TetraSky said:

I hope it's better with math. The current model, GPT-3, is bad with complex maths related to electricity. But at least it can admit when it's wrong.

While Bing chat is just a cocky piece of trash that refuses to admit it's wrong. Then when you call it out on it, it just shuts down the conversation entirely instead of trying to learn.


Bing chat's best use seems to be comparing and contrasting different things. I found it works well if you ask it how a piece of media relates to a social event, for example.

 

22 hours ago, TetraSky said:

That's going to be fun. I now REALLY hope they fixed the maths issues.

"Why yes, the AI said I only owe 100 bucks in taxes this year officer"

 

