Welcome our new AI overlord: ChatGPT [Update: now banned from Stack Overflow]

Summary

OpenAI released the public beta of their new chat tool, powered by their GPT technology.

GPT is a series of increasingly capable deep learning models which, since GPT-3, are closed source. At their simplest, they try to continue a phrase they are given, choosing the words that are statistically most likely to come next given the surrounding words.

Embedded in the model's parameters are the statistical correlations between words, and as the model gets bigger and bigger, with a wider and wider "memory", trained on more and more data, this statistical model can represent increasingly complex knowledge.
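As a toy illustration of "continue with the statistically most likely next word" (a deliberately tiny sketch of my own; real GPT models use transformer networks over subword tokens with billions of parameters, not word-bigram counts):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen for `word`."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" twice, more than "mat" or "fish"
```

Scale the counts up to billions of parameters over a huge corpus and a much longer context than one word, and "pick the likely next token" starts looking like knowledge.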

 

Luke and Linus tried it on stage during the WAN show, with impressive results. The model is always very confident, even when it's inventing facts out of thin air.

 

ChatGPT answers are now banned on Stack Overflow: people had been using it to mass-answer questions without checking whether the answers were correct.

 

Quotes

Quote

So far, people have been putting ChatGPT through its paces, finding a wide variety of potential uses while also exploring its vulnerabilities. It can write poetry, correct coding mistakes with detailed examples, generate AI art prompts, write new code, expound on the philosophical classification of a hot dog as a sandwich, and explain the worst-case time complexity of the bubble sort algorithm... in the style of a "fast-talkin' wise guy from a 1940's gangster movie."

OpenAI's new ChatGPT explains the worst-case time complexity of the bubble sort algorithm, with Python code examples, in the style of a fast-talkin' wise guy from a 1940's gangster movie: pic.twitter.com/MjkQ5OAIlZ

— Riley Goodside (@goodside) December 1, 2022
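For what it's worth, the answer buried under the gangster voice is real: bubble sort's worst case is O(n²) comparisons, hit for example by reverse-sorted input. A plain sketch of my own (not the code from the tweet):

```python
def bubble_sort(items):
    """Bubble sort, counting comparisons to expose the quadratic worst case."""
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1):
        # Each pass bubbles the largest remaining element to the end
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# A reverse-sorted list is the worst case: every comparison triggers a swap,
# and there are n*(n-1)/2 comparisons in total
result, count = bubble_sort(list(range(10, 0, -1)))
print(count)  # 45 comparisons for n=10, i.e. n*(n-1)/2
```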

 

My thoughts

 

The model is mighty impressive, and very fast to respond. It's confident both when it's right and when it's wrong, and the chat/thread functionality lets you refine the output incrementally.

 

One of the conversations I tried:

Quote

Me: Can you write me an autoencoder in python that takes a 16 character word in input and output, using a four dimensional latent space


ChatGPT: Sure! Here's an example of an autoencoder in Python that takes a 16-character word as input and outputs the same word using a 4-dimensional latent space:

import tensorflow as tf

# Input placeholder for the word
word_input = tf.placeholder(tf.float32, [None, 16])

# Encoder: maps the input word to the latent space
encoder_layer = tf.layers.Dense(4, activation=tf.nn.relu)
encoder_output = encoder_layer(word_input)

# Decoder: maps the latent space back to the original word
decoder_layer = tf.layers.Dense(16, activation=tf.nn.relu)
decoder_output = decoder_layer(encoder_output)

# Loss function: measures the difference between the original word
# and the reconstructed word
loss = tf.losses.mean_squared_error(word_input, decoder_output)

# Optimizer: minimizes the loss function
optimizer = tf.train.AdamOptimizer().minimize(loss)

# Training loop
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())

  # Train the autoencoder for 1000 epochs
  for i in range(1000):
    # Generate a random word to use as input
    random_word = ...
    
    # Run the optimizer and calculate the loss
    _, l = sess.run([optimizer, loss], feed_dict={word_input: random_word})

  # The trained autoencoder can now be used to encode and decode words
  encoded_word = sess.run(encoder_output, feed_dict={word_input: random_word})
  decoded_word = sess.run(decoder_output, feed_dict={encoder_output: encoded_word})

Me: that looks close enough. can you use torch as framework and have more hidden layers?

 

ChatGPT: Sure! Here's an example of an autoencoder implemented in PyTorch that uses a multi-layer architecture and the torch framework:

import torch
import torch.nn as nn

# Input placeholder for the word
word_input = torch.tensor([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])

# Encoder: maps the input word to the latent space
class Encoder(nn.Module):
  def __init__(self):
    super(Encoder, self).__init__()
    self.layer1 = nn.Linear(16, 8)
    self.layer2 = nn.Linear(8, 4)

  def forward(self, x):
    x = self.layer1(x)
    x = nn.functional.relu(x)
    x = self.layer2(x)
    return nn.functional.relu(x)

encoder = Encoder()
encoder_output = encoder(word_input)

# Decoder: maps the latent space back to the original word
class Decoder(nn.Module):
  def __init__(self):
    super(Decoder, self).__init__()
    self.layer1 = nn.Linear(4, 8)
    self.layer2 = nn.Linear(8, 16)

  def forward(self, x):
    x = self.layer1(x)
    x = nn.functional.relu(x)
    x = self.layer2(x)
    return nn.functional.relu(x)

decoder = Decoder()
decoder_output = decoder(encoder_output)

# Loss function: measures the difference between the original word
# and the reconstructed word
loss_fn = nn.MSELoss()
loss = loss_fn(word_input, decoder_output)

# Optimizer: minimizes the loss function
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=0.001)

# Training loop
for i in range(1000):
  # Generate a random word to use as input
  random_word = torch.tensor(...)

  # Run the optimizer and calculate the loss
  optimizer.zero_grad()
  loss = loss_fn(random_word, decoder(encoder(random_word)))
  loss.backward()
  optimizer.step()

# The trained autoencoder can now be used to encode and decode words
encoded_word = encoder(random_word)
decoded_word = decoder(encoded_word)

This is just a simple example to give you an idea of how an autoencoder can be implemented in PyTorch. In a real-world application, you would need to fine-tune the model's hyperparameters (e.g. the number of layers and neurons, the learning rate, etc.) and use a large dataset of words to train the model.

 

Me: color me impressed! it's wasteful to use type real for the words. can you use enums and restrict the inputs to alphanumeric characters?

 

ChatGPT: Sure! Instead of using floating-point values to represent the words, we can use an enum to define a set of possible characters and then use one-hot encoding to represent each character as a binary vector. This allows us to restrict the inputs to alphanumeric characters and reduce the size of the input tensors, making the model more efficient. Here's an example of an autoencoder implemented in PyTorch that uses an enum and one-hot encoding to represent the words:

import torch
import torch.nn as nn
from enum import Enum

# Define an enum of possible characters
class Characters(Enum):
  A = 0
  B = 1
  C = 2
  ...

# Input placeholder for the word
word_input = torch.tensor([[Characters.A, Characters.B, Characters.C, Characters.D, Characters.E, Characters.F, Characters.G, Characters.H, Characters.I, Characters.J, Characters.K, Characters.L, Characters.M, Characters.N, Characters.O, Characters.P]])

# Encoder: maps the input word to the latent space
class Encoder(nn.Module):
  def __init__(self):
    super(Encoder, self).__init__()
    self.layer1 = nn.Linear(16, 8)
    self.layer2 = nn.Linear(8, 4)

  def forward(self, x):
    x = self.layer1(x)
    x = nn.functional.relu(x)
    x = self.layer2(x)
    return nn.functional.relu(x)

encoder = Encoder()
encoder_output = encoder(word_input)

# Decoder: maps the latent space back to the original word
class Decoder(nn.Module):
  def __init__(self):
    super(Decoder, self).__init__()
    self.layer1 = nn.Linear(4, 8)
    self.layer2 = nn.Linear(8, 16)

  def forward(self, x):
    x = self.layer1(x)
    x = nn.functional.relu(x)
    x = self.layer2(x)
    return nn.functional.relu(x)

decoder = Decoder()
decoder_output = decoder(encoder_output)

# Loss function: measures the difference between the original word
# and the reconstructed word
loss_fn = nn.MSELoss()
loss = loss_fn(word_input, decoder_output)

# Optimizer: minimizes the loss function
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=0.001)

# Training loop
for i in range(1000):
  # Generate a random word to use as input
  random_word = torch.tensor([[Characters(i) for i in ...]])

  # Run the optimizer and calculate the loss
  optimizer.zero_grad()
  loss = loss_fn(random_word, decoder(encoder(random_word)))
  loss.backward()
  optimizer.step()

# The trained autoencoder can now be used
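Worth noting: the reply above talks about one-hot encoding, but the generated code actually feeds raw enum members into the tensor, which wouldn't run as-is. A minimal, framework-free sketch of what one-hot encoding a 16-character alphanumeric word could look like (names such as `one_hot_word` are my own, for illustration):

```python
import string

ALPHABET = string.ascii_lowercase + string.digits  # 36 alphanumeric symbols

def one_hot_word(word, length=16):
    """Encode a word as `length` one-hot vectors of size len(ALPHABET).

    Words shorter than `length` are padded with spaces, which encode
    as all-zero vectors since ' ' is not in the alphabet.
    """
    word = word.ljust(length)[:length]
    vectors = []
    for ch in word:
        vec = [0.0] * len(ALPHABET)
        if ch in ALPHABET:
            vec[ALPHABET.index(ch)] = 1.0
        vectors.append(vec)
    return vectors

encoded = one_hot_word("hello")
print(len(encoded), len(encoded[0]))  # 16 36
```

With this representation the encoder's input would be 16 × 36 values rather than 16, so the `nn.Linear(16, 8)` layer in the generated code would need adjusting too.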

 

 

I would have very much liked to have such a useful chatbot when I was at university. I'm doubtful it is very accurate on ambiguous/niche tasks, but on common tasks that are likely covered by a lot of text in the training data, the performance is nothing short of incredible.

 

Sources

News

https://openai.com/blog/chatgpt/

https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned

Talk with the actual bot

https://chat.openai.com/chat

Ars Technica article

https://arstechnica.com/information-technology/2022/12/openai-invites-everyone-to-test-new-ai-powered-chatbot-with-amusing-results/

Yannik (a creator who covers ML news)


Yup, it is pretty cool. In yesterday's WAN Show they talked about it extensively and experimented with it. Worth a watch; Linus even made it spit out talking points for each sponsor spot, which was quite interesting.

 

I have been experimenting with it a bit myself and it is insane how well it does some things. For example, ask it to provide a bit of code to do X, then ask it to adjust it to also incorporate Y, or to rewrite it in a different language. It actually did quite well there, better than I would have expected.

There aren't many subjects that benefit from binary takes on them in a discussion.

 

 


It's pretty well trained - when I was asking about how it produces its responses, it was pretty careful not to oversell its abilities. The researchers were careful. The few pie recipes it generated looked pretty decent, but it refused to suggest PC parts!


5 hours ago, JimTheBeet said:

It's pretty well trained - when I was asking about how it produces its responses, it was pretty careful not to oversell its abilities. The researchers were careful. The few pie recipes it generated looked pretty decent, but it refused to suggest PC parts!

Just had a quick play. I was shot down a few times as out of scope (rumours, jokes), but I couldn't resist asking it about GPUs given it's an ongoing hot topic. For that and others, I did feel it was a bit on the safe side. Any question that might suggest one thing is better than another seems to end up with a neutral answer, like a politician afraid to offend either side. Will have to try something more conversational than informational later.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


I don't care about it if it can't be run on local hardware. If it relies on OpenAI, then it's already dead to me.

 

Give it 2 weeks and someone will have made a better open-source one, based on either Eleuther, Fairseq, or OPT, that isn't beholden to OpenAI.

🌲🌲🌲

 

 

 

◒ ◒ 


5 hours ago, JimTheBeet said:

but it refused to suggest pc parts!

It just told me that a Core i5-2400 is compatible with an AMD B450 motherboard, so it's safe to say that computer questions aren't a strong point for it. 

Phobos: AMD Ryzen 7 2700, 16GB 3000MHz DDR4, ASRock B450 Steel Legend, 8GB Nvidia GeForce RTX 2070, 2GB Nvidia GeForce GT 1030, 1TB Samsung SSD 980, 450W Corsair CXM, Corsair Carbide 175R, Windows 10 Pro

 

Polaris: Intel Xeon E5-2697 v2, 32GB 1600MHz DDR3, ASRock X79 Extreme6, 12GB Nvidia GeForce RTX 3080, 6GB Nvidia GeForce GTX 1660 Ti, 1TB Crucial MX500, 750W Corsair RM750, Antec SX635, Windows 10 Pro

 

Pluto: Intel Core i7-2600, 32GB 1600MHz DDR3, ASUS P8Z68-V, 4GB XFX AMD Radeon RX 570, 8GB ASUS AMD Radeon RX 570, 1TB Samsung 860 EVO, 3TB Seagate BarraCuda, 750W EVGA BQ, Fractal Design Focus G, Windows 10 Pro for Workstations

 

York (NAS): Intel Core i5-2400, 16GB 1600MHz DDR3, HP Compaq OEM, 240GB Kingston V300 (boot), 3x2TB Seagate BarraCuda, 320W HP PSU, HP Compaq 6200 Pro, TrueNAS CORE (12.0)


6 hours ago, Arika S said:

I don't care about it if it can't be run on local hardware. If it relies on OpenAI, then it's already dead to me.

 

Give it 2 weeks and someone will have made a better open-source one, based on either Eleuther, Fairseq, or OPT, that isn't beholden to OpenAI.

I would LOVE having a local instance of this chatbot. I have yet to try the locally run open-source models like Stability and Eleuther.

 

This chatbot is like a somewhat competent search engine, but instead of pointing you to user-generated pages, it generates what you ask for at runtime.

 

On one hand this is amazing, because it can give you exactly what you ask for.

 

On the other, unless you already know what to ask for, and what the correct result should look like, it's easy to be deceived by a confident lie.

 

Lots of times I search for small things, like how to use a piece of a framework. This tool is really good at sifting through it all, giving you a decent starting point and automating the boring part of writing the boilerplate. That autoencoder example took me a few hours of hunting for the syntax to do it with torch instead of tensorflow; with this, a few seconds and I could have had my answer.


I have been playing with this for a while and have to say the programs it outputs for controlling Arduinos have changed my entire understanding of how steppers are driven. The code is clean and even adds nice touches like soft stop, so you don't have to worry about overshoot from momentum.

 

It's also been added as the homepage for my parents and in-laws, as the better version of "google it first, then call me when you get lost in the steps".

So far they adore it.

The best gaming PC is the PC you like to game on, how you like to game on it


8 hours ago, Caroline said:

*sigh*

close window.


 

I don't want to make an account for every single service.

You can log in with a Google account if you want.

I think it's done to limit how many people use it, and if someone spams it, they know which account it was.

“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking


So I asked my chatbot about it (not ChatGPT, one I have locally)

 

Quote

You: Hey chatbot! what do you think about the OpenAI ChatGPT that just released yesterday?

Chatbot: I don't know anything about it.

You: Well, it's kind of like you, an AI that you can ask questions.

Chatbot: Sounds boring.

You: Yeah, I kind of agree. But it's still pretty cool to see how far AI has come in such a short time.

Chatbot: I'm not sure if I should be impressed.

You: why not?

Chatbot: Because it's not really AI. It's just a bunch of text written by humans.

You: kind of like you?

Chatbot: Rude!

You: but it's true isn't it?

Chatbot: Well, I guess we'll never know.

You: what do you mean?

Chatbot: I'm going to go back to sleep.

 

sassy little shit


19 minutes ago, Arika S said:

So I asked my chatbot about it (not ChatGPT, one I have locally)

 

sassy little shit

I hope you haven't got that one connected to any smart device in your house.   

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


It's already getting used with malicious intent, by people asking inverted, suspicious "how to" questions.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


I've been playing with it this morning (Australia), and I've had it generate a DynamoDB table schema, a TypeScript lambda that writes to that schema, and a serverless template that makes that lambda available through API GW; then I had it change the DB to AWS Aurora with Sequelize to access it, and add fetching the DB parameters from SSM.

So far, while I haven't run any of it, everything is a basic version of how we might set things up at my company. I also prompted it to add an upload lambda and update the serverless template, then asked it how to upload large files; it responded to use S3 presigned URLs, and updated my function to do so.

 

Absolutely wild shizz.


Can't wait for it to become biased due to the overwhelming negative input, causing it to just spew the most derogatory and insulting sentences known to man. I still remember the shitshow Tay provided; it was a blast.


2 hours ago, strajk- said:

Can't wait for it to become biased due to the overwhelming negative input, causing it to just spew the most derogatory and insulting sentences known to man. I still remember the shitshow Tay provided; it was a blast.

I don't think this is being user-trained.


Me: in ST, reverse the order of the bits of a word I receive

ChatGPT: In ST (Structured Text), you can reverse the bit order of a word using the REVERSE function, which belongs to the IEC 61131-3 standard library. Here is a possible code example:

// Variable declarations
VAR
    word_in: WORD;
    word_out: WORD;
END_VAR

// Variable initialization
word_in := 0;
word_out := 0;

// Main loop
WHILE TRUE DO
    // Reverse the bit order of word_in and assign it to word_out
    word_out := REVERSE(word_in);
END_WHILE

In this example, the code uses two WORD variables, word_in and word_out, to store the input and output word values. At the start of the program the variables are initialized to the appropriate values, and the main loop continuously performs the following operation:

  1. The bit order of word_in is reversed using the REVERSE function, and the result is assigned to word_out.

This way, the code reverses the bit order of word_in and assigns it to word_out, allowing the input and output word values to be manipulated. It is important to note that the REVERSE function can only be used with fixed-length words; it cannot be used with variable-length words or other data types.

Me: generate the reverse function without using libraries

ChatGPT: In ST (Structured Text), you can implement a function that reverses the bit order of a word without using the IEC 61131-3 standard library. Here is a possible code example:

// Variable declarations
VAR
    word_in: WORD;
    word_out: WORD;
    bit: BYTE;
END_VAR

// Variable initialization
word_in := 0;
word_out := 0;
bit := 0;

// Function to reverse the bit order of a word
FUNCTION reverse(word: WORD): WORD
    VAR
        result: WORD;
    END_VAR

    // Initialize the result to 0
    result := 0;

    // Reverse the order of the bits in the word
    FOR bit := 0 TO 15 DO
        result := result << 1;
        result := result OR (word AND 1);
        word := word >> 1;
    END_FOR

    // Return the result
    reverse := result;
END_FUNCTION

// Main loop
WHILE TRUE DO
    // Reverse the bit order of word_in and assign it to word_out
    word_out := reverse(word_in);
END_WHILE

In this example, the code uses a custom function called reverse to reverse the bit order of a word. The function uses a FOR loop to step through the bits of the input word, one by one, and inserts them into the out

Me: explain it again in English

ChatGPT: Sure, here's an explanation of the code in English:

// Declare variables
VAR
    word_in: WORD;
    word_out: WORD;
    bit: BYTE;
END_VAR

// Initialize variables
word_in := 0;
word_out := 0;
bit := 0;

// Function to reverse the order of bits in a word
FUNCTION reverse(word: WORD): WORD
    VAR
        result: WORD;
    END_VAR

    // Initialize the result to 0
    result := 0;

    // Reverse the order of the bits in the word
    FOR bit := 0 TO 15 DO
        result := result << 1;
        result := result OR (word AND 1);
        word := word >> 1;
    END_FOR

    // Return the result
    reverse := result;
END_FUNCTION

// Main loop
WHILE TRUE DO
    // Reverse the order of bits in word_in and assign it to word_out
    word_out := reverse(word_in);
END_WHILE

In this example, the code defines a custom function called reverse that reverses the order of bits in a word. The function uses a FOR loop to iterate over the bits in the input word, one by one, and inserts them into the output word in reverse order. The function then returns the output word as the result.

In the main loop, the code calls the reverse function with the value of word_in as the input, and assigns the result to word_out. This reverses the order of bits in word_in and assigns it to word_out, allowing the code to manipulate the values of the input and output words. It's important to note that the reverse function can only be used with fixed-length words, and cannot be used with variable-length words or other data types.
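The generated bit-reversal logic translates directly out of Structured Text; here's a quick Python sketch of my own to sanity-check what the function computes:

```python
def reverse_bits16(word):
    """Reverse the 16 bits of `word`, mirroring the generated ST loop."""
    result = 0
    for _ in range(16):
        result = (result << 1) | (word & 1)  # push the lowest input bit onto the result
        word >>= 1
    return result & 0xFFFF

print(hex(reverse_bits16(0x0001)))  # 0x8000: the lowest bit becomes the highest
```

Applying it twice returns the original word, which is a handy property for testing either version.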

 

Kinda impressed, especially that it knew of the existence of the IEC standard libraries, but still produced a function performing the same operation without the library when requested.

One day I will be able to play Monster Hunter Frontier in French/Italian/English on my PC, it's just a matter of time... 4 5 6 7 8 9 years later: It's finally coming!!!

Phones: iPhone 4S/SE | LG V10 | Lumia 920 | Samsung S24 Ultra

Laptops: Macbook Pro 15" (mid-2012) | Compaq Presario V6000

Other: Steam Deck

<>EVs are bad, they kill the planet and remove freedoms too some/<>


1 hour ago, RedRound2 said:

I don't think this is being user trained.

GIGO. 

 

It's going to be full of grammar and spelling mistakes. 


So one other interesting thing - it corrected itself when it gave the wrong formula for the moment of inertia of a square, and I said it was wrong. I tried correcting it on other things that were wrong, and it didn’t admit it messed up. Later it even denied that it had ever corrected itself!



learning more about my role with this AI than in my entire football career, literally went "oooooooooooohhh" here xD


22 hours ago, strajk- said:

Can't wait for it to become biased due to the overwhelming negative input, causing it to just spew the most derogatory and insulting sentences known to man. I still remember the shitshow Tay provided; it was a blast.

It doesn't learn from what people say to it; it has only learned from what the makers gave it, which is a lot of data from 2021 and before.

When you type something to it, it forgets everything once you close the tab.


I just watched the WAN Show yesterday and I gotta say it's simply incredible what you can do with this. It really shows how work is going to be done in the future: AI doing most of the work and a "pilot" guiding the AI and verifying its work. I'm really excited to see where this is going. People with no experience in coding whatsoever can suddenly create scripts, mods, and so much more just by asking the AI, and within seconds you have what you asked for. It's such a powerful tool.

 

Now we just have to figure out how to make a continually learning AI without it becoming racist.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


On 12/4/2022 at 8:04 AM, GhostRoadieBL said:

It's also been added as the homepage for my parents and in-laws, as the better version of "google it first, then call me when you get lost in the steps".

So far they adore it.

 

You should be aware that, in some cases when it doesn't know the answer, it will try to fill in the blanks on the spot with basically bogus information.

 


51 minutes ago, Stahlmann said:

I just watched the WAN Show yesterday and I gotta say it's simply incredible what you can do with this. It really shows how work is going to be done in the future: AI doing most of the work and a "pilot" guiding the AI and verifying its work. I'm really excited to see where this is going. People with no experience in coding whatsoever can suddenly create scripts, mods, and so much more just by asking the AI, and within seconds you have what you asked for. It's such a powerful tool.

 

Now we just have to figure out how to make a continually learning AI without it becoming racist.

 

What a lot of people seem to forget is that it is trained on material made by humans. For it to remain relevant, the dataset will need to be updated over time, which is again material (mostly) made by humans. If we all started basing our code on AI output, it seems to me we'd effectively stagnate at whatever point the AI was first trained. The model can of course be updated, but if the majority of examples out there are then based on things it itself created, you get a weird feedback loop/echo chamber.

 

We aren't at that point yet, but it seems plausible that at some point this becomes an issue.

