NVIDIA CEO Jen-Hsun Huang: Don't learn computer science. The future is human language (AI code generation)

I don't think he is wrong. Look how far it has come in the last 5 years alone, then follow the trajectory.

 

Also, people can't have it both ways: you can't argue it will take too many jobs and at the same time argue it will never be good enough to replace humans at the one thing it will likely be best at (language interpretation).

 

This won't be the first time people have to eat their hats.  

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


I hope not; I am going into programming right now.

“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking


4 hours ago, mr moose said:

I don't think he is wrong. Look how far it has come in the last 5 years alone, then follow the trajectory.

Look at the difference between now and just a year ago. It's amazing how fast these things evolve. Do you still remember the "Will Smith eating pasta" video from about a year ago?

 

I was an avid Google user before, and Bing, with its Copilot feature, has completely replaced "googling" for me. It can search through dozens of pages and filter out the most relevant information in an instant; doing the same research myself would probably take hours. It's not perfect, of course, and there are errors from time to time, but generally Copilot has got it right far more often than it has got it wrong. The horror threads about LLMs hallucinating often come after quite a lot of prompts, long conversations, or jumping to different topics mid-conversation. If you just ask a normal question and then 2 or 3 follow-up questions (and create a new thread whenever you want to switch to another topic), there are almost no problems.

 

Personally, I'm all for the advances in AI, ML or whatever umbrella term you want to use. It makes the work of developers both more accessible and more efficient. And of course, increased efficiency means that some jobs will be cut. That's how it works. Where was the outcry when assembly lines were taken over by robot arms? I guess it's just that the average assembly line worker isn't a Twitter user, so that's the main reason why no one cared about their lost jobs.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


On 2/27/2024 at 9:17 AM, Sauron said:

Car plants still employ human workers.

But not nearly as many. I visited a Mercedes factory a few years ago and most of the processes were completely automated. And Mercedes is still one of the more "individual" car manufacturers. Other brands that offer less customisation of their cars are probably automating even more of the production line.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


5 hours ago, RejZoR said:

Some are forgetting that making programs isn't like "make me a video of a young woman walking down a Tokyo street", which everyone was so amazed by from OpenAI's Sora. And even that video could be anything, not exactly what you intended. With apps, you need EXACTLY what you intended, and you can't just have AI make it based on your description.

Wasn't there a guy who made a game without writing a single line of code, just by using ChatGPT? Yes, he still had an understanding of the process and experience as a game developer himself, but it's a proof of concept of what's to come.

 

 

And again, that was a year ago using the older version of ChatGPT without internet access. LLMs have come a long way since then. So yes, it's not the same as spitting out an image or video, but it's already proven to work as a 'co-pilot' for a developer, taking most of the "busy work" off their shoulders. Is it really that hard to imagine AI creating the whole software after that point? Some human QC might be needed, but that's also the case with human developers.

 

Diablo 4 is memed about at this point on how slow their developers work.

 

Players: We want more stash tabs.

Developers: We will need months, maybe years of development time.

ChatGPT: Sure, here is the code.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


44 minutes ago, Stahlmann said:

But not nearly as many. I visited a Mercedes factory a few years ago and most of the processes were completely automated. And Mercedes is still one of the more "individual" car manufacturers. Other brands that offer less customisation of their cars are probably automating even more of the production line.

It's actually a lot more now, they just aren't manually mounting the cars anymore. They operate the machines that do. Sure, we no longer have the Modern Times style "sit here tightening (deez) nuts for 12 hours a day" type of job anymore, but that's a far cry from saying that humans are no longer part of the manufacturing process just because robots can do those mindless repetitive tasks for us. As I argued before, computer scientists and engineers aren't just code drones.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


45 minutes ago, Sauron said:

It's actually a lot more now, they just aren't manually mounting the cars anymore. They operate the machines that do. Sure, we no longer have the Modern Times style "sit here tightening (deez) nuts for 12 hours a day" type of job anymore, but that's a far cry from saying that humans are no longer part of the manufacturing process just because robots can do those mindless repetitive tasks for us. As I argued before, computer scientists and engineers aren't just code drones.

There are still a lot of things robots can't do because the tasks are just too complex. I've seen plenty of car makers' super factories where most of the production line is done entirely by robots. They are often far superior in consistency, don't experience the fatigue of doing a repetitive task for 8+ hours, and with paint jobs, for example, don't require someone who's been doing it for 20+ years and is really good at it. Yet at those same factories, they had to use a human to spray paint a certain part by hand because the robots just couldn't do it. Other examples are assembly steps at really tight, weird angles where robot hands just can't match human fingers and hands. Manufacturers often design the entire line to avoid those tight spots and steps, but it's not always possible. Eventually, though, even those jobs will be replaced for sure.

The main question is where this is all leading. We're going to replace all of the "low level" jobs with robots and AI. Either the entire world goes to shit or everyone has to be rich, and we all know that won't happen. Many say "but the industrial revolution". Yeah, what about it? Machines and computers up to this point didn't replace the human workforce to the extreme extent this is going to, and it didn't happen in the span of just one year like it is happening right now. It's cool, but also scary at the same time, and I frankly don't know where things will go. But seeing how corporate greed has shaped things for the last 30 years, it's not looking good...


On 2/26/2024 at 10:44 AM, PDifolco said:

And then who will code the AI algorithms?

Another AI 😂?

Precisely my thought. I think knowing how to manage AI will become a necessary part of the IT pie which is partially why I'm experimenting with it. Chat with RTX is a wonderful reason to have a decent Nvidia card atm for anyone who works IT, but I haven't explored too deeply into it.

 

I have plans to test different datasets at work in the next couple of weeks, see how useful these free AI models are.

Ryzen 7950x3D Direct Die NH-D15

RTX 4090 @133%/+230/+500

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


19 hours ago, Stahlmann said:

Look at the difference between now and just a year ago. It's amazing how fast these things evolve. Do you still remember the "Will Smith eating pasta" video from about a year ago?

 

I was an avid Google user before, and Bing, with its Copilot feature, has completely replaced "googling" for me. It can search through dozens of pages and filter out the most relevant information in an instant; doing the same research myself would probably take hours. It's not perfect, of course, and there are errors from time to time, but generally Copilot has got it right far more often than it has got it wrong. The horror threads about LLMs hallucinating often come after quite a lot of prompts, long conversations, or jumping to different topics mid-conversation. If you just ask a normal question and then 2 or 3 follow-up questions (and create a new thread whenever you want to switch to another topic), there are almost no problems.

 

Personally, I'm all for the advances in AI, ML or whatever umbrella term you want to use. It makes the work of developers both more accessible and more efficient. And of course, increased efficiency means that some jobs will be cut. That's how it works. Where was the outcry when assembly lines were taken over by robot arms? I guess it's just that the average assembly line worker isn't a Twitter user, so that's the main reason why no one cared about their lost jobs.

History tells us quite emphatically that when humans do the shit jobs for shit pay, we suffer far more than when we develop machines to do them for us. E.g. Australia has lost a significant amount of automotive manufacturing (from 50+ companies down to 18, and of those 18, half are EV companies that started business in the last 5 years), yet our unemployment and average wages have not dropped, and our net worth has only gone up. The average unemployed dropout in Australia has a flat screen TV, a smartphone and 3 meals a day. A properly managed economy with good social welfare and medical services will prevent nearly all the negative effects of job redundancy.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


20 hours ago, RejZoR said:

There are still a lot of things robots can't do because the tasks are just too complex. I've seen plenty of car makers' super factories where most of the production line is done entirely by robots. They are often far superior in consistency, don't experience the fatigue of doing a repetitive task for 8+ hours, and with paint jobs, for example, don't require someone who's been doing it for 20+ years and is really good at it. Yet at those same factories, they had to use a human to spray paint a certain part by hand because the robots just couldn't do it. Other examples are assembly steps at really tight, weird angles where robot hands just can't match human fingers and hands. Manufacturers often design the entire line to avoid those tight spots and steps, but it's not always possible. Eventually, though, even those jobs will be replaced for sure.

The main question is where this is all leading. We're going to replace all of the "low level" jobs with robots and AI. Either the entire world goes to shit or everyone has to be rich, and we all know that won't happen. Many say "but the industrial revolution". Yeah, what about it? Machines and computers up to this point didn't replace the human workforce to the extreme extent this is going to, and it didn't happen in the span of just one year like it is happening right now. It's cool, but also scary at the same time, and I frankly don't know where things will go. But seeing how corporate greed has shaped things for the last 30 years, it's not looking good...

Part of this is just the inherent problem with people being forced to work to survive and companies seeking profit over everything else. If they can screw you over to make ever so slightly more cash, they will in a heartbeat, of course. We should seek social solutions for this rather than take it out on the technology.

 

I don't get the impression we're in for a situation where AI can take over most human jobs. So far the only areas it seems to be actively threatening are the ones producing drivel: low-quality blogs, content farms... I see it going the way self-driving cars have; it's always "almost there" and "at this pace..." while never actually being able to completely replace taxi drivers, much to Uber's chagrin.

 

Moreover, it's often possible to substitute a machine for a human, but the machine is more expensive than just hiring the human. Despite everything, companies still abuse workers in sweatshops because it's cheaper than automating the task... which isn't a good thing, but it goes to show that merely having technology that could in theory carry out a human task doesn't mean we'll use it. Large models like ChatGPT and Sora require immense computing power, in an age where generational hardware improvements aren't as large as they used to be.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


2 hours ago, Sauron said:

Part of this is just the inherent problem with people being forced to work to survive and companies seeking profit over everything else. If they can screw you over to make ever so slightly more cash, they will in a heartbeat, of course. We should seek social solutions for this rather than take it out on the technology.

 

I don't get the impression we're in for a situation where AI can take over most human jobs. So far the only areas it seems to be actively threatening are the ones producing drivel: low-quality blogs, content farms... I see it going the way self-driving cars have; it's always "almost there" and "at this pace..." while never actually being able to completely replace taxi drivers, much to Uber's chagrin.

 

Moreover, it's often possible to substitute a machine for a human, but the machine is more expensive than just hiring the human. Despite everything, companies still abuse workers in sweatshops because it's cheaper than automating the task... which isn't a good thing, but it goes to show that merely having technology that could in theory carry out a human task doesn't mean we'll use it. Large models like ChatGPT and Sora require immense computing power, in an age where generational hardware improvements aren't as large as they used to be.

The main issue is greedy corporations. Instead of asking "how can we make our employees' jobs easier and more efficient?", they either look at how to fuck with them more by doing dumb, unnecessary crap and ignoring feedback from workers, or they look at how to replace them entirely with something cheaper, that "cheaper" currently being AI, which they want to use for everything. There seems to be no middle path.


Meanwhile at OpenAI

 

 


I had a whole presentation on this recently. AI is not a replacement; it is an enhancement. People have got to stop this "AI takes everyone's job" rhetoric. It's going to backfire... it HAS backfired. I hate being a Debbie Downer all the time, but people are making my brain hurt with this topic recently.

It sucks, because AI is such a cool technology. It fascinates me, and in a lot of aspects it really makes me work more efficiently and produce my best work. Its ability to look at my essay, analyse it and give me feedback in seconds is AMAZING. But that's all it is. An ASSISTANT.

I'm usually as lost as you are


Two thoughts I had when I saw this video, and I am a bit scared that nobody else has pointed them out.

1) The video is super heavily edited. I am not even sure this is what he said because it is so heavily edited. Does anyone have the original video? In that 1-minute video I counted something like 18 cuts. That the large number of cuts doesn't ring an alarm bell for anyone else is, in my opinion, itself a pretty big alarm bell. Are people just so used to jump cuts that they don't notice them? Each and every cut could be removing important context that would bring nuance, or it could even be stitching together words to form sentences that were never said to begin with.

 

2) Is he really saying "don't learn computer science"? 

To me, the way I interpret what was said is that while we used to hear people say "everyone should learn how to program", we are quickly moving towards a world where few people will need to learn how to code; that we should aim to make tools that make it so everyone can "program" without knowing a programming language.

That people who used to need to learn programming to make their computers do something that was actually related to something else (like biology) will hopefully be able to just focus on the biology part in the future. Instead of needing to learn how to code because they need to write a program to check some protein reaction or whatever, they will hopefully in the future be able to just focus on learning the biology part and let the computer handle the "how to create a program to check this" part.

 

That we are moving away from "everyone needs to learn programming because programming is a skill every job needs" to "you don't need to learn programming if that's not going to be your primary focus".

 

I might be misinterpreting what he said, but I think that is a fairly logical explanation and view considering some particular word choices and context.


5 hours ago, LAwLz said:

That we are moving away from "everyone needs to learn programming because programming is a skill every job needs" to "you don't need to learn programming if that's not going to be your primary focus".

 

I might be misinterpreting what he said, but I think that is a fairly logical explanation and view considering some particular word choices and context.

 

That's certainly where my mind went first. Nothing close to "programming will be obsolete"; that would be ridiculous. Just that not everyone needs it. Perhaps there won't necessarily be as many lower-tier jobs for programmers, so be careful: a few years from now, the job market might be a bit more saturated than you would have originally thought.


5 hours ago, LAwLz said:

Instead of needing to learn how to code because they need to write a program to check some protein reaction or whatever,

Would you trust AI generated code? I wouldn't.

 

With tools like ImageJ, R and Mathematica they already solved the accessibility issue while maintaining the trust that the calculation will be correct.

For standard analytics like ELISA there is already software out there that converts the ELISA-plate to usable data.

People never go out of business.


With that attitude, they may finally break down the most out-of-date reason to stay away from AMD: the drivers. Kudos to them!

 


33 minutes ago, FlyingPotato_is_taken said:

Would you trust AI generated code? I wouldn't.

 

 

Today? No. In the future? I don't doubt for a second we'll get there. We probably didn't trust computers at all for many tasks once upon a time. 

 

What will the timeframe be this time? I have no clue, but the day will come.


17 minutes ago, Holmes108 said:

Today? No. In the future? I don't doubt for a second we'll get there. We probably didn't trust computers at all for many tasks once upon a time. 

You should never trust computers to do anything. If a computer is doing a dangerous or mission critical task it should always have redundancies and external safeguards. If anything, computers have become less reliable and predictable as the complexity of hardware and software has skyrocketed; while at the dawn of computing you could have a comprehensive understanding of everything going on with a given machine, today you have to place your trust in millions of lines of code written piecemeal by thousands of people who may or may not have documented any of it, running on proprietary and extremely complex hardware.

 

That doesn't mean that we should go back to having humans do what we entrust computers with today, but a healthy degree of suspicion should be maintained.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


1 hour ago, FlyingPotato_is_taken said:

Would you trust AI generated code? I wouldn't.

It depends on when we are talking and what scenario.

Would I trust the first thing ChatGPT spits out today when I ask it to write the firmware for a pacemaker? No.

Would I trust code written by ChatGPT to do a fairly mundane thing and I test it first? Absolutely.

 

I think it is foolish to make a blanket statement like "I wouldn't trust code written by AI", because it depends. I don't think code written by humans is inherently safer to blindly trust. It also depends on which code generator we are talking about.

 

 

Back in the 1940s and earlier, people were scared of trusting automatic elevators. They wanted a human to operate the machinery that took the elevator to a certain floor and handled accelerating and braking. I am not saying we should blindly trust AI today just because people were scared of things in the past, but it is important to understand that technology progresses, and what we perceive as dangerous and/or scary today might be completely safe and common in the future.

 

There are a lot of statements about future technology that sound very silly to us today. As a result, I am usually reluctant to make any absolute statements about the future progress of technology.


@LAwLz Code written by humans is in a language which allows it to be reviewed and follows a defined logic.

With natural language and AI code generation it's a black box.

People never go out of business.


9 minutes ago, LAwLz said:

Back in the 1940s and earlier, people were scared of trusting automatic elevators. They wanted a human to operate the machinery that took the elevator to a certain floor and handled accelerating and braking. I am not saying we should blindly trust AI today just because people were scared of things in the past, but it is important to understand that technology progresses, and what we perceive as dangerous and/or scary today might be completely safe and common in the future.

 

When the first fully automated metro systems were rolled out, the same thing happened, and as a consequence the drivers' union (e.g. for the TTC in Toronto, Canada) expressly killed any automated operation of their fully automation-enabled transit system. Meanwhile, SkyTrain in Vancouver has a pretty much flawless record running in ATO (Automatic Train Operation) because there was no driver union to oppose it at the time. That has since been undermined greatly by proposals for low-quality transit, and by saddling the system with turnstiles and security staff it wasn't designed to have originally.

 

AI needs to be treated much the same way: something that is tested to be of high quality, has no blind spots for unsupervised operation, and has multiple redundancies. This is why every system like this has triple redundancy. The SkyTrain has onboard computers, external computers, and human staff monitoring the system. It also has "break in case of emergency" red buttons to stop every train on the system should something serious occur. Elevators have the control panel in the car, a remote monitoring room, and automatic redundant braking systems should a mechanical failure occur.

 

I don't see "AI" ever developing good code, not the way it's being done now, because the way humans figure out whether code is good is by testing it, and the AI can't test those permutations of code on itself, for very obvious reasons.

 

 


4 minutes ago, LAwLz said:

It depends on when we are talking and what scenario.

Would I trust the first thing ChatGPT spits out today when I ask it to write the firmware for a pacemaker? No.

Would I trust code written by ChatGPT to do a fairly mundane thing and I test it first? Absolutely.

 

I think it is foolish to make a blanket statement like "I wouldn't trust code written by AI", because it depends. I don't think code written by humans is inherently safer to blindly trust. It also depends on which code generator we are talking about.

I've seen ChatGPT spit out code for a fairly mundane thing that would pass most tests but had fatal flaws in it (ones humans wouldn't make), so I'm not sure I would go as far as "absolutely" yet.

 

Where I stand on this topic: I think we will always need to learn computer science, because there will always be edge cases with any technology, and you will always need people to figure things out or check that the logic is actually proper. With that said, I think AI as a whole will greatly reduce the amount of time needed to complete tasks, and for non-critical things you could get by with just testing/verifying the results, especially for workloads that fit the concept of validating being easier than solving (e.g. if you solve a sudoku, it's easy for someone to validate, but solving it takes time).

 

What AI really allows us to do, though, is create work that is almost there, and then the computer science people step in and clean things up/remove bad logic.

 

 

I agree with your comment on blanket statements, and would add that AI has both good and ugly sides. AI tools can help search and refine code, and spot errors that a human might not really notice (like the Heartbleed bug, a simple human oversight that an AI would be able to quickly catch). Then we get to the ugly:

#include <stdio.h>

int overfloweg(int x) {
    /* Intended overflow check: if x + 100 wrapped around, it would be
       less than x. But signed overflow is undefined behavior in C. */
    if (x + 100 < x)
        printf("overflow error");
    return x + 100;
}

The above is technically valid(ish) but relies on undefined behavior; a programmer who is familiar with it would understand what is happening. At least one compiler with optimizations enabled, however, assumes signed overflow can never happen and optimizes the code so the if statement never runs, thus allowing the overflow through.
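For reference, the conventional fix is to compare against the limit before adding, so no signed overflow ever occurs and there is nothing for the optimizer to remove (a sketch; overflow_safe and the clamp-to-INT_MAX behavior are my own choices, not from the post above):

```c
#include <limits.h>
#include <stdio.h>

/* Check against INT_MAX *before* the addition: no signed overflow
   ever occurs, so the check cannot legally be optimized away. */
int overflow_safe(int x) {
    if (x > INT_MAX - 100) {
        printf("overflow error\n");
        return INT_MAX;  /* clamp rather than wrap */
    }
    return x + 100;
}
```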

 

Humans and AI are fallible, but putting the two together under the right conditions can be a wonderful thing.

 

I don't have to spend hours typing out the skeleton and boilerplate files needed to start a project. I just spend 5 minutes with the AI to spit out all that work, then 5 minutes to verify that what it did was correct enough, and continue with the portions I know it can't handle or wouldn't do properly. It saves a lot of time. The way I would put it: code written by AI lets the developer focus on the more important aspects of their work.

3735928559 - Beware of the dead beef


This article is like saying "don't learn to cook because DoorDash exists". There will be humans who need to code as long as computers exist. Just because many people, even inside the hobby, are not coders anymore doesn't remove the need for a human to be in charge.

 

The Nvidia CEO forgets who coded the AI, and who codes the upgrades to the AIs: it's humans...


4 hours ago, FlyingPotato_is_taken said:

@LAwLz Code written by humans is in a language which allows it to be reviewed and follows a defined logic.

With natural language and AI code generation it's a black box.

The output will still be human-readable code that someone can check.

The actual process of generating the code can be viewed as a "black box", but neither the input (the words a human feeds it) nor the output (code written in programming languages) will be black boxes.

 

Maybe I am misunderstanding you, but it sounds to me like you think this would create a new programming language that humans wouldn't understand. That is not what Jensen is saying, nor is anyone else saying that.

 

 

 

 

4 hours ago, wanderingfool2 said:

I've seen ChatGPT spit out code for a fairly mundane thing that would pass most tests but had fatal flaws in it (ones humans wouldn't make), so I'm not sure I would go as far as "absolutely" yet.

I meant "absolutely" as in "I could absolutely see myself using it in some scenarios". I would say it depends on what you are going to use the code for and how much you can test it.

 

 

4 hours ago, wanderingfool2 said:

Where I stand on this topic: I think we will always need to learn computer science, because there will always be edge cases with any technology, and you will always need people to figure things out or check that the logic is actually proper. With that said, I think AI as a whole will greatly reduce the amount of time needed to complete tasks, and for non-critical things you could get by with just testing/verifying the results, especially for workloads that fit the concept of validating being easier than solving (e.g. if you solve a sudoku, it's easy for someone to validate, but solving it takes time).

I think this is exactly what Jensen meant with his comment.

