
AI is a Big Scam

Luke and Linus mentioned this on the WAN Show (taking the foolish side that AI is not a big scam), but now I can't find the reference they were originally talking about.

 

But in broad strokes, I agree that AI is a big scam, insofar as most of the general public actually thinks that we now have machines that can think. The actual AI practitioners know better, and some (many?) of them fail to make this point clearly enough, precisely because the hype brings more attention to them and their profession.

 

And frankly Luke isn't helpful with his (not fully clarified) opinions, where he constantly repeats how scary the recent AI chat breakthroughs are without clarifying that they're scary primarily because humans will misinterpret these systems as intelligent when they're not. I agree a very scary thing is happening, but the scariest part is the persistent and growing belief that either these new systems can think, or that they mean systems that can think are right around the corner.

 

Or in other words, the scary part is the scammy part. If everybody properly understood that chatGPT is simply that game where you let your phone predict the next word, just with a few extra layers of structure that let it hang onto context across longer strings of words, then we wouldn't have people killing themselves because they believed the scam.
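(To make the comparison concrete, here is a minimal toy sketch in Python of that next-word game: tally which word follows which in some text, then generate by predicting over and over. The corpus is invented; an LLM runs the same predict-one-word loop, just with a learned statistical model and a much longer context.)

```python
import random
from collections import defaultdict

# Invented toy corpus; any text works.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Tally which words tend to follow which (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def predict_next(word):
    # Pick a plausible continuation, like a phone keyboard suggestion.
    options = following.get(word)
    return random.choice(options) if options else random.choice(corpus)

# "Write" a sentence by predicting the next word again and again.
word, out = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    out.append(word)
print(" ".join(out))
```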

 

A lot of this stems from the fact that we call all sorts of algorithms "AI" that have nearly nothing to do with AI. Neural networks as we use them today are simply a new kind of relational database: one in which the relationships are distributed mathematically across nodes, and which we fill in with billions or trillions of attempts until, by something like luck, we find a combination of weights that doesn't completely suck. This is as different from the network of neurons in our head as a pile of rocks is from... the network of neurons in our head.
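(To be fair to the practitioners, real training nudges weights along gradients rather than guessing blindly, but the "mass trial until it doesn't completely suck" framing can be sketched literally. A minimal toy example with invented XOR data and pure random search over six weights; note how the learned relationship lives in no single node:)

```python
import random

# Toy "relational" data to memorize: XOR input pairs -> outputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # Two hidden nodes and one output node. The input/output relationship
    # ends up smeared across all six weights, not stored in any one node.
    h1 = max(0.0, w[0] * x[0] + w[1] * x[1])
    h2 = max(0.0, w[2] * x[0] + w[3] * x[1])
    return w[4] * h1 + w[5] * h2

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in data)

# Keep guessing weight combinations, keeping whichever sucks least.
best = [random.uniform(-2, 2) for _ in range(6)]
best_loss = loss(best)
for _ in range(100_000):
    cand = [random.uniform(-2, 2) for _ in range(6)]
    cand_loss = loss(cand)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

print(best_loss, [round(forward(best, x), 2) for x, _ in data])
```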

 

ChatGPT is impressive breakthrough technology, but I REALLY wish we had never called it AI chat and instead called it a highly structured random text generator. Because there is no chat. We're not chatting with anything. We're generating random text and feeding our reactions back into the system to generate more random text.
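(That loop is short enough to write down. A minimal sketch, where generate() stands in for any text-continuation function, the toy predictor above or an LLM behind an API; nothing here is "chatting", it's one growing string of text:)

```python
def chat_loop(generate):
    # "generate" is any prompt-in, text-out continuation function.
    context = ""
    while True:
        user_text = input("you: ")
        if not user_text:
            break
        # Our "reaction" is appended to one growing string of text...
        context += "\nUser: " + user_text + "\nAssistant:"
        reply = generate(context)
        # ...and the sampled continuation is appended right back.
        # Printing it is the entire "conversation".
        context += " " + reply
        print("bot: " + reply)
```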

 

And any other description of what is happening IS a scam.


I don't think the scary part is the scammy part. The scary part isn't that they are intelligent or conscious (because they are neither).

 

The scary part is how quickly and easily they can process information with accuracy, and as a result, what humans will do with that.

 

The scary part isn't the text chat or image generation; it's the speed and efficiency with which these systems can identify things or make decisions, and what that could mean in the context of nefarious use cases.

People have already brought up the idea of AI being used to do pen testing. What if you were to train an AI on attack vectors, feed it info about a system, give it API and other access/functionality, and then have it quickly, across multiple instances, attack a system?

We haven't really seen what an AI can do if it has unrestricted access to a system, beyond Bing's Chat and the supposed tests OpenAI ran where it used logic and reasoning to trick a human into solving a CAPTCHA for it. That's already scary in itself.


It was not a coincidence that the shilling of tech pivoted from crypto to "AI". 

 

It is 100% being overhyped, and its capabilities are either exaggerated outright or people claim it can do things it simply can't yet.

 

It's a buzzword to generate interest and revenue and it is suckering in a lot of people.  

 

 


2 minutes ago, DarkSwordsman said:

The scary part is how quickly and easily they can process information with accuracy, and as a result, what humans will do with that.

 

The scary part isn't the text chat or image generation; it's the speed and efficiency with which these systems can identify things or make decisions, and what that could mean in the context of nefarious use cases.

But see, you're doing it just like everybody does. See the red highlights in your quotes. It can NOT identify things. It can NOT make decisions. The software generates text according to a complex set of rules. So if the chat lexicon includes other people identifying things, other people making decisions, the text-generating software mimics those responses.

 

Here's a challenge, and one that even I would have a lot of difficulty with: talk about chatGPT without anthropomorphizing the software into an intelligent entity.

 

I do agree that chatGPT will have a big and hard-to-predict impact on our culture. But I can't help being reminded of the discussion about self-driving cars from a few years ago. When we do ultimately get self-driving cars, they will have a MASSIVE impact on our culture. (Examples: Where will all those workers (professional drivers) go? If taxis are cheap and ubiquitous, why would anybody own a car? Have a garage? Have a driveway?)

 

But... self-driving cars didn't happen as fast as people claimed. We can debate how much of that was a scam too. It DID suffer from the same anthropomorphic nonsense (though to a lesser degree). It got to the point where I thought I'd vomit if I saw one more trolley-problem article about self-driving cars choosing to kill people. But I digress. The point is that with very complex systems like self-driving cars, a 99% done solution is inadequate, and figuring out that last 1% takes as long as or longer than the first 99% took.

 

To some degree, chatGPT has the same issues.  We can all see the terrible seams when the system goes off the rails.  On the other hand, the instant utility of chatGPT probably means that it will not suffer from the self-driving problem.  People are just going to start using it right away, even if it does kill people.


That is very scary but it is a very human problem.  It is not an AI problem.


5 minutes ago, Thomas A. Fine said:

But see, you're doing it just like everybody does. See the red highlights in your quotes. It can NOT identify things. It can NOT make decisions. The software generates text according to a complex set of rules. So if the chat lexicon includes other people identifying things, other people making decisions, the text-generating software mimics those responses.

 

Here's a challenge, and one that even I would have a lot of difficulty with: talk about chatGPT without anthropomorphizing the software into an intelligent entity.

 

I do agree that chatGPT will have a big and hard-to-predict impact on our culture. But I can't help being reminded of the discussion about self-driving cars from a few years ago. When we do ultimately get self-driving cars, they will have a MASSIVE impact on our culture. (Examples: Where will all those workers (professional drivers) go? If taxis are cheap and ubiquitous, why would anybody own a car? Have a garage? Have a driveway?)

 

But... self-driving cars didn't happen as fast as people claimed. We can debate how much of that was a scam too. It DID suffer from the same anthropomorphic nonsense (though to a lesser degree). It got to the point where I thought I'd vomit if I saw one more trolley-problem article about self-driving cars choosing to kill people. But I digress. The point is that with very complex systems like self-driving cars, a 99% done solution is inadequate, and figuring out that last 1% takes as long as or longer than the first 99% took.

 

To some degree, chatGPT has the same issues.  We can all see the terrible seams when the system goes off the rails.  On the other hand, the instant utility of chatGPT probably means that it will not suffer from the self-driving problem.  People are just going to start using it right away, even if it does kill people.


That is very scary but it is a very human problem.  It is not an AI problem.

 

Most of your argument is more about semantics though. Who cares if the word "decision" is used? The point is it can do things. And what it can do is going to have a massive impact on our society, for better or worse. (Probably better AND worse.)

 

Whether it's making jobs obsolete, writing a virus, or somehow launching nukes to kill us all, it could all be theoretically possible if we mismanage the "AI" technology. And when the nukes are coming down, you can sit there and argue whether it made a "decision" or just followed its programming, but does that really matter?

 

I'm not afraid of Armageddon, by the way. Just making a point.

19 minutes ago, DarkSwordsman said:

I don't think the scary part is the scammy part. The scary part isn't that they are intelligent or conscious (because they are neither).

 

The scary part is how quickly and easily they can process information with accuracy, and as a result, what humans will do with that.

 

The scary part isn't the text chat or image generation; it's the speed and efficiency with which these systems can identify things or make decisions, and what that could mean in the context of nefarious use cases.

People have already brought up the idea of AI being used to do pen testing. What if you were to train an AI on attack vectors, feed it info about a system, give it API and other access/functionality, and then have it quickly, across multiple instances, attack a system?

We haven't really seen what an AI can do if it has unrestricted access to a system, beyond Bing's Chat and the supposed tests OpenAI ran where it used logic and reasoning to trick a human into solving a CAPTCHA for it. That's already scary in itself.

 

Yes, all of this. Most of us know it's not sentient. It's still fair to reiterate that point though, as "AI" is certainly a bit of a misnomer. But that is how we colloquially use AI now (and have for ages... such as for a computer player in a video game). None of that is really relevant here, though.

 

I also think you're misrepresenting Luke's position, OP. 

 

But it's scary for all kinds of different reasons, with sentience (or lack thereof) really being irrelevant. 


1 minute ago, Heliian said:

 

It's a buzzword to generate interest and revenue and it is suckering in a lot of people.  

I'm not really in the venture capital world, but I suspect that the VC world is a nightmare of scams right now, precisely because of this.

 

However... I would also note that I have tracked the development of chat software over the years for other reasons, and it has LONG been a very hot topic. Hundreds of startups have come and gone in the last decade. And there's been a lot of national-level interest in this. Both Russia and China have heavily invested in a long string of (often U.S.-based) companies in order to control this future.

 

In other words, it was already a pretty overhyped field before chatGPT came along.

 

Maybe I should form a new startup that builds an AI in a blockchain cloud.  (That was sarcasm but I'm 100% sure that if I go looking, I will find someone claiming to do that.)


4 minutes ago, Holmes108 said:

Most of your argument is more about semantics though. Who cares if the word "decision" is used? The point is it can do things. And what it can do is going to have a massive impact on our society, for better or worse. (Probably better AND worse.)

 

When someone kills themselves because they thought they were talking to a real entity, it's not just semantics.  This is why I say the scary part is the scammy part.  The false belief in real general AI existing now is the part that can do the most immediate damage.


I agree with some parts of this. First, it's true that the recent applications of AI are scammy: I've seen many people think that chatGPT is something it isn't and believe that they can trust it blindly. Second, the applications and capabilities of AI have been heavily sensationalized by media (like Terminator), and things that have existed for a long time are suddenly perceived as scary because "AI" got tacked onto them. For example, yesterday I saw a post saying that AI was scary because it could possibly infect and spread to other computers. We've had that forever, and it's called a virus.

 

The part that I disagree with is what we call AI. AI is a very broad field based on creating things that mimic human intelligence, and it's been around for almost as long as modern computing has. Some examples of AIs are opponents in video games, chatbots, and image recognition, because they all attempt to make decisions as a human would, no matter how badly they might be implemented.
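A video game opponent is a good illustration of that colloquial sense: a handful of hard-coded rules reads as an opponent "deciding" what to do. A minimal sketch (all names and numbers invented):

```python
# Rule-based grid-chase opponent: "AI" in the video game sense.
def enemy_move(enemy_xy, player_xy, health):
    ex, ey = enemy_xy
    px, py = player_xy
    if health < 20:
        # "Decide" to flee: step away from the player.
        return (ex + (1 if ex >= px else -1), ey + (1 if ey >= py else -1))
    # Otherwise "decide" to chase: close the distance one step at a time.
    dx = 1 if px > ex else -1 if px < ex else 0
    dy = 1 if py > ey else -1 if py < ey else 0
    return (ex + dx, ey + dy)

print(enemy_move((0, 0), (3, 5), health=50))  # -> (1, 1): it gives chase
```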



Just now, Thomas A. Fine said:

When someone kills themselves because they thought they were talking to a real entity, it's not just semantics.  This is why I say the scary part is the scammy part.  The false belief in real general AI existing now is the part that can do the most immediate damage.

 

That's an edge case. I could find extreme examples of anything, on any topic, anywhere. I don't think it's reasonable to use it as a meaningful data point at the moment. Otherwise one school shooter saying Call of Duty made him do it could make the entire planet ban video games, and we all know that would be ridiculous.


21 minutes ago, dcgreen2k said:

The part that I disagree with is what we call AI. AI is a very broad field based on creating things that mimic human intelligence, and it's been around for almost as long as modern computing has. Some examples of AIs are opponents in video games, chatbots, and image recognition, because they all attempt to make decisions as a human would, no matter how badly they might be implemented.

Sure, but it's precisely because AI is an old field, with some AI algorithms having been in use for decades, that people in the field call all kinds of things "AI" that have very little to do with mimicking intelligent decision-making. This only increases the confusion. The entire class of evolutionary algorithms came from the AI field and is associated with it, but... come on... they're just better approaches to brute-forcing solutions to complex multi-variable systems.

 

Professionals are generally very bad at distinguishing between their niche professional definitions and the accepted public definition. When we call every problem-solving algorithm from the 70s "AI" because that's the field it came from, we're guaranteed to confuse the general public.


1 hour ago, Thomas A. Fine said:

But see, you're doing it just like everybody does. See the red highlights in your quotes. It can NOT identify things. It can NOT make decisions. The software generates text according to a complex set of rules. So if the chat lexicon includes other people identifying things, other people making decisions, the text-generating software mimics those responses.....

I 100% agree with what you have said. This relates to something I have warned of, and others have too: people are all too quick to offload the mental work of human discernment and judgement to AI systems such as ChatGPT. It is not some super smart brain.

ChatGPT is less smart than Brendan from the game Cyberpunk 2077. It's a text generator that can imitate and mimic the symbols we put in.


1 hour ago, Thomas A. Fine said:

It can NOT identify things.  It can NOT make decisions.  The software generates text according to a complex set of rules

ChatGPT is not the only AI that exists.

And yes, the entire point of a neural network is to make weighted decisions based on inputs and outputs. That's why they're so good at what they do.
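(Mechanically, a "weighted decision" at the smallest scale is just a weighted sum pushed through a threshold. A minimal single-neuron sketch, with made-up signals and weights:)

```python
# One decision unit: weighted inputs, a bias, and a threshold.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # fire / don't fire

# Made-up example: "flag this login attempt?" from three signals.
signals = [1.0, 0.0, 0.7]   # new device, odd hour, distant location
weights = [0.9, 0.4, 0.8]   # learned importance of each signal
print(neuron(signals, weights, bias=-1.0))  # -> 1 (flag it)
```

Stack millions of those units and train the weights, and you get the speed and scale I'm talking about.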

It isn't about ChatGPT taking over the world. It's about the maturity and application of neural networks and machine learning to do complex tasks.


5 hours ago, Thomas A. Fine said:

Because there is no chat.  We're not chatting with anything.  We're generating random text and feeding our reactions back into the system, to generate more random text.

I don't know, that certainly sounds like a human conversation, but in person we use random sounds instead of text.

 

So your whole issue with AI is that it's called "AI"?

It seems to all come down to your own definition of what "artificial intelligence" is. So let's see: what is intelligence to you?


LLMs are the real deal. They just need to be thought of as a naive, 5-year-old version of Einstein who is focused on spewing off something he memorized and was told to talk about, rather than as an infallible deity.


Overall, I trust chatGPT to give me better responses than the median person I went to high school with. 

No, seriously. I challenge you to ask someone you randomly meet what a Faustian bargain is or what backpropagation is used for.

 

The best part of these LLMs is you can ask them to recommend books to learn about the subject yourself. 


4 hours ago, Arika S said:

It seems to all come down to your own definition of what "artificial intelligence" is. So let's see: what is intelligence to you?

This is a pretty important question that no one seems to ask. A true general human-like AI would exhibit the entire human cognitive system, not merely mimic output behaviors.

 

It would have a stream of consciousness. In simplest terms, that would probably be trivial to program: you just keep feeding chatGPT's output back in as input. I'm not sure that counts (see below).
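(The trivial version really is just a few lines. A sketch assuming a hypothetical complete(prompt) wrapper around whatever LLM API you like; the function name and the whole setup are invented for illustration:)

```python
def stream_of_consciousness(complete, seed, steps=10):
    # "complete" is a hypothetical prompt -> continuation function
    # wrapping some LLM API; invented here for illustration.
    thought = seed
    for _ in range(steps):
        thought = complete("Continue this train of thought: " + thought)
        print(thought)
```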

 

It would have a motivational system: wants, needs, desires, and emotional reactions designed to provide guidance. It is less clear how to accomplish this. It also seems like a really bad idea. We typically engineer AI-based systems to accomplish specific tasks. That's not done via a motivational system, though there is typically some sort of evaluation feedback, which is always much more hard-wired.

 

It should be able to solve new problem types that it has never been exposed to before, drawing on experiences from seemingly unrelated solution spaces. This is even more vague, and we're farther away from something that can do this. It is, in some sense, THE critical step in producing true general AI, regardless of whether we give it an independent motivational system or not.

 

And lastly, it should be self-aware. That's not the same as having a stream of consciousness. It should be aware that the stream of text (thought) it is generating is meaningful, and that it relates to who it is, why it exists, and things in the world that are real.

 

These last two steps involve real understanding of the knowledge we hold. I've often held, and many AI experts agree, that a true human-like AI would need to have a body and senses, and experience the world in a way similar to how we experience it; otherwise a lot of the words it learns will never be truly understood.

 

We have NOTHING approaching this.  We have cleverly engineered algorithms, created by humans, to achieve human goals.  There is nothing approaching real thought or real intelligence.


3 hours ago, Thomas A. Fine said:

This is a pretty important question that no one seems to ask. A true general human-like AI would exhibit the entire human cognitive system, not merely mimic output behaviors.

 

It would have a stream of consciousness. In simplest terms, that would probably be trivial to program: you just keep feeding chatGPT's output back in as input. I'm not sure that counts (see below).

 

It would have a motivational system: wants, needs, desires, and emotional reactions designed to provide guidance. It is less clear how to accomplish this. It also seems like a really bad idea. We typically engineer AI-based systems to accomplish specific tasks. That's not done via a motivational system, though there is typically some sort of evaluation feedback, which is always much more hard-wired.

 

It should be able to solve new problem types that it has never been exposed to before, drawing on experiences from seemingly unrelated solution spaces. This is even more vague, and we're farther away from something that can do this. It is, in some sense, THE critical step in producing true general AI, regardless of whether we give it an independent motivational system or not.

 

And lastly, it should be self-aware. That's not the same as having a stream of consciousness. It should be aware that the stream of text (thought) it is generating is meaningful, and that it relates to who it is, why it exists, and things in the world that are real.

 

These last two steps involve real understanding of the knowledge we hold. I've often held, and many AI experts agree, that a true human-like AI would need to have a body and senses, and experience the world in a way similar to how we experience it; otherwise a lot of the words it learns will never be truly understood.

 

We have NOTHING approaching this.  We have cleverly engineered algorithms, created by humans, to achieve human goals.  There is nothing approaching real thought or real intelligence.

You're giving way too complex an answer.

 

It's not AI, because it cannot come to conclusions by itself. Not yet, anyway.

 

Example with ChatGPT: ask it for a conversion factor between Cv and Kv values (flow parameters for valves; Americans use Cv, most of the world uses Kv). ChatGPT readily answers with a conversion factor. Directly afterwards, ask ChatGPT to derive the conversion factor between Cv and Kv; it derives away, shows multiple steps, and ends up with a different conversion factor that it happily presents. It fails to conclude that one (or both) of the factors it presented must be wrong; it even fails to notice that it has given two different answers to the same question, a question that is not ambiguous at that.
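(For reference, the single correct factor does follow from the unit definitions. Assuming the standard definitions, Cv is flow in US gal/min at a 1 psi drop and Kv is flow in m³/h at a 1 bar drop, the derivation it should have produced is:)

```latex
% 1 US gal/min = 0.2271 m^3/h,  1 psi = 0.06895 bar
Q = C_v \sqrt{\Delta P_{\mathrm{psi}}}
\;\Rightarrow\;
K_v = \frac{0.2271}{\sqrt{0.06895}}\, C_v \approx 0.865\, C_v
```

Both of its answers can be checked against that 0.865 in one step, which is exactly the kind of self-check it never makes.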


On 4/13/2023 at 6:29 PM, cmndr said:

LLMs are the real deal. They just need to be thought of as a naive, 5-year-old version of Einstein who is focused on spewing off something he memorized and was told to talk about, rather than as an infallible deity.

 

 

Overall, I trust chatGPT to give me better responses than the median person I went to high school with. 

Have you maybe considered talking to your peers, because they are real people? All it does is give you what it thinks you want to hear; it has no intentions. 5-year-olds don't just spew off what they memorized; they have feelings and empathy and are, you know, people.


There are a lot of issues around AI, like how much choice we actually need to give it if set rules can steer it toward the "best" outcome.

Also, we don't have full AI chips yet, nor do we have those brain chips either, but it's all still in progress and may be much further off.

I do hope we get some fast AI chips soon, given the mentions of energy/cooling costs for big AI data centers. There has been some development around AI chips (some of it fairly recent), but that is still in progress too.

Also, if an AI system had to make "choices", they could conflict with the "best choice" implied by the data it collects, although feelings could be taken as data too; not so much for choices of its own, but for the scenarios and such that current AIs create. Simulate and "stimulate" the data, just like what I do with LTT underwear when sitting outside in the deep snow drinking hot chocolate.


I find it cringey that they keep saying that it's overhyped, a scam, whatever, while they keep contributing to the hype at the same time.


On 4/14/2023 at 4:43 AM, Thomas A. Fine said:

But in broad strokes, I agree that AI is a big scam, insofar as most of the general public actually thinks that we now have machines that can think. The actual AI practitioners know better, and some (many?) of them fail to make this point clearly enough, precisely because the hype brings more attention to them and their profession.

Ehh, I'm not so sure about that. Yes, the public often misunderstands. No, I don't think it's that the practitioners are failing to make the point clearly enough. That's the fault of the reporters, who often fail to understand, or, if they do understand, intentionally write clickbait nonsense. It isn't the fault of the practitioners, who (if you deep-dive and actually listen to the podcasts or read their blogs) are accurate and forthcoming about the limitations. It's the layman versions written by tech reporters that are infuriatingly inaccurate. Remember that tech reporters are (with rare exceptions) not experts and themselves don't understand what they're writing about.

 

Here's the thing, this isn't limited to AI. I've worked in the IT industry longer than some of the people here have been alive. I've seen so many IT breakthroughs that were misreported and subsequently misunderstood that I no longer care what the reporters say, because in all likelihood they're wrong. You're better off going straight to the original source. With social media it's easier than ever to get the truth straight from the horse's mouth.

 

The only situation where I've noticed actual falsehoods spread by the practitioners is in cryptocurrencies. I studied mathematics and engineering at university and now work on the engineering side of IT, but my passion was always mathematics, so I was naturally interested in the maths behind cryptocurrencies. I read more than most and even wrote a crypto wallet to understand it better. I'm not claiming to be an expert, but I know more than most, and I've seen so many lies from the crypto practitioners that I concluded early on it's a scam. Either the practitioners don't know they're lying, or they do and lie anyway; either way, I want nothing to do with cryptocurrencies.

 

 


1 hour ago, tuvok86 said:

I find it cringey that they keep saying that it's overhyped, a scam, whatever, while they keep contributing to the hype at the same time.

I just think there are multiple dimensions to it. Anyone who claims that this is true intelligence is scamming. Anyone who claims that that capability is right around the corner is scamming. Anyone who says the robot takeover is about to happen is scamming.

 

But people who are simply saying this is a massive technological shift in how things are done are not. There MIGHT be some degree of overhyping there. We know there are deep flaws in these chat AIs; it's less clear how much this will actually slow their acceptance. I said above that this will be different from self-driving cars, where people instinctively demand a 99.999% success rate. I suspect people will be willing to live with an 80-90% success rate from chatGPT, because the vast majority of the time, if it goes off the rails, it isn't killing someone. Though clearly there are corner cases where it is.

 

I think the best way to keep your eye on the ball is to avoid anthropomorphizing as much as possible, filter out anyone who is worried about what the AI will do, and focus on concerns about what humans will do with this human-created, human-engineered, human-managed, human-curated technology.


3 minutes ago, Thomas A. Fine said:

Anyone who says the robot takeover is about to happen is scamming.

I would argue that it has already happened. Currently, we do not have androids running around, but our lives are driven by machines that are doing tasks for us. They're everywhere: cars, smartphones, PCs, etc. They all do the "thinking" for us, regardless of whether they're actually thinking or not. They are removing humans from the equation.

 

So the robot takeover has already happened, and is only going to ramp up as technology progresses.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch

Link to comment
Share on other sites

Link to post
Share on other sites

7 minutes ago, nhand42 said:

Ehh, I'm not so sure about that. Yes, the public often misunderstands. No, I don't think it's that the practitioners are failing to make the point clearly enough. That's the fault of the reporters, who often fail to understand, or, if they do understand, intentionally write clickbait nonsense. It isn't the fault of the practitioners, who (if you deep-dive and actually listen to the podcasts or read their blogs) are accurate and forthcoming about the limitations. It's the layman versions written by tech reporters that are infuriatingly inaccurate. Remember that tech reporters are (with rare exceptions) not experts and themselves don't understand what they're writing about.

I don't entirely disagree with this, but I feel that blaming the journalists is incomplete. (And I have no problem with blaming journalists; I think corporate takeovers of media have fundamentally changed reporting such that it is always about the biggest profit.) But clearly there are people who are not journalists but who (in the eyes of the public) are still experts to be trusted. Elon Musk is a perfect example. But also Stephen Hawking. When you get leaders of industry and science weighing in, badly, that's going to have a huge impact. You can't really blame journalists for trusting people who (from their point of view) ought to know what they're talking about.


1 minute ago, Godlygamer23 said:

I would argue that it has already happened. Currently, we do not have androids running around, but our lives are driven by machines that are doing tasks for us. They're everywhere: cars, smartphones, PCs, etc. They all do the "thinking" for us, regardless of whether they're actually thinking or not. They are removing humans from the equation.

 

So the robot takeover has already happened, and is only going to ramp up as technology progresses.

Well, that's a bad argument.  When people talk about AI takeovers, they mean an artificial intelligence has seized control.


What you are talking about, in part, is a degree of corporate (i.e. human) control being exerted over humanity.  And, in part, you are downplaying how these technologies actually do give us more opportunities, rather than merely trapping us.  (In other words, both things can be true at the same time: it's a benefit, and it's a trap.)


Just now, Thomas A. Fine said:

Well, that's a bad argument.  When people talk about AI takeovers, they mean an artificial intelligence has seized control.

You said robot takeover. I said nothing about AI. AI is used loosely today, and I'm not going to define what it means to be artificially intelligent, because it's been tainted by marketing departments across different companies and would require another topic to define. A machine can simply be executing instructions that take processing work away from us, like ABS brakes: not something we have to worry about, because computers do it for us.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch

Link to comment
Share on other sites

Link to post
Share on other sites
