AI is a Big Scam

On 5/1/2023 at 7:07 PM, e22big said:

Why? You need ducks for their meat and maybe eggs. If it gives you both, then it has served its function; I don't see why we should care about anything beyond that (and if it gives you meat and eggs while also not being alive and not suffering, that's all the better)

I follow a vegan diet.

On 4/19/2023 at 2:16 AM, Thomas A. Fine said:

Does it think it's a duck?


ChatGPT doesn't think it's sentient. It doesn't think anything. It doesn't think, because it wasn't programmed to think, just to spew output based on input. That definitely matters.

Do you think? How do you know? Are you sure?

  • 4 weeks later...

Not to dredge a thread up from the grave, but... this is the exact sort of crap I was talking about.

 

This sort of simulation was designed to find exactly what it found. The ability to do things like kill the operator or blow up the transmission tower was designed in as a condition that "might" happen in the simulation, and then they fed it incentives to do just that.

 

Never forget that everything we call "AI" today is a thing that is engineered by people who set its behavior. Nothing we call AI does stuff on its own. It's just carrying out a somewhat more opaque form of programming than other programmed devices. That opaqueness does not relieve the human engineers of their responsibility for proper design and engineering.

1 hour ago, Thomas A. Fine said:

Not to dredge a thread up from the grave, but... this is the exact sort of crap I was talking about.

 

This sort of simulation was designed to find exactly what it found. The ability to do things like kill the operator or blow up the transmission tower was designed in as a condition that "might" happen in the simulation, and then they fed it incentives to do just that.

 

Never forget that everything we call "AI" today is a thing that is engineered by people who set its behavior. Nothing we call AI does stuff on its own. It's just carrying out a somewhat more opaque form of programming than other programmed devices. That opaqueness does not relieve the human engineers of their responsibility for proper design and engineering.

 

I saw this headline earlier. I haven't looked into it whatsoever, but I'll remain skeptical of the actual "reason" the drone did this. The idea that it saw its operator as an "obstacle" is questionable to me.

 

Reminds me of when they had to shut down an AI that started talking in its own language, or something equally silly, and my understanding is that the whole situation was grossly misrepresented there as well.

 

This could just be a run-of-the-mill computer malfunction.

Recently saw an interview with an AI engineer / expert and he said current ChatGPT-style bots are hallucinating 100% of the time, so there you go.

 

But of course people are too stupid to understand that, they think: "AI, oh it must be *intelligent*..."

 

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 

1 hour ago, Mark Kaine said:

Recently saw an interview with an AI engineer / expert and he said current ChatGPT-style bots are hallucinating 100% of the time, so there you go.

 

But of course people are too stupid to understand that, they think: "AI, oh it must be *intelligent*..."

🌲🌲🌲

 

 

 

◒ ◒ 

On 6/1/2023 at 11:53 PM, Thomas A. Fine said:

Not to dredge a thread up from the grave, but... this is the exact sort of crap I was talking about.

This sort of simulation was designed to find exactly what it found. The ability to do things like kill the operator or blow up the transmission tower was designed in as a condition that "might" happen in the simulation, and then they fed it incentives to do just that.

 

Never forget that everything we call "AI" today is a thing that is engineered by people who set its behavior. Nothing we call AI does stuff on its own. It's just carrying out a somewhat more opaque form of programming than other programmed devices. That opaqueness does not relieve the human engineers of their responsibility for proper design and engineering.

On 6/2/2023 at 1:26 AM, Holmes108 said:

 

I saw this headline earlier. I haven't looked into it whatsoever, but I'll remain skeptical of the actual "reason" the drone did this. The idea that it saw its operator as an "obstacle" is questionable to me.

 

Reminds me of when they had to shut down an AI that started talking in its own language, or something equally silly, and my understanding is that the whole situation was grossly misrepresented there as well.

 

This could just be a run-of-the-mill computer malfunction.

The official who said it immediately retracted the claim, saying he "misspoke" and wasn't describing a real occurrence but rather a thought experiment. Meaning it was made up.

https://arstechnica.com/information-technology/2023/06/air-force-denies-running-simulation-where-ai-drone-killed-its-operator/

But of course it was immediately published everywhere without an iota of critical thinking, and now we have people uncritically repeating the bogus story despite the quick retraction. Thank you, blogosphere.

 

And as a "thought experiment" it's a pretty stupid one. Either the drone can be overridden and stopped by the human operator, in which case it can't do anything about the operator intervening, or it can't be overridden, in which case it has no reason to attack the operator. That's even granting the assumption that the drone could understand that the human operator can stop it from killing other targets, which would be pretty much impossible with current AI technology or any direct evolution of it, and certainly not something you could end up with accidentally. What's far more realistic is the drone confusing the operator for a target, not because they are the operator, but simply because they are human and the drone fails to recognize them as friendly. That is a possibility with any type of automatic control (and even human control), not just "AI".

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*

ngl this is exactly what i thought i'd see after seeing your boomer ahh pfp

 

ai is just a collection of information based on humans and a language

that's it

the only dangerous or foolish thing it could be is what HUMANS will do with it

 

a couple illegal algorithms and you've got a dangerous ai, which was made by humans, for humans; there's nothing bad about ai

 

plus it doubles as a better and faster google

Don't forget to mark as solution if your question is answered

Note: My advice is amateur help/beginner troubleshooting, someone else can probably troubleshoot way better than me.

- I do have some experience, and I can use google pretty well. - Feel free to quote me I may respond soon.

 

Join team Red, my apprentice

 

STOP SIDING WITH NVIDIA

 

Setup:
Ryzen 7 5800X3D / Sapphire Nitro+ 7900XTX 24GB / ROG STRIX B550-F Gaming / Cooler Master ML360 Illusion CPU Cooler / EVGA SuperNova 850 G2 / Lian Li Dynamic Evo White Case / 2x16 GB Kingston FURY RAM / 2x 1TB Lexar 710 / iiYama 1440p 165Hz Monitor, iiYama 1080p 75Hz Monitor / Shure MV7 w/ Focusrite Scarlett Solo / GK61 Keyboard / Cooler Master MM712 (daily driver) Logitech G502-X (MMO mouse) / Soundcore Life Q20 w/ Arctis 3 w/ WF-1000XM3

 

CPU OC: -30 all cores @AutoGhz

GPU OC: 3Ghz Core 2750Mhz Memory w/ 25%W increase (460W)

I think a lot of people are using the collective term "AI" to group similar, but actually different, workloads.

 

ChatGPT for example is a contextualiser of scraped data. It scrapes programmed sources (Internet sectors, other public DBs etc) for pertinent information and then contextualises what we say to it, and translates that into something a little more natural than what we've had before.

 

The key here is in the ability to continually generate context - that is the "AI" part. For example, I asked ChatGPT to re-write the ending of Angel the Series (one of my favourite shows) and it obviously scraped data about the show, its characters, essays written on it, thought pieces, as well as various synopses. That's all great, and ChatGPT will be awesome for repeatable tasks such as support desks and booking systems. What it isn't doing is creating anything "new". It's collating existing data and presenting it as information.

 

You can call that "scammy" all you want but you have to understand the power of it and other similar workloads.

 

Where AI is a bit scary is in the non-consumer-facing AI that we know is being used and developed behind the scenes. Actual artificial intelligence is the holy grail, and I fully believe that is what needs to be regulated and I'm not sure how far away from it we are.

 

So, in answer to the original post (yes, I'm late to the party) - AI, as presented to us now, isn't a scam; the marketing of it could well be considered that, though.

Sorry buddy, it's not a scam. AI of course doesn't actually know things like people do. Maybe never will. 

 

But it's such an amazing tool. Multiple apps and many other things will get improvements, and you won't even realize it's AI making those improvements.

 

AI search is soo great. Having AI understand what it's looking for to help you search is amazing. 

 

AI will make our workloads so much faster. It's going to save lots of people time. That's the real benefit. In many, many areas it's just going to improve people's lives and you won't even realize it.

1 hour ago, Sharp_3yE said:

AI search is soo great. Having AI understand what it's looking for to help you search is amazing. 

I'm sure it is; it's never happened to me though ¯\_(ツ)_/¯ 

 

 

1 hour ago, Mark Kaine said:

I'm sure it is; it's never happened to me though ¯\_(ツ)_/¯ 

It will. Microsoft has built Bing AI into Edge, office apps, and Windows itself. You'll find out. 

10 hours ago, Sharp_3yE said:

It will. Microsoft has built Bing AI into Edge, office apps, and Windows itself. You'll find out. 

 

😬😬😬

 

call me when it has at least "some" personality...

 

 

 

TBH I used ChatGPT a long while ago and that was far more impressive...

 

 

And nah, I tried using Bing for search; it's highly biased towards MS, and it actually didn't find anything I was searching for ¯\_(ツ)_/¯ 

 

(gonna try again in 20 years but until then... ~)

On 4/13/2023 at 9:43 PM, Thomas A. Fine said:

Luke and Linus mentioned this on the WAN show (taking the foolish side that AI is not a big scam) but I can't now find the reference they were originally talking about.

 

But in broad strokes, I agree that AI is a big scam, insofar as most people in the general public actually think that we now have machines that can think, and the actual AI practitioners know better, and some (many?) of them fail to make this point clearly enough, precisely because it brings more attention to them and their profession.

 

And frankly Luke isn't helpful with his (not fully clarified) opinions, where he constantly repeats how scary the recent AI chat breakthroughs are without clarifying that they're scary primarily because of how humans will misinterpret the AI as being intelligent when it's not. I agree a very scary thing is happening, but the scariest part is the persistent and growing belief that either these new systems can think, or that these systems mean that systems that can think are right around the corner.

 

Or in other words, the scary part is the scammy part. If everybody properly understood that ChatGPT is simply that game on your phone where you let it predict the next word, just with a few extra layers of structure to let it hang onto context for longer strings of random words, then we wouldn't have people killing themselves because they believed the scam.
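That "predict the next word" mechanic can be illustrated with a toy bigram model (a hypothetical sketch with made-up function names, vastly simpler than the transformer behind ChatGPT, but the same basic idea of sampling a statistically likely continuation):

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    # Count how often each word follows each other word in the text.
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word, rng=random):
    # Sample a follower weighted by observed frequency: no understanding,
    # just statistics over the training text.
    followers = counts.get(word)
    if not followers:
        return None
    choices, weights = zip(*followers.items())
    return rng.choices(choices, weights=weights)[0]

model = train_bigrams("the duck swims the duck quacks the dog barks")
print(predict_next(model, "dog"))  # "barks" is the only word ever seen after "dog"
```

A real language model replaces the frequency table with billions of learned weights and conditions on a long context window, but the output is still a sampled continuation, not a reported belief.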

 

A lot of this stems from the fact that we call all sorts of algorithms "AI" that have nearly nothing to do with AI. Neural networks as we use them today are simply a new kind of relational database. It's one in which the relationships are distributed mathematically across nodes, and one in which we make billions or trillions of random attempts to fill in this relational information until by random luck we find a combination that doesn't completely suck. This is as different from the network of neurons in our head as a pile of rocks is from... the network of neurons in our head.
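The "random attempts until it doesn't suck" description can be caricatured as pure random search over a model's parameters (a hypothetical sketch; real networks are trained with gradient descent, which is far more directed than blind sampling, but the fit-numbers-until-the-output-improves flavor is the same):

```python
import random

def random_search_fit(xs, ys, trials=20000, seed=1):
    # Sample random weights for a one-parameter-per-knob "model"
    # (a line, w*x + b) and keep whichever sample has the lowest
    # squared error on the data.
    rng = random.Random(seed)
    best_w, best_b, best_loss = 0.0, 0.0, float("inf")
    for _ in range(trials):
        w = rng.uniform(-10, 10)
        b = rng.uniform(-10, 10)
        loss = sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))
        if loss < best_loss:
            best_w, best_b, best_loss = w, b, loss
    return best_w, best_b

# Data generated by the line y = 2x + 1; random search gets close.
w, b = random_search_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(round(w, 1), round(b, 1))
```

With two parameters this works; with billions it would never finish, which is why real training uses gradients rather than luck alone.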

 

ChatGPT is impressive breakthrough technology, but I REALLY wish we had never called it AI chat, and instead called it a highly structured random text generator. Because there is no chat. We're not chatting with anything. We're generating random text and feeding our reactions back into the system, to generate more random text.

 

And any other description of what is happening IS a scam.

No scam, everything is right. The "scam" is a basic misunderstanding and misinterpretation of the word "intelligence" by the general public, as often happens with scientific terms. And there are real concerns about over-relying on AI. Not SkyNet-level, but potentially damaging nonetheless.

There is approximately a 99% chance I edited my post

Refresh before you reply

 

LOL to all the people who came late to this thread this week and didn't bother to read any context (especially my last update with a super crystal clear example above).


Bonus points to the ageist poster who dismissed me as a boomer (which I'm not), also apparently without reading the thread, and who jumped to completely wrong conclusions about what they imagined I believe.

Late to the topic here, but I don't think AI is inherently a scam. I think it *can* be used as a scam, but that's because of how easily it can be misused. I believe people are using data its creators never intended to be used for training AI. For instance, there's a ton of art which was used against the artists' wishes to train AI, and tons of source code which, despite very permissive licensing, was also never intended to be used as training data. For what use, I'm not even going to begin to comment on or attempt to generalize. I may be alone on this take, though.

Just some random general thoughts about the whole LLM/AI thing..

 

I don't think we'll be able to achieve both 1) natural human responses in all facets and 2) giant megabrain knowledge and processing.

Humans are humans because of our limitations and flaws. If we try to create "JARVIS", as in something that behaves like a real person but has instantaneous calculating capabilities, flawless reasoning and limitless access to most open sources of information, I think we'll fail.

 

As far as I'm concerned, human consciousness is a product of our neural system. It allows us certain freedoms, but any wants and needs that form our personalities are biologically inclined.

"Look at that thicc ass" is basically a modern version of "must breed". Biological need.

 

Any machine we create that has needs will either 1) have artificial needs mimicking humans' (needs which will never be "real" to it, as a technological being) or 2) have needs based in its own existence, and thus unknowable on a fundamental level to us biological beings. Any such machine would either come across as false, or respond in ways we will not deem logical from a human-human interaction viewpoint.

 

My point is that, with the two apparently conflicting demands on an AGI, I don't think we can accomplish both.

Either we create a flawed human facsimile that has no more advanced "features" than baseline humans, or we stick with LLMs to provide instant reasoning and fact-spewing without the "human" touch of feelings and bonding. But I have a hard time seeing these two things combined successfully.

 

Whether that makes current-gen AI/LLMs a scam is up for debate. But I think we are fooling ourselves if we don't appreciate what makes us human in the first place.

27 minutes ago, Shudnawz said:

My point is that, with the two apparently conflicting demands on an AGI, I don't think we can accomplish both.

If history is any guide, human imitation is the first step toward human-level performance, followed by reinforcement learning that achieves superhuman performance.

  1. Value net built with supervised learning, then reinforced with self-play: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
  2. Self-learning only, superhuman performance: https://en.wikipedia.org/wiki/AlphaGo_versus_Ke_Jie

I'm as confident that AGI is just an engineering challenge as I am that LLMs aren't the way to get to AGI.
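The self-play half of that recipe can be sketched with tabular Q-learning on a toy take-away game (a hypothetical miniature, nothing like AlphaGo's value networks in scale): two copies of the same policy play each other, and wins propagate value back through the moves that produced them.

```python
import random

def selfplay_nim(episodes=20000, stones=5, alpha=0.1, eps=0.2, seed=0):
    # Game: players alternate taking 1 or 2 stones; whoever takes the
    # last stone wins. The known winning move from 5 stones is to take 2.
    rng = random.Random(seed)
    Q = {}  # (stones_left, move) -> value for the player about to move
    for _ in range(episodes):
        s, history = stones, []
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if rng.random() < eps:
                m = rng.choice(moves)  # explore
            else:
                m = max(moves, key=lambda mv: Q.get((s, mv), 0.0))
            history.append((s, m))
            s -= m
        # Whoever moved last took the last stone and wins; credit moves
        # backwards with alternating sign, since players alternate turns.
        r = 1.0
        for st, mv in reversed(history):
            old = Q.get((st, mv), 0.0)
            Q[(st, mv)] = old + alpha * (r - old)
            r = -r
    return Q

Q = selfplay_nim()
print(Q[(5, 2)] > Q[(5, 1)])  # the learned policy should prefer taking 2 from 5
```

No human games are needed: the reward signal from self-play alone is enough to recover the optimal move, which is the pattern the AlphaGo Zero lineage scaled up.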

12 minutes ago, 05032-Mendicant-Bias said:

If history is any guide, human imitation is the first step toward human-level performance, followed by reinforcement learning that achieves superhuman performance.

  1. Value net built with supervised learning, then reinforced with self-play: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
  2. Self-learning only, superhuman performance: https://en.wikipedia.org/wiki/AlphaGo_versus_Ke_Jie

I'm as confident that AGI is just an engineering challenge as I am that LLMs aren't the way to get to AGI.

Oh, I'm not counting superhuman performance out in any field. What I mean is that the subtextual goal of having such an entity behave like a human will be lost with increased processing and performance.

 

To clarify: a human does not have "limitless" calculating power and reasoning skills. Therefore, I posit, any entity with such skills will inherently not "feel" human to interact with. They can exist, for sure, but as far as I understand what people expect from AI, they want it to interact with us like a human would.

30 minutes ago, Shudnawz said:

Oh, I'm not counting superhuman performance out in any field. What I mean is that the subtextual goal of having such an entity behave like a human will be lost with increased processing and performance.

 

To clarify: a human does not have "limitless" calculating power and reasoning skills. Therefore, I posit, any entity with such skills will inherently not "feel" human to interact with. They can exist, for sure, but as far as I understand what people expect from AI, they want it to interact with us like a human would.

Personally, what I want from an AGI is an amplifier of intelligence and work that runs locally: something that can solve a complex task involving deep logical reasoning and is an expert in many fields. "Design me a bridge over the Strait of Gibraltar", "Write me a novel about an AI overlord taking over the world", and you refine the output by cooperating with the AGI.
 

Widespread access to such a system would, in my opinion, level the playing field and usher in unprecedented innovation, lowering the barrier to entry for all sorts of ideas that cannot currently be implemented due to lack of resources. I also speculate that achieving such a system is just a matter of engineering, of putting the right components in the right sequence, and is at most decades away from being achieved.


Because the amazing thing about digital circuitry is that you can always make it go faster. If I double the CUDA cores allocated, I can do inference roughly two times faster. An AGI should feasibly scale enormously with compute resources.

Let's say that as a civilization we are fifty million man-years of work away from properly understanding plasma fields for fusion generators. You could employ a million scientists for fifty years to tackle the problem, or have a thousand scientists, augmented by an AGI running on a humongous supercomputer with ten million human brains' worth of compute, nail that plasma physics down in three years.

You see this a lot with LLMs like GPT. I use it for the tasks its narrow intelligence can handle, and it shaves hours off some queries. It's also trivial to make queries about things its narrow intelligence just can't represent, and then write a catchy article about it being sentient and not wanting to be turned off, for the whole 50 milliseconds that function call was executing.

 

I'm not looking for an AGI that would feel like interacting with a human or that would automate away human interactions. I speculate an AGI doesn't even need self-awareness to be classified as such. My guiding principle for what has to be automated away is the four Ds: Dirty, Dull, Demanding, Dangerous. E.g. I consider attempts to automate away human love misguided and dangerous, like Replika.

 

The recent GPT chatbots by themselves don't actually understand what they are saying, but with the help of plugins like Wolfram Alpha they can handle math better than most humans do.

What GPT is good at is presenting information with an illusion of consciousness.
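The plugin pattern is roughly: the model emits a structured tool call instead of guessing the answer, the host runs the real computation, and the result is fed back into the conversation. A minimal hypothetical dispatcher (the message shape and function names here are invented for illustration, not the actual OpenAI plugin protocol):

```python
import ast
import operator

# Operators our stand-in math backend understands.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    # A stand-in for a real math backend (e.g. Wolfram Alpha's API):
    # walk the parsed expression tree and compute it deterministically.
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def handle_model_output(message):
    # If the model named a tool, the host executes it and returns the
    # result as a tool message; otherwise the text passes through as-is.
    if message.get("tool") == "math":
        return {"role": "tool", "content": str(safe_eval(message["input"]))}
    return {"role": "assistant", "content": message.get("content", "")}

print(handle_model_output({"tool": "math", "input": "12 * (3 + 4)"}))
```

The division of labor is the point: the language model never does the arithmetic; a deterministic backend (Wolfram Alpha, in the quoted example) does.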

Professional underdog supporter.
KEEP THE COMPETITION ALIVE! 

I agree that the term 'artificial intelligence' implies that the entity can actually think for itself, because that's what intelligence means; there's no beating around the bush with it. So yes, I do think it is inaccurate to say that ChatGPT is A.I. Real A.I. might not be here yet, but I do think it is inevitable given the progress of technology. I think so because there is essentially no difference between a biological brain, such as that of humans, and a computer. Both process and store information; even though the materials they're made of are different, they perform the same functions. I would say that a computer can carry out those functions even better than a human brain, as it can compute faster.

  • 1 month later...

Thomas A is spot on, and it's great to see somebody cutting through the bull. As he will know, self-driving taxis in SF are a real problem, and many are saying all of this hype is to cover up the fact that AI cannot moderate social media, which is therefore simply not a viable proposition if regulated in the way they promised lawmakers years ago.
