The Human Brain Project Creates an AI That Runs On a Simulated Human Brain, Down to the Individual Neurons

4 minutes ago, My_Computer_Is_Trash said:

I'll make my previous response clearer.

 

What do you want me to say? Do you want me to say, "Yes, this technology is here right now and we know exactly how the human brain works and we've already figured it out?"

OK, so a new technology is emerging, and the question of going down one path versus another has arrived. One person is debating the pros and cons of the two paths, while the other insists they will figure out a way to make one path perfect, disregards all of the possible cons, and then declares, "See, my path is the right choice." How do you discuss the pros and cons with such a person? I guess you don't?


I'm always down for a good thought experiment, but I don't personally think that "programming emotions" will ever even be a thing anyway. I mean, what does that even mean? We can make it act like it's emotional. But true emotions? Particularly ones that could actually be useful in theoretically guiding its 'ethics' and decisions? That's full-on Sci-Fi (emphasis on the Fi) to me.


9 minutes ago, Brooksie359 said:

OK, so a new technology is emerging, and the question of going down one path versus another has arrived. One person is debating the pros and cons of the two paths, while the other insists they will figure out a way to make one path perfect, disregards all of the possible cons, and then declares, "See, my path is the right choice." How do you discuss the pros and cons with such a person? I guess you don't?

If the one path can be made perfect, there are no cons. I am merely having a debate with you and trying to take all of your cons into consideration, after which I explain why the cons you mention are not valid. Or, if you have a concern that is valid, I will consider it so that the method of giving AI emotions doesn't end up destroying the world. You're right: it's pretty dangerous. But that's exactly why I'm considering all of the possibilities, or trying to give you an understanding of why a concern of yours isn't going to happen.


10 minutes ago, Holmes108 said:

I'm always down for a good thought experiment, but I don't personally think that "programming emotions" will ever even be a thing anyway. I mean, what does that even mean? We can make it act like it's emotional. But true emotions? Particularly ones that could actually be useful in theoretically guiding its 'ethics' and decisions? That's full-on Sci-Fi (emphasis on the Fi) to me.

Emotions are merely chemicals. Are chemicals emotions? If so, what's the difference between our brain's neurons being affected by chemicals and a computer's transistors emulating emotion? This entire post is grappling with the definition of humanity, the definition of what can be considered alive or not, truly intelligent or not, and many other questions that thinkers around the world have been trying to answer for years.


44 minutes ago, My_Computer_Is_Trash said:

If the one path can be made perfect, there are no cons. I am merely having a debate with you and trying to take all of your cons into consideration, after which I explain why the cons you mention are not valid. Or, if you have a concern that is valid, I will consider it so that the method of giving AI emotions doesn't end up destroying the world. You're right: it's pretty dangerous. But that's exactly why I'm considering all of the possibilities, or trying to give you an understanding of why a concern of yours isn't going to happen.

The only understanding I am getting is that you are completely ignoring the concerns under the false premise of "we will figure it out." Not trying to be mean, but when I say emotions could cause the AI to lash out like a lot of emotional humans do, you simply say we will figure out how to make sure that doesn't happen, or that empathy will make sure nothing bad happens, because famously nobody with empathy has ever done anything bad to anyone. I'm sorry, but you give zero explanations that are actual explanations.


16 hours ago, Brooksie359 said:

The only understanding I am getting is that you are completely ignoring the concerns under the false premise of "we will figure it out." Not trying to be mean, but when I say emotions could cause the AI to lash out like a lot of emotional humans do, you simply say we will figure out how to make sure that doesn't happen, or that empathy will make sure nothing bad happens, because famously nobody with empathy has ever done anything bad to anyone. I'm sorry, but you give zero explanations that are actual explanations.

I'm just saying, testing is where you work out problems. A product that could wipe out the world will go through testing, and be free of problems, before it releases.


9 hours ago, My_Computer_Is_Trash said:

I'm just saying, testing is where you work out problems. A product that could wipe out the world will go through testing, and be free of problems, before it releases.

Yes, because testing has historically always caught all defects, especially when it comes to programming. Never has a program been released with bugs, because we test them before releasing them, so how could they possibly have bugs we weren't aware of? Again with the "we will figure it out and it will be perfect" narrative, with zero evidence to support the claim.


14 hours ago, Brooksie359 said:

Yes, because testing has historically always caught all defects, especially when it comes to programming. Never has a program been released with bugs, because we test them before releasing them, so how could they possibly have bugs we weren't aware of? Again with the "we will figure it out and it will be perfect" narrative, with zero evidence to support the claim.

How would you like me to support this "claim"? It is merely a theory. There is no claim.


9 minutes ago, My_Computer_Is_Trash said:

How would you like me to support this "claim"? It is merely a theory. There is no claim.

You can't support your claim because it's an absurd assumption. That's why I said you are being unrealistic. Have you ever thought that maybe your theory is simply wrong?


1 hour ago, Brooksie359 said:

You can't support your claim because it's an absurd assumption. That's why I said you are being unrealistic. Have you ever thought that maybe your theory is simply wrong?

Why would that be?


16 minutes ago, My_Computer_Is_Trash said:

Why would that be?

Have you looked at history, or even at today, and seen a program that was tested enough to ensure there were no bugs? I haven't. So when you say they would figure out a way to make sure there were no bugs or issues, without any evidence as to how they would do that, I say you are wrong. You have zero evidence that this is possible, and I have history and today's programs to support the idea that no matter how hard you try, programs are inevitably going to have bugs that you won't catch in testing, especially the more complicated the program. An AI with emotions would undoubtedly be very complicated, and to say that we would make it so it has no bugs, with no real idea of how we would accomplish this, is just spouting nonsense.


20 minutes ago, Brooksie359 said:

Have you looked at history, or even at today, and seen a program that was tested enough to ensure there were no bugs? I haven't. So when you say they would figure out a way to make sure there were no bugs or issues, without any evidence as to how they would do that, I say you are wrong. You have zero evidence that this is possible, and I have history and today's programs to support the idea that no matter how hard you try, programs are inevitably going to have bugs that you won't catch in testing, especially the more complicated the program. An AI with emotions would undoubtedly be very complicated, and to say that we would make it so it has no bugs, with no real idea of how we would accomplish this, is just spouting nonsense.

And you have zero evidence that it's impossible.


24 minutes ago, My_Computer_Is_Trash said:

And you have zero evidence that it's impossible.

Yes, zero evidence, other than current programs having bugs despite rigorous testing, and the fact that historically we have never had a perfect program with no bugs, especially among complicated programs.


8 minutes ago, Brooksie359 said:

Yes, zero evidence, other than current programs having bugs despite rigorous testing, and the fact that historically we have never had a perfect program with no bugs, especially among complicated programs.

I'm just saying that giving AI emotions could go either way. But even so, an AI can't destroy the world if it literally doesn't have the physical power to, and it only has the brainpower of a human (which is covered in the reply inside my initial post). I trust these precautions, avoiding bugs in the first place plus safeguards to stop the ones that slip through, more than I trust an AI without emotions not to accidentally take something literally and end up going haywire.


8 minutes ago, Brooksie359 said:

Yes, zero evidence, other than current programs having bugs despite rigorous testing, and the fact that historically we have never had a perfect program with no bugs, especially among complicated programs.

I misunderstood, and I do not mean to cause any conflict. I understand where you are coming from, and I share the same concern. But that's why I came up with this idea in the first place. (I am not planning to do anything with it, by the way.)

 

I think both of us should write a pros and cons list, as well as possible situations where bugs occur, to make sure that the fail-safes I've put in place account for those situations.


Just to clarify for those who didn't read the article: this is simulating 15 neurons in the hippocampus, not the whole brain. We're likely years, if not decades (or longer), away from even attempting to simulate an entire brain.

 

Just for reference, the real human brain has roughly 100 billion neurons. The hippocampus alone may have upwards of 10 billion neurons.
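For a concrete sense of what "simulating individual neurons" involves, here's a rough sketch of a leaky integrate-and-fire neuron, one of the simplest spiking models. The constants and units below are illustrative guesses, not the project's actual model:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Assumption: illustrative constants (ms, mV, nA, MOhm), not taken
# from the Human Brain Project's model.
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, r_m=10.0):
    """Step a membrane voltage through time; spike when it crosses threshold."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Voltage leaks back toward rest and is driven by the input current.
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_threshold:              # threshold crossed: fire
            spike_times.append(step * dt)
            v = v_reset                   # then reset the membrane
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant 2 nA input for 100 ms yields a regular spike train.
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 100 ms")
```

Even this toy model keeps per-neuron state that has to be stepped through time, which hints at why scaling from 15 neurons to billions is such a huge jump.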

 

As for the discussion regarding adding emotions to an AI, I think that's an incredibly bad idea in general unless those emotions are heavily regulated and controlled. But how would we do that? How do you make an AI "less angry"? I certainly don't think an AI with emotions would be easier to predict, because one of the most predictable things about humans is that they can be unpredictable.


1 hour ago, dalekphalm said:

As for the discussion regarding adding emotions to an AI, I think that's an incredibly bad idea in general unless those emotions are heavily regulated and controlled. But how would we do that? How do you make an AI "less angry"? I certainly don't think an AI with emotions would be easier to predict, because one of the most predictable things about humans is that they can be unpredictable.

Some emotions could be useful around "morals", or judging what is "good" versus "bad": for crimes or actions committed, feeling how much weight to put on something. But it might just be easier to let humans do that job than some system, especially in case anything goes wrong. That way the AI can stay focused on the details and "facts" rather than on some sort of emotion.


14 hours ago, Quackers101 said:

Some emotions could be useful around "morals", or judging what is "good" versus "bad": for crimes or actions committed, feeling how much weight to put on something. But it might just be easier to let humans do that job than some system, especially in case anything goes wrong. That way the AI can stay focused on the details and "facts" rather than on some sort of emotion.

The other question is: how do you even regulate the emotions of an AI? Can we even give a single emotion to an AI? I have a suspicion that if we are able to move past emulating emotion to actually creating real emotion in an AI, it will be an emergent-type situation where all possible emotions are created.

 

We may not even have the ability to curtail specific ones. It might be an "all or nothing" kind of situation.

 

I also feel like we can probably teach an AI morals without it having real emotions. We can teach it why certain things are okay and not okay, and why, logically, it's not good to do bad things, etc.


On 5/2/2023 at 5:45 PM, dalekphalm said:

The other question is: how do you even regulate the emotions of an AI? Can we even give a single emotion to an AI? I have a suspicion that if we are able to move past emulating emotion to actually creating real emotion in an AI, it will be an emergent-type situation where all possible emotions are created.

"psycho pass", multiple brains connected to a network collected and managed by AI. 100% controlled by the gov, 100% nothing goes wrong.

On 5/2/2023 at 5:45 PM, dalekphalm said:

I also feel like we can probably teach an AI morals without it having real emotions. We can teach it why certain things are okay and not okay, and why, logically, it's not good to do bad things, etc.

I think we can "train it" just from a case to case basis. just a bit like what lawyers do? ofc it cant do everything and sometimes there need some people to adjust it. but maybe it is able to put some value to things we cant.


On 5/2/2023 at 11:45 AM, dalekphalm said:

The other question is: how do you even regulate the emotions of an AI? Can we even give a single emotion to an AI? I have a suspicion that if we are able to move past emulating emotion to actually creating real emotion in an AI, it will be an emergent-type situation where all possible emotions are created.

 

We may not even have the ability to curtail specific ones. It might be an "all or nothing" kind of situation.

 

I also feel like we can probably teach an AI morals without it having real emotions. We can teach it why certain things are okay and not okay, and why, logically, it's not good to do bad things, etc.

 


On 5/4/2023 at 11:14 AM, My_Computer_Is_Trash said:

 

Why are you linking your other post?


On 5/5/2023 at 5:19 PM, dalekphalm said:

Why are you linking your other post?

Because my response to that post gives my opinion on how we should arrange the hardware in an AI. I think it may answer a couple of questions you have.


On 4/21/2023 at 7:28 PM, My_Computer_Is_Trash said:

In order to properly create safe and real AI, you need to make it think like a human.

Can't be done with CNNs.

On 4/21/2023 at 7:28 PM, My_Computer_Is_Trash said:

However, if you can simulate the hippocampus, you can simulate other parts of the brain, right?

Depends on what you mean by "simulate", because just being designed to kind of look like a brain isn't enough. As far as I know, we don't actually understand how the human brain works; computer neural networks are just an abstraction that kind of looks like a neuron if you squint. This project seems to take a more dynamic approach that could help in learning things like paths across a room, but I'd still say it's quite far from actually resembling human thought.
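To show how thin that abstraction is, here's roughly what a single "neuron" in a conventional artificial network boils down to. This is a hypothetical minimal example, not any particular framework's code:

```python
# One artificial "neuron": a weighted sum pushed through a nonlinearity.
# No membrane dynamics, no spikes, no chemistry; just arithmetic.
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Dot product plus bias, squashed by a sigmoid into (0, 1)."""
    pre_activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-pre_activation))

# Three inputs in, one number out; stateless and timeless.
print(artificial_neuron(np.array([0.5, -1.2, 3.0]),
                        np.array([0.8, 0.1, -0.4]),
                        bias=0.2))
```

Compare that with a spiking model that carries voltage state through time, like the sketch earlier in the thread, and it's easy to see why "looks like a neuron if you squint" is about right.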


Yes, as others have pointed out, simulating a full human brain with billions of neurons is a long way off. By the time we are doing that, the purpose of this would be, IMHO, something like simulating specific humans' brains, and hence consciousness. A simulation of yourself that could interact with people and the world after you are gone... you, but also not you. Another option for this kind of technology would be as a way to keep the wetware of the human brain viable and alive for longer. If we can simulate a human brain, we can understand its functions and keep those functions going longer.
 


21 hours ago, Uttamattamakin said:

A simulation of yourself that could interact with people and the world after you are gone... you, but also not you.

There's literally a Black Mirror episode about this 😛

