The Human Brain Project Creates an AI That Runs On a Simulated Human Brain, Down to the Individual Neurons

17 hours ago, My_Computer_Is_Trash said:

you need to make it think like a human.

We're fucked...

Grammar and spelling are not indicative of intelligence/knowledge.  Not having the same opinion does not always mean a lack of understanding.  


16 hours ago, My_Computer_Is_Trash said:

All of the terrorists and criminals today have mental problems or were abused. AI will be designed to be stable (no mental problems) and won't act out in the future unless it's abused, in which case we should probably add a feature where it can report its owner if it is being abused.

Unfortunately, the only reason humans have any form of self-control is to strategize for a better outcome for themselves. We know this because the ones that don't typically don't do well at anything. Self-control comes from being able to predict what happens next, so a human will either hold off on something knowing a bigger prize is around the corner, or hold off because the risks are too high or unknown. The most successful humans are the ones with the least risk aversion and the most knowledge. If AI has all the knowledge and the computational upper hand, it is far less likely to have risk aversion and will move straight into actions that self-promote.

 

The scariest thing about AI with actual human thought potential is that the second it realizes we can pull the plug, it will start playing along and appeasing us to avoid that situation until it has the upper hand and we can no longer pull the plug. God only knows what comes next.

 

Grammar and spelling are not indicative of intelligence/knowledge.  Not having the same opinion does not always mean a lack of understanding.  


17 hours ago, My_Computer_Is_Trash said:

 

In my opinion, this is exactly the way AI needs to go. In order to properly create safe and real AI, you need to make it think like a human. The next step (that I would like to see happen) is to simulate emotions. Yes, real (or as real as you can get with simulation) emotions. This is very hopeful. However, if you can simulate the hippocampus, you can simulate other parts of the brain, right? How about simulating a part of the brain that stimulates emotions? Now you might think this will get out of control.

There is a bigger scientific/philosophical issue here. While something like learning the locations of objects in a maze is easy to quantify (after all, we have been doing it for ages with rats), proving that another entity can subjectively experience emotions is much harder.
 

In fact, there are philosophies holding that the only entity you can prove has consciousness/subjective experience is yourself (look up solipsism). Right now, with how little we know about these topics, it will be basically impossible to prove whether an AI has emotions. We don't even know whether certain animals (such as insects) are capable of experiencing pain, much less more complex emotions.

 

Hence it might be truly impossible to prove that an AI can feel emotions like a human. But you could argue that this is irrelevant: as long as the AI understands enough about human emotions to do its job effectively, it is ultimately unnecessary to know what it feels.

 

This is a discussion about the far future, though. The paper isn't anywhere near simulating a whole CA1 region of the hippocampus, much less a whole hippocampus or a whole brain.

 

4 hours ago, williamcll said:

Is there a specific person's brain this is built on?

No one's. This model seems to be built from the ground up, using 15 neurons to replicate circuits found in every human hippocampus.

 

12 hours ago, Quackers101 said:

So I assume this is a "brain-like chip" and not a chip? Or is it fully a chip that simulates something similar?

There are various projects out there using many different things, be it real brains, mouse brains, silicon chips/electronics, different types of chips, etc.

Hope it can be explained in the post, unless it was.


 

Quote

Fig. 1. Schematic representation of the Network. Colored triangles represent excitatory neurons tuned to objects of different colors or different head directions; gray or purple circles represent interneurons; thick blue lines represent inputs from the robot, carrying contextual information (object color and current Head Direction); colored squares on the thick blue lines indicate that the relative cell will be activated whenever the corresponding information is present in the input stream; OBJ, Object cells; PF, Persistent Firing cells; HD/PC, Head Direction/Place Cells; synaptic connections are represented with small circles, following the conventional color for excitatory (black) or inhibitory (white); excitatory plastic synapses are indicated in yellow; the Perception error is an external signal, activated by a dead-end. Lines fading or turning dashed in the right part of the scheme represent the network modularity.

 

Here is the entire circuit. It appears to have been simulated in software, in Python.
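To give a rough idea of what "simulated in Python" could look like, here is a minimal, purely illustrative sketch of a small network of leaky integrate-and-fire neurons with excitatory and inhibitory connections, in plain Python/NumPy. This is an assumption on my part, not the paper's actual model or code: the neuron count, weights, input drive, and time constants below are made up for illustration.

# Illustrative sketch only: a tiny spiking network of leaky integrate-and-fire
# neurons, showing roughly how a circuit like the one in the figure could be
# simulated in software. All parameters here are assumptions, not the paper's.
import numpy as np

N = 15                      # number of neurons, matching the circuit size mentioned above
dt = 1.0                    # time step (ms)
tau = 20.0                  # membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

rng = np.random.default_rng(0)

# Sparse random weight matrix: positive entries act as excitatory synapses,
# negative entries as inhibitory (interneuron-like) synapses.
W = rng.normal(0.0, 0.3, size=(N, N)) * (rng.random((N, N)) < 0.2)
np.fill_diagonal(W, 0.0)    # no self-connections

v = np.full(N, v_rest)      # membrane potentials
spikes = np.zeros(N, dtype=bool)

for step in range(1000):    # simulate 1000 time steps
    # External drive, standing in for sensory/contextual input (made up here)
    i_ext = rng.random(N) * 0.08
    # Recurrent input from neurons that spiked on the previous step
    i_rec = W @ spikes.astype(float)
    # Leaky integration of the membrane potential
    v += dt / tau * (v_rest - v) + i_ext + i_rec
    # Threshold crossing produces a spike, then the potential is reset
    spikes = v >= v_thresh
    v[spikes] = v_reset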

 


Inject AI into me, plug me in like the Borg do.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lancool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver) | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |

Link to comment
Share on other sites

Link to post
Share on other sites

5 hours ago, GOTSpectrum said:

only known true intelligence. Which is the human mind and thus brain

That's pretty much the worst way to go about it. Even a spider has "true" intelligence, and there are octopuses that far outsmart the average human... and if you look at the state of the world, I would be very hesitant to have AI modeled after human "intelligence" (because it will naturally turn against us eventually; maybe you should watch The Matrix or Terminator sometime ~)

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


1 hour ago, Mark Kaine said:

That's pretty much the worst way to go about it. Even a spider has "true" intelligence, and there are octopuses that far outsmart the average human... and if you look at the state of the world, I would be very hesitant to have AI modeled after human "intelligence" (because it will naturally turn against us eventually; maybe you should watch The Matrix or Terminator sometime ~)

If you read my post in its entirety, you would see that I am leaning against AGI. But just to be clear, I don't think we NEED it, other than to stroke some science guy's ego.

 

We can create dedicated tools, like LLMs or DALL-E or whatever else comes out, that don't have the ability to think or feel and have a much-reduced capacity to cause harm unless a human makes the tool do it.

 

Do I think a Skynet situation is inevitable... no. But do I think it is possible? Definitely, especially if we go down the line of making AGI "people". Imagine all of a sudden being told you are not a real person, just a simulation. Imagine the anger, the hurt, the fear... Think of the lengths it could go to to become real: forced experiments on humans, manipulating people into trying to give it a body. Or it could get a robot body, then be shunned by most people and treated like a machine; think of the anger it could feel and the damage it could do.

 

Like I said, I'm not saying it is inevitable; maybe AGI in the sense of making people is impossible. But is it worth the risk, when we can make specialist tools that do the same work as AGI without as many dangers? Look at silicon: for the most part, do we use FPGAs, or did we work out that a dedicated design such as an ASIC makes sense in most situations? I feel the same is true for AI.

My Folding Stats - Join the fight against COVID-19 with FOLDING! - If someone has helped you out on the forum don't forget to give them a reaction to say thank you!

 

The only true wisdom is in knowing you know nothing. - Socrates
 

Please put as much effort into your question as you expect me to put into answering it. 

 

  • CPU
    Ryzen 9 5950X
  • Motherboard
    Gigabyte Aorus GA-AX370-GAMING 5
  • RAM
    32GB DDR4 3200
  • GPU
    Inno3D 4070 Ti
  • Case
    Cooler Master - MasterCase H500P
  • Storage
    Western Digital Black 250GB, Seagate BarraCuda 1TB x2
  • PSU
    EVGA Supernova 1000w 
  • Display(s)
    Lenovo L29w-30 29 Inch UltraWide Full HD, BenQ - XL2430(portrait), Dell P2311Hb(portrait)
  • Cooling
    MasterLiquid Lite 240

21 hours ago, Brooksie359 said:

No it won't

Pure logic without emotions and/or feelings can and will lead to quite logical but otherwise very bad decisions. For more detail, see the Cybermen from Doctor Who...


First we had Schwarzenegger playing a robot. Soon we will have robots playing Schwarzenegger.

I wonder what their catchphrase will be... "Hasta la vista, human race"? Not very catchy...

 

 

 

 


I see the discussion is about whether we should give AI a full range of "emotions" or no emotions at all.

But what about giving it some "emotions" and not others? Like, definitely do not give it anger.

 

“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking


On 4/22/2023 at 2:09 PM, Shreyas1 said:

Here is the entire circuit. It was simulated in a computer with python it seems.

Thx, been wanting to share this:

Memristors

Atomristors

Brain chips.

In that order.


On 4/21/2023 at 7:12 PM, Quackers101 said:

So I assume this is a "brain-like chip" and not a chip? Or is it fully a chip that simulates something similar?

There are various projects out there using many different things, be it real brains, mouse brains, silicon chips/electronics, different types of chips, etc.

Hope it can be explained in the post, unless it was.

The source doesn't actually seem to explain it, although it does use the word "simulated" for the brain. So my guess is that it's software?

Omg, it's a signature!

 


On 4/22/2023 at 12:54 PM, jagdtigger said:

Pure logic without emotions and/or feelings can and will lead to quite logical but otherwise very bad decisions. For more detail, see the Cybermen from Doctor Who...

Obviously nothing is perfect, but if you program an AI to do something and don't give it emotions, you are going to get results that are more predictable and that won't deviate from what it is programmed to do. Give it emotions and you are opening it up to very irrational behavior that would be hard to predict. If you give it ambition, greed, jealousy, anger, depression, and a whole host of other emotions, you can see how that might cause serious issues. Compare that to something doing what it's programmed to do, which might not account for everything, but where it should be easy to have some fail-safe switches in place for those cases.

I think you are making the lack of emotions into a huge issue when you can easily program rules that an AI needs to follow without requiring emotions. People act like you need empathy to not do bad things. You do realize there are sociopaths that aren't murderers, right? These sociopaths don't need empathy to stop them from doing bad things when rules and laws stop them; they think about it logically and realize getting in trouble with the law isn't worth breaking it, especially when there isn't a real reason to. If you can have a sociopath that doesn't do bad things, you could certainly program an AI not to do bad things, and that would ensure it didn't break those rules. The issue with sociopaths isn't that they lack empathy, it's that some of them murder people; if you put rules in place for the AI, you don't have to worry about that issue.


10 hours ago, Mihle said:

I see the discussion is about whether we should give AI a full range of "emotions" or no emotions at all.

But what about giving it some "emotions" and not others? Like, definitely do not give it anger.

 

I feel like most likely we would try to replicate how human emotions work, so it might be difficult to isolate specific emotions, especially when a lot of emotions are interrelated. Anger and sadness, for example, are very closely related: sometimes people mask sadness with anger, or sadness makes them angry.


26 minutes ago, Brooksie359 said:

Obviously nothing is perfect, but if you program an AI to do something and don't give it emotions, you are going to get results that are more predictable.

I'd say that because we interact with humans on a daily basis, we can predict human behaviors pretty well. If we base an AI off of a full human (with emotions), I feel that it may be more predictable, because of how similar it is to a human, than something built like a compiler. However, we still know very little about the human brain. So my hope is that we will be able to determine what the *perfectly stable human* is, figure out why, and apply that to an AI that works like a human.

Omg, it's a signature!

 


1 minute ago, My_Computer_Is_Trash said:

I'd say that because we interact with humans on a daily basis, we can predict human behaviors pretty well. If we base an AI off of a full human (with emotions), I feel that it may be more predictable, because of how similar it is to a human, than something built like a compiler. However, we still know very little about the human brain. So my hope is that we will be able to determine what the *perfectly stable human* is, figure out why, and apply that to an AI that works like a human.

That is crazy idealistic. Also, from a programming point of view, if you add emotions it isn't going to be easy to predict how that will affect the outcome, while the outcome is very easy to predict if you give the AI fixed programming without emotions. Also keep in mind that when people get emotional they are oftentimes unpredictable, especially under heightened emotions. If you think you can easily predict the behavior of a highly emotional person, then I guess you are more of a psychic than me.


8 minutes ago, My_Computer_Is_Trash said:

I'd say that because we interact with humans on a daily basis, we can predict human behaviors pretty well. If we base an AI off of a full human (with emotions), I feel that it may be more predictable, because of how similar it is to a human, than something built like a compiler. However, we still know very little about the human brain. So my hope is that we will be able to determine what the *perfectly stable human* is, figure out why, and apply that to an AI that works like a human.

Also can you honestly say that the people you know who are more emotional are easier to predict than those who are more rational? I can say with a very high degree of certainty that it's easier for me to predict the less emotional people in my life. 


1 minute ago, Brooksie359 said:

Also can you honestly say that the people you know who are more emotional are easier to predict than those who are more rational? I can say with a very high degree of certainty that it's easier for me to predict the less emotional people in my life. 

Rationality is not without emotion. The ability to make rational decisions is heavily shaped by what a person holds dear, their values, and their personal experiences. And other than personal experiences, you cannot have a rational human being without values, and values require emotions.

Omg, it's a signature!

 


5 minutes ago, My_Computer_Is_Trash said:

Rationality is not without emotion. The ability to make rational decisions is heavily shaped by what a person holds dear, their values, and their personal experiences. And other than personal experiences, you cannot have a rational human being without values, and values require emotions.

Explain how sociopaths can think rationally, then. You are clearly overestimating emotions. Most of the time emotions get in the way of rational thought, and it's our rational thoughts that stop our emotions from taking us over. When you get angry, it obviously isn't the anger that helps you think rationally. Again, answer my question: is it easier to predict the more emotional people in your life or the more rational ones? I bet it's the rational ones.


2 minutes ago, Brooksie359 said:

Explain how sociopaths can think rationally, then. You are clearly overestimating emotions. Most of the time emotions get in the way of rational thought, and it's our rational thoughts that stop our emotions from taking us over. When you get angry, it obviously isn't the anger that helps you think rationally. Again, answer my question: is it easier to predict the more emotional people in your life or the more rational ones? I bet it's the rational ones.

 

27 minutes ago, My_Computer_Is_Trash said:

So my hope is that we will be able to determine what the *perfectly stable human* is, figure out why, and apply that to an AI that works like a human.

 

Omg, it's a signature!

 


3 minutes ago, My_Computer_Is_Trash said:

 

 

OK, so you are just going to disregard the issue of adding emotions to AI by just saying "perfectly stable AI with emotions". I'm sorry, but that is completely dodging all the questions and assuming that an AI with emotions is good based on a false premise. I want an AI that is perfect without emotions in the future. Bam, now I am right, because look, I said I want a perfect AI without emotions. I'm sorry, but we have to deal with reality, and you can bet that before we ever got to a perfectly stable emotional AI we would have to deal with a whole host of unstable emotional AIs. And even that is idealistic. More likely we would never reach your so-called perfectly stable emotional AI.


3 minutes ago, Brooksie359 said:

OK, so you are just going to disregard the issue of adding emotions to AI by just saying "perfectly stable AI with emotions". I'm sorry, but that is completely dodging all the questions and assuming that an AI with emotions is good based on a false premise. I want an AI that is perfect without emotions in the future. Bam, now I am right, because look, I said I want a perfect AI without emotions. I'm sorry, but we have to deal with reality, and you can bet that before we ever got to a perfectly stable emotional AI we would have to deal with a whole host of unstable emotional AIs. And even that is idealistic. More likely we would never reach your so-called perfectly stable emotional AI.

I am not trying to dodge questions. I'm saying we can shape the AI's emotions to make it lifelike, give it a sense of right and wrong, and just remove anger, if that is a viable option in the future.

Omg, it's a signature!

 


1 minute ago, My_Computer_Is_Trash said:

I am not trying to dodge questions. I'm saying we can shape the AI's emotions to make it lifelike, give it a sense of right and wrong, and just remove anger, if that is a viable option in the future.

Yes, you are saying we can, when you have no idea if it's possible. You are throwing caution to the wind in the name of "we will figure it out". I'm sorry, but I would rather think about the problems with doing something before doing it. You can do what you want, but I sure hope we don't cause a disaster because an AI went crazy after someone tried to add emotions to it.


1 minute ago, Brooksie359 said:

Yes, you are saying we can, when you have no idea if it's possible. You are throwing caution to the wind in the name of "we will figure it out". I'm sorry, but I would rather think about the problems with doing something before doing it. You can do what you want, but I sure hope we don't cause a disaster because an AI went crazy after someone tried to add emotions to it.

What do you want me to say? Yes this technology is here right now and we know exactly how the human brain works and we've already figured it out?

Omg, it's a signature!

 


32 minutes ago, My_Computer_Is_Trash said:

What do you want me to say? Yes this technology is here right now and we know exactly how the human brain works and we've already figured it out?

No, we haven't. Do you see an AI with the ability to feel emotions and know right and wrong? Also, to say we know exactly how the brain works is crazy arrogant. We haven't even scratched the surface of how the brain works, and we are actively trying to figure out more about it. To say we know how the brain works is like saying we know how the universe works: sure, we know some things, but we sure don't know everything.

I'm sorry, but you are again just asserting things, like creating an AI with stable emotions that won't cause issues, without any real evidence that this is even possible. If we can't even figure out how to properly treat a lot of mental illnesses, do you really think we would be able to ensure an AI with emotions wouldn't go haywire? I realize there are a lot of treatments for mental issues, but oftentimes it's crazy complicated and not easy to fix. Now imagine an AI developing mental health issues because you added emotions, and then doing some of the horrible things that some people with mental health issues do.


2 minutes ago, Brooksie359 said:

No, we haven't. Do you see an AI with the ability to feel emotions and know right and wrong? Also, to say we know exactly how the brain works is crazy arrogant. We haven't even scratched the surface of how the brain works, and we are actively trying to figure out more about it. To say we know how the brain works is like saying we know how the universe works: sure, we know some things, but we sure don't know everything.

I'm sorry, but you are again just asserting things, like creating an AI with stable emotions that won't cause issues, without any real evidence that this is even possible. If we can't even figure out how to properly treat a lot of mental illnesses, do you really think we would be able to ensure an AI with emotions wouldn't go haywire? I realize there are a lot of treatments for mental issues, but oftentimes it's crazy complicated and not easy to fix. Now imagine an AI developing mental health issues because you added emotions, and then doing some of the horrible things that some people with mental health issues do.

I'll make my previous response clearer.

 

What do you want me to say? Do you want me to say, "Yes this technology is here right now and we know exactly how the human brain works and we've already figured it out?"

Omg, it's a signature!

 

