Real Life J.A.R.V.I.S.

cluelessgenius

Sorry for the clickbait title, but I'd like this to be a longer discussion with a lot of input from people.

 

How long until we get a real-life version of JARVIS? Do you think Alexa and Google Assistant are on their way towards that? I mean, logical reasoning and understanding context are really hard to do.

 

I personally can't figure out why we aren't there yet. It's like all the building blocks are there: we have more types of sensors in the smart home than realistically needed, and actuators too. Now we need the brain to control it all, and I don't mean the trained monkey we have now ("say this phrase = run this code"). I mean... well... watch Iron Man and you'll know what I mean.

 

I'm interested in other people's opinions on this.

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600MHz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applications): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


6 minutes ago, cluelessgenius said:

Sorry for the clickbait title, but I'd like this to be a longer discussion with a lot of input from people.

How long until we get a real-life version of JARVIS? Do you think Alexa and Google Assistant are on their way towards that? I mean, logical reasoning and understanding context are really hard to do.

I personally can't figure out why we aren't there yet. It's like all the building blocks are there: we have more types of sensors in the smart home than realistically needed, and actuators too. Now we need the brain to control it all, and I don't mean the trained monkey we have now ("say this phrase = run this code"). I mean... well... watch Iron Man and you'll know what I mean.

I'm interested in other people's opinions on this.

Siri, Google Assistant, and Alexa are still quite primitive in what they can do. Sure, Google is working on that AI thing that can book appointments for you, but even then it might not go well.

AI assistants still have a long way to go. I'd say JARVIS could exist in real life around 2020 at the earliest. If a company had the budget and time, it would be possible.

As you said, we have the building blocks in place; it's just that companies don't have the budget, time, or need to create such an advanced AI assistant.


Case: Corsair 760T  |  Psu: Evga  650w p2 | Cpu-Cooler : Noctua Nh-d15 | Cpu : 8600k  | Gpu: Gigabyte 1070 g1 | Ram: 2x8gb Gskill Trident-Z 3000mhz |  Mobo : Aorus GA-Z370 Gaming K3 | Storage : Ocz 120gb sata ssd , sandisk 480gb ssd , wd 1gb hdd | Keyboard : Corsair k95 rgb plat. | Mouse : Razer deathadder elite | Monitor: Dell s2417DG (1440p 165hz gsync) & a crappy hp 24" ips 1080p | Audio: Schiit stack + Akg k712pro + Blue yeti.


16 hours ago, Peskanova said:

Well, no. I was interested in the opinions of people on this forum, not in just reading about it.

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600Mhz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applikations): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


16 hours ago, 1kv said:

Siri, Google Assistant, and Alexa are still quite primitive in what they can do. Sure, Google is working on that AI thing that can book appointments for you, but even then it might not go well.

AI assistants still have a long way to go. I'd say JARVIS could exist in real life around 2020 at the earliest. If a company had the budget and time, it would be possible.

As you said, we have the building blocks in place; it's just that companies don't have the budget, time, or need to create such an advanced AI assistant.

2020, huh? Well, that's not that far off, but I fear that might be too early. I don't see anything coming anywhere close to JARVIS before 2030. I mean, IBM has their Watson, but they seem to be just sitting on it.

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600Mhz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applikations): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


16 hours ago, cluelessgenius said:

How long until we get a real-life version of JARVIS? Do you think Alexa and Google Assistant are on their way towards that? I mean, logical reasoning and understanding context are really hard to do.

It depends on what JARVIS can actually do. If it has any form of general intelligence, then it's a long, long way off. Not only are there technological problems to solve, but moral, ethical, and legal problems to surmount as well, and those take quite a bit longer than technology to advance.

A 2020 estimate is far too soon for an AI system that can do more than one type of task.

ENCRYPTION IS NOT A CRIME


3 minutes ago, straight_stewie said:

It depends on what JARVIS can actually do. If it has any form of general intelligence, then it's a long, long way off. Not only are there technological problems to solve, but moral, ethical, and legal problems to surmount as well, and those take quite a bit longer than technology to advance.

A 2020 estimate is far too soon for an AI system that can do more than one type of task.

Well, ideally it would figure out the moral and ethical problems on its own. I don't remember who told me this, but the theory was that it would learn the way a newborn child would, only much faster, obviously. In a way, we wouldn't need to code the complexity of a brain, but rather build the founding blocks and let it develop itself from there.

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600Mhz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applikations): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


3 minutes ago, cluelessgenius said:

Well, ideally it would figure out the moral and ethical problems on its own. I don't remember who told me this, but the theory was that it would learn the way a newborn child would, only much faster, obviously. In a way, we wouldn't need to code the complexity of a brain, but rather build the founding blocks and let it develop itself from there.

It's not about an artificial general intelligence learning those things; there are problems in those areas that will need to be overcome before an artificial general intelligence can be released to the public.

For example, if such a thing were to commit a crime, is the creator, the owner, or the machine at fault, and why?



25 minutes ago, straight_stewie said:

It's not about an artificial general intelligence learning those things; there are problems in those areas that will need to be overcome before an artificial general intelligence can be released to the public.

For example, if such a thing were to commit a crime, is the creator, the owner, or the machine at fault, and why?

Well, what do you do if a child commits a crime?

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600Mhz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applikations): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


3 minutes ago, cluelessgenius said:

Well, what do you do if a child commits a crime?

It depends on the situation, but normally the child is punished and not the parents, if anyone is punished at all. (Edit: at least in the US.)

But an artificially intelligent machine is not a human child, is it?



6 minutes ago, straight_stewie said:

It depends on the situation, but normally the child is punished and not the parents, if anyone is punished at all. (Edit: at least in the US.)

But an artificially intelligent machine is not a human child, is it?

Well, if it's learning about the world the way a child would, then effectively it would be, wouldn't it? Let's say the AI opened your front door to the mailman because it saw people coming up the driveway, like a child might. That's not bad intent; it just doesn't know any better yet.

Then people say: what if it makes a mistake and, as a result, people die? Like, I don't know, not opening the door while the house is on fire. My simple thought was: don't put it in charge of that, then. Humans don't get to be a danger on the road until they're 17-18, to make sure they're at least somewhat mature enough to handle the responsibility. Why would we handle machines differently? Don't give them life-or-death power until they've proven trustworthy enough to handle it. Let it make your coffee before you let it control a nuclear reactor or drive your car for you.

As far as punishment goes, that's difficult. For kids to learn, parents will often employ either positive or negative reinforcement, meaning either you get a cookie or an ass-whooping. How do you do either to an AI? Hmm, tough indeed.

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600Mhz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applikations): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


42 minutes ago, cluelessgenius said:

As far as punishment goes, that's difficult. For kids to learn, parents will often employ either positive or negative reinforcement, meaning either you get a cookie or an ass-whooping.

That's not quite how that whole reinforcement vs. punishment thing works:

  • Positive Reinforcement = Introducing a positive stimulus as a reward: Giving a child a cookie after doing the dishes.
  • Negative Reinforcement = Removing a negative stimulus as a reward: lifting a normally-on TV channel lock as a reward for mowing the lawn.
  • Positive Punishment = Introducing a negative stimulus as a deterrent: Spanking a child after they purposefully break the television.
  • Negative Punishment = Removing a positive stimulus as a deterrent: Grounding a child.

Four general rules of thumb to help determine what things are:

  • Positive = introduction of stimulus
  • Negative = removal of stimulus
  • Reinforcement = reward. Used to increase the chances of the behavior recurring.
  • Punishment = penalty. Used to decrease the chances of the behavior recurring. (You may find the effectiveness of this widely debated, even among professionals.)
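Since it's a clean two-axis scheme, the four terms can be written out as a lookup table. This is just a toy illustration of the definitions above; the keys and labels are my own wording, nothing more:

```python
# Axis 1: what happens to the stimulus. Axis 2: what the goal is.
QUADRANT = {
    ("introduce stimulus", "increase behavior"): "positive reinforcement",
    ("remove stimulus",    "increase behavior"): "negative reinforcement",
    ("introduce stimulus", "decrease behavior"): "positive punishment",
    ("remove stimulus",    "decrease behavior"): "negative punishment",
}

# Spanking introduces a negative stimulus to decrease a behavior:
print(QUADRANT[("introduce stimulus", "decrease behavior")])  # positive punishment
```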

It is essential to understand things like this. Basic knowledge of introductory psychology is an absolute requirement to have intelligent debate about the future of an AI with general intelligence.
 

42 minutes ago, cluelessgenius said:

How do you do either to an AI?

Actually, for neural networks, introducing negative or positive stimuli is one of the foundational concepts required to even get the thing to work. It's part of training them.

For example, let's assume we have a simple neural network meant to classify images of handwritten digits, 0-9. To train it to work right, you might give it a score of 0 every time it guesses right, and a score of <correct answer - given answer> when it guesses incorrectly. Then we write an algorithm that attempts to set the values of the neural network such that it receives a score of 0 every time it makes a guess.
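To make that scoring idea concrete, here is a minimal sketch of my own (a toy, not the digit classifier itself): a one-weight "network" that learns y = 2x by repeatedly nudging its weight so the <target - guess> score is driven toward 0.

```python
def train(samples, epochs=200, lr=0.05):
    """Error-driven training: push the score (target - guess) toward 0."""
    w = 0.0  # the single tunable value of our toy "network"
    for _ in range(epochs):
        for x, target in samples:
            guess = w * x
            error = target - guess  # score: 0 when the guess is exactly right
            w += lr * error * x     # adjust the weight to shrink the score
    return w

# Samples of y = 2*x; the learned weight converges toward 2.0.
data = [(1, 2), (2, 4), (3, 6)]
print(round(train(data), 3))  # 2.0
```

Real networks do the same thing with millions of weights and gradient descent over a loss function, but the feedback loop — score the guess, nudge the values — is the same shape.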

As far as a general intelligence goes, it could be quite simple, or quite complicated. We don't know yet. But, on the user end, it might be as simple as selecting from a list which decision was wrong and pressing a button to tell it so.
 

42 minutes ago, cluelessgenius said:

Humans don't get to be a danger on the road until they're 17-18, to make sure they're at least somewhat mature enough to handle the responsibility. Why would we handle machines differently?

Except they do. Ostensibly, all you have to know to legally drive a car is how to see and which light means go. That's why there are so many car accidents. In fact, this happens all the time: humans are constantly being put in charge of things they have no business being responsible for. Some adapt and learn; some rise to mediocrity. Sure, we try to take steps to make sure they have a reasonable chance at success, but that's just it: the success of a human conducting new tasks is never guaranteed, even after intense and thorough training.

And we do handle machines differently, all the time. Remember a few years ago, when electronic Toyota gas pedals were malfunctioning and telling the ECM that they were at full throttle? There were entire ultra-thorough investigations by multiple third parties (including the government) and by Toyota themselves, and millions and millions of dollars in court payouts. Humans expect machines to function safely and perfectly all the time. It's even written into law that if an airliner-sized aircraft crashes, two independent government organizations, the manufacturer, and the airline are required to conduct independent investigations and release their findings and a solution to protect human safety in a timely manner. Winchester Firearms was even successfully sued, by firearm enthusiasts no less, for manufacturing firearms with faulty triggers that allowed the firearm to be fired too easily, even though no one was injured by the defect.

Think about yourself: if you get a bad set of brand-new RAM sticks, do you just go "oh well, guess that's my fault for ordering them", or do you contact the manufacturer and RMA them to either get new ones or get your money back?

So, what society and law will have to do before a general AI can be released to the public is decide: should any machine ever be classified (at least in terms of the law) as a human? If the answer is anything other than a complete yes, we will never get such technology. If the answer is yes, then we will, provided that we can actually solve the technological problems as well.



20 minutes ago, straight_stewie said:

That's not quite how that whole reinforcement vs. punishment thing works:

  • Positive Reinforcement = Introducing a positive stimulus as a reward: Giving a child a cookie after doing the dishes.
  • Negative Reinforcement = Removing a negative stimulus as a reward: lifting a normally-on TV channel lock as a reward for mowing the lawn.
  • Positive Punishment = Introducing a negative stimulus as a deterrent: Spanking a child after they purposefully break the television.
  • Negative Punishment = Removing a positive stimulus as a deterrent: Grounding a child.

Four general rules of thumb to help determine what things are:

  • Positive = introduction of stimulus
  • Negative = removal of stimulus
  • Reinforcement = reward. Used to increase the chances of the behavior recurring.
  • Punishment = penalty. Used to decrease the chances of the behavior recurring. (You may find the effectiveness of this widely debated, even among professionals.)

It is essential to understand things like this. Basic knowledge of introductory psychology is an absolute requirement to have intelligent debate about the future of an AI with general intelligence.

What a mighty high horse you have, good sir. I get what you're saying, and I agree about the definitions, but lecturing me about semantics hardly matters. Sure, I made a mistake in syntax. Let's call it reinforcement and punishment then, since I'd argue that positive and negative in this case are essentially the same thing.

Quote

Except they do. Ostensibly, all you have to know to legally drive a car is how to see and which light means go.

Excuse me? Where the hell do you live? Normal people have to go through theoretical and practical testing before they're allowed to drive, and even then there are a couple of years of probation during which punishments are way more intense than they are for experienced drivers.

Quote

That's why there are so many car accidents. In fact, this happens all the time: humans are constantly being put in charge of things they have no business being responsible for. Some adapt and learn; some rise to mediocrity. Sure, we try to take steps to make sure they have a reasonable chance at success, but that's just it: the success of a human conducting new tasks is never guaranteed, even after intense and thorough training.

Sure, but my point was that we don't take some random "Kyle" from a high school and put him in charge of national security. Of course people make mistakes, and there's always risk. All I'm saying is that everyone expects AI to be a fully fleshed-out, grown-up intelligence from the start, but you can't treat it like that. There will need to be tests and evaluations to determine which tasks it is allowed to perform. Just like with humans, by the way: going to college, taking exams, or even the simple emergency-alarm training at work are all tests to determine whether everyone still understands the rules.

Quote

And we do handle machines differently, all the time. Remember a few years ago, when electronic Toyota gas pedals were malfunctioning and telling the ECM that they were at full throttle? There were entire ultra-thorough investigations by multiple third parties (including the government) and by Toyota themselves, and millions and millions of dollars in court payouts. Humans expect machines to function safely and perfectly all the time. It's even written into law that if an airliner-sized aircraft crashes, two independent government organizations, the manufacturer, and the airline are required to conduct independent investigations and release their findings and a solution to protect human safety in a timely manner. Winchester Firearms was even successfully sued, by firearm enthusiasts no less, for manufacturing firearms with faulty triggers that allowed the firearm to be fired too easily, even though no one was injured by the defect.

Kind of what I'm saying: it needs testing.

Quote

Think about yourself: if you get a bad set of brand-new RAM sticks, do you just go "oh well, guess that's my fault for ordering them", or do you contact the manufacturer and RMA them to either get new ones or get your money back?

So, what society and law will have to do before a general AI can be released to the public is decide: should any machine ever be classified (at least in terms of the law) as a human? If the answer is anything other than a complete yes, we will never get such technology. If the answer is yes, then we will, provided that we can actually solve the technological problems as well.

Ehhh, I get what you're saying, but I disagree. Government and lawmakers have become one of the slowest-moving organs in our society, and scientists don't usually wait for that. I mean, the law hasn't even figured out the internet as a concept: putting up country restrictions, suing Twitch streamers for not having a TV broadcasting licence, still trying to maintain copyright laws, etc. If it happens, I guarantee someone will just create it, and then absurdly old-fashioned laws will be hastily put in place as soon as something goes wrong. Look at self-driving cars: the laws aren't ready for them, but they still exist, and lawmakers make it up as they go.

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600Mhz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applikations): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


4 minutes ago, cluelessgenius said:

What a mighty high horse you have, good sir. I get what you're saying, and I agree about the definitions, but lecturing me about semantics hardly matters. Sure, I made a mistake in syntax. Let's call it reinforcement and punishment then, since I'd argue that positive and negative in this case are essentially the same thing.

If expecting people who wish to have intelligent, useful debate to know what they are talking about and to be precise in their statements means I am sitting on a high horse, then highly I sit, good sir.

A mistake in syntax is words out of order or incorrect punctuation. Claiming that spanking a child is negative reinforcement is not only false; the two are complete opposites: spanking a child is positive punishment. As I already indicated, this distinction is one of the fundamental ideas when talking about a general intelligence of any type, and figuring out how to get an artificial intelligence to respond to such inputs is one of the major problems that will need to be overcome before such a thing can even exist. When we are trying to discuss what hurdles must be overcome to arrive at a technology, semantics are all that matters; the difference between "kinda hard" and "very hard" can add decades to this kind of research.

A worthwhile debate can never be had so long as disagreements over ideas can always be deflected by claiming they are semantic errors.
 

 

31 minutes ago, cluelessgenius said:

Kind of what I'm saying: it needs testing.

This kind of software, much like the human mind, is non-deterministic. That means its output can never be guaranteed, no matter how much testing you do.

 

32 minutes ago, cluelessgenius said:

Excuse me? Where the hell do you live? Normal people have to go through theoretical and practical testing before they're allowed to drive, and even then there are a couple of years of probation during which punishments are way more intense than they are for experienced drivers.

Yes. And what do those theoretical tests entail? Mostly things that middle schoolers know, like what shape stop signs are, what a speed-limit sign is, and what side of the road to drive on. The practical test is practically useless, in my opinion. I've yet to hear of a single state that requires potential drivers to be tested on accident evasion, situational awareness, or even simply avoiding a slide on a wet road. All the practical test actually assesses is your ability to negotiate a corner and to stop the car in the best-case scenario (and sometimes, additionally, to park in a space).

In other words, all that driving exams in the United States test is the absolute minimum skill set needed to make a vehicle move and follow only the most basic laws, such as the speed limit. Pandering to the lowest common denominator never produces the highest-quality result; it always produces the highest-volume result.

 

43 minutes ago, cluelessgenius said:

Government and lawmakers have become one of the slowest-moving organs in our society, and scientists don't usually wait for that.

You're right. They don't, for research and the release of low-risk products. For the release of high-risk products, like artificial humans with full freedom and rights, businesses care about their liability a lot. In fact, it's among the top three considerations for product releases.

But yes, governments and lawmakers are extremely slow-moving. We can both agree there; this is one of my biggest gripes with how things work. They are also always taking decades to make nonsensical laws about things they know nothing about. But that's a major reason why an estimate of 2020 for such a technology is far too optimistic.



14 minutes ago, straight_stewie said:

If expecting people who wish to have intelligent, useful debate to know what they are talking about and to be precise in their statements means I am sitting on a high horse, then highly I sit, good sir.

A mistake in syntax is words out of order or incorrect punctuation. Claiming that spanking a child is negative reinforcement is not only false; the two are complete opposites: spanking a child is positive punishment. As I already indicated, this distinction is one of the fundamental ideas when talking about a general intelligence of any type, and figuring out how to get an artificial intelligence to respond to such inputs is one of the major problems that will need to be overcome before such a thing can even exist. When we are trying to discuss what hurdles must be overcome to arrive at a technology, semantics are all that matters; the difference between "kinda hard" and "very hard" can add decades to this kind of research.

A worthwhile debate can never be had so long as disagreements over ideas can always be deflected by claiming they are semantic errors.

I apologize deeply. Seeing as English is not my first language, and I'm generally not that classically educated, I was hoping the general idea of what I'm saying would come across and that the missing precision of my exact words could be overlooked.

 

In short: b****, dude, you know what I meant.

And you did, didn't you? You knew exactly what I was trying to say.

26 minutes ago, straight_stewie said:

Yes. And what do those theoretical tests entail? Mostly things that middle schoolers know, like what shape stop signs are, what a speed-limit sign is, and what side of the road to drive on. The practical test is practically useless, in my opinion. I've yet to hear of a single state that requires potential drivers to be tested on accident evasion, situational awareness, or even simply avoiding a slide on a wet road. All the practical test actually assesses is your ability to negotiate a corner and to stop the car in the best-case scenario (and sometimes, additionally, to park in a space).

In other words, all that driving exams in the United States test is the absolute minimum skill set needed to make a vehicle move and follow only the most basic laws, such as the speed limit. Pandering to the lowest common denominator never produces the highest-quality result; it always produces the highest-volume result.

I... I don't even know where to start on this... so... agree to disagree? Or maybe... my condolences for the American educa... no, wait... infrastruc... no... health... man, what system does work over there?

 

29 minutes ago, straight_stewie said:

You're right. They don't, for research and the release of low-risk products. For the release of high-risk products, like artificial humans with full freedom and rights, businesses care about their liability a lot. In fact, it's among the top three considerations for product releases.

But yes, governments and lawmakers are extremely slow-moving. We can both agree there; this is one of my biggest gripes with how things work. They are also always taking decades to make nonsensical laws about things they know nothing about. But that's a major reason why an estimate of 2020 for such a technology is far too optimistic.

Well, at least we've got something we agree on. My guess is Elon will release an AI in his own Mars state before we get governments to come to a decision about what to do about it on Earth.

 

Also, I'm not necessarily talking full "Detroit: Become Human" style. IMO, they don't need bodies at all. I just mean a control software for private homes, infrastructure, and the like that makes logical decisions and understands relations and the context of things.

More specifically, I was looking around at what kinds of home-automation systems are on the market right now, and it started with me wondering why there are no camera-based systems. We've got actuators like motors, fans, and switches that control sunblinds, lights, doors, and air conditioning, but you still need to configure it all from a tablet, i.e. push a button or set up fixed routines. So I was wondering, first, why not track people in the house and use that data as well, alongside recognition identifying them? That would make the whole system way better. ...And then I drifted towards why not skip all that and get JARVIS. Well, that was my train of thought here.
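Purely as a hypothetical sketch of that idea (every name here is invented, not any real product's API): feed person-tracking events from the cameras into the same rule layer that already drives the actuators, instead of relying on buttons and fixed routines.

```python
from dataclasses import dataclass

@dataclass
class Event:
    person: str   # identity from face/body recognition (hypothetical)
    room: str     # where the tracker last saw this person

def lights(room, on):
    """Stand-in for an actuator command; returns a string for illustration."""
    return f"lights:{room}:{'on' if on else 'off'}"

def handle(event, last_room):
    """Turn lights on where a person arrives and off where they just left."""
    actions = [lights(event.room, True)]
    if last_room and last_room != event.room:
        actions.append(lights(last_room, False))
    return actions

print(handle(Event("alice", "kitchen"), last_room="hall"))
# ['lights:kitchen:on', 'lights:hall:off']
```

A real system would swap the string stand-ins for actual actuator calls and keep per-person state, but the point is the same: the tracker's events, not wall buttons, become the trigger.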

"I know its stupidly overdone and unreasonably unneccesary but wouldnt it be awesome if ..."

 

CPU: Ryzen 5600G (watercooled) Cooling: 2x XSPC TX240  MB: Gigabyte B550i  RAM: 32GB 3600Mhz Corsair LPX GPU: MSI GTX1080 Ti Aero @ 2 GHz (watercooled)  DISPLAY: LG 32GN600-B SSD(OS): Samsung 960 EVO 250GB SSD(Games): Corsair MP510 960GB SSD(Applikations): Samsung 850 EVO 500GB  HDD(Scratch): WD Blue 500GB HDD(Downloads): WD Blue 320GB  PSU: Corsair SF750 Case: Hyte Revolt 3 Mouse: Logitech MX Master Keyboard: Logitech G513 Carbon

 


 

Logic alone cannot describe the behavior of consciousness, because the seat of all awareness is the profoundly irrational unconscious, and the unconscious is an unknowable essence whose behavior science cannot describe.

For an artificial intelligence to be fully intelligent, it would need to be constructed from autonomous irrational systems and not be bound by the artificial rules of logic, which would mean it would most likely not care for us or our methods of communication.

 

SAL-9000: "Will I dream?"


Currently building Ultron in my garage.  Scheduled release date is March 2025.  

"And I'll be damned if I let myself trip from a lesser man's ledge"


6 minutes ago, Velcade said:

Currently building Ultron in my garage.  Scheduled release date is March 2025.  

You saying you've got an Infinity Stone lying around? :D


1 minute ago, cluelessgenius said:

you saying you got an infinity stone laying around? :D

Shhhhhh... ;)


8 minutes ago, Velcade said:

Shhhhhh... ;)



8 hours ago, cluelessgenius said:

so I was wondering first: why not track people in the house and use that data, alongside recognition to identify them? That would make the whole system way better

Oh, that's already possible. Allegedly, Bill Gates' regular guests get RFID tags. The automation system ranks them based on some "importance" hierarchy he has, and the rooms adjust temperature and light levels, route phone calls, and even change the art in rooms and hallways to suit the highest-ranked person in the room.

It's not quite smart enough to respond to voice commands and such (the system was built 20 years ago, after all), but it's a pretty good start.
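The core trick there is just priority-based conflict resolution: when several tagged people share a room, the room follows the highest-ranked one's settings. A toy sketch (all names, roles, and rankings invented for illustration):

```python
# Hedged sketch of priority-based room settings. Lower rank number
# means higher priority. Occupant data would come from RFID reads
# in a real system; here it's hard-coded.

RANK = {"owner": 0, "family": 1, "guest": 2}

def room_profile(occupants):
    """occupants: list of (name, role, settings) tuples.
    Returns the name and settings of the highest-priority person."""
    if not occupants:
        return None
    top = min(occupants, key=lambda o: RANK[o[1]])
    return top[0], top[2]

people = [
    ("guest_a", "guest", {"art": "landscape"}),
    ("bill", "owner", {"art": "monet"}),
]
print(room_profile(people))  # the owner's settings win
```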

 

ENCRYPTION IS NOT A CRIME


nope.

 

JARVIS means Ultron

Ultron means Vision

Vision means Mind Stone

Mind Stone means Infinity Stones

Infinity Stones means Thanos

Thanos means 1:2 life

1:2 life means prosperity for the planet and no more overpopulation 

 

 

wait, yes, let's make that happen

🌲🌲🌲

Judge the product by its own merits, not by the Company that created it.


By 2020? No way.

 

There's a big difference between a computer system that can look something up on Wikipedia and read you the first line of the summary (Google, Siri, etc.) and one that can look at that same page and actually find the details you want. Writing a program to respond to vague vocal prompts is extremely difficult. Having that same program respond with sarcasm and humor is even MORE difficult.

 

Just think about it for a second: currently you can only ask those devices simple questions, like "What's the height of the Hoover Dam?" But if you wanted to ask "Why is the Hoover Dam as tall as it is?" or "Why did people die when building the dam?", you likely won't get an answer. The programs simply can't parse that much information quickly and intelligently enough to search for the words "death", "die", "died", etc. the way a human would, then read the correct sentences back. You can currently ask who, what, where, and when, but not why or how. Unfortunately, the why and how of things are often the hardest to explain.
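Even the naive keyword version of that human skimming strategy is easy to sketch, and it shows how far it is from real understanding: it finds sentences containing variants of a word, but has no idea whether they actually answer "why". The article text and stems below are made up for the example.

```python
# Toy keyword scan: return sentences containing morphological
# variants of the query stems (die -> died, death -> deaths, etc.).
# This is a sketch of the skimming idea, not how real assistants work.

import re

ARTICLE = (
    "Construction of the dam began in 1931. "
    "Over one hundred workers died during construction. "
    "The dam is 221 meters tall."
)

def find_sentences(text, stems):
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(re.search(rf"\b{stem}\w*", s, re.IGNORECASE) for stem in stems)]

print(find_sentences(ARTICLE, ["die", "death"]))
```

It retrieves the "died" sentence, but it would match "diet" just as happily, and it can't rank, paraphrase, or explain; that gap is exactly the "why/how" problem described above.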

