Xbox One has an implementation of the hUMA memory system, just like the PS4, says dev

snowComet

So you're telling me I can have a thin Razer Blade with long battery life?

Depending on how quickly the tech is adopted, yes.

 

But consider this: doing something like CoD with all Sky-assisted servers would be a MASSIVE hit to one's budget. I wouldn't imagine it will be adopted too quickly, simply because of budget. Probably within the next 5 years I'd say some larger companies will start using it.


Depending on how quickly the tech is adopted, yes.

 

But consider this: doing something like CoD with all Sky-assisted servers would be a MASSIVE hit to one's budget. I wouldn't imagine it will be adopted too quickly, simply because of budget. Probably within the next 5 years I'd say some larger companies will start using it.

That's probably true. :(

 

I do have to disagree with the timeline though: I think it will be 2 to 3 years, not 5, simply because they have the funds for it; they just need to implement it and gain support for it.

Back from the dead....


So it's like a Kaveri with an older CPU!

The only way to deal with an unfree world is to become so absolutely free that your very existence is an act of rebellion.


Yep, cloud gaming is probably where their performance upgrades will come from. These mid-range PCs will still see performance growth after launch as AMD Sky launches and is implemented into Xbox Live and PSN. They could probably do 4K cloud gaming at 30 FPS or so, depending on the server and what GPU is in the rack.

 

Are you crazy? 4K gaming over the cloud? Man, that is some wet dream you have there.

 

4K is at least 4x 1080p: four 1920x1080 screens combined. That requires beastly bandwidth, and currently no PHYSICAL solution delivers it except for DVI something.

 

And you want to send that over the freakin' internet 30 times per second? Mwhahahaha, good luck.

So... If Jesus had the gold, would he buy himself out instead of waiting 3 days for the respawn?

CPU: Phenom II x6 1045t ][ GPU: GeForce 9600GT 512mb DDR3 ][ Motherboard: Gigabyte GA-MA770T-UD3P ][ RAM: 2x4GB Kingston 1333MHz CL9 DDR3 ][ HDD: Western Digital Green 2TB ][ PSU: Chieftec 500AB A ][ Case: No-name without airflow or dust filters Budget saved for an upgrade so far: 2400PLN (600€) - Initial 2800PLN (700€) Upgraded already: CPU


Also, the Jaguar cores boost to 1.8GHz, so that pulls them even further ahead ^_^. Parallelism is a great concept and Intel recognises its importance (Hyper-Threading), but they don't actually have anything besides LGA 2011 (and even that is quite limited :/ ) to address "moar cores = moar better", because in games Hyper-Threading is useless; they need more cores, not more threads ^_^. So I'm expecting that once the console optimisations hit, we'll see 6-core mainstream and 8-core extreme.

lol. I'm gonna be happy once it reaches 4 cores. It isn't easy to optimize a game for 8 cores. And Intel is probably never going to do 8 cores because that means 16 threads; no way is Intel going to spend what it costs to make a $1000 Xeon on a $300 i7. Intel and AMD have 8 threads already, and that is the max that consoles can even be optimized for (which they won't be; I'd be hopeful expecting 5-6 threads being utilized). Better optimization? Kind of. More cores? No...

Finally my Santa hat doesn't look out of place


lol. I'm gonna be happy once it reaches 4 cores. It isn't easy to optimize a game for 8 cores. And Intel is probably never going to do 8 cores because that means 16 threads; no way is Intel going to spend what it costs to make a $1000 Xeon on a $300 i7. Intel and AMD have 8 threads already, and that is the max that consoles can even be optimized for (which they won't be; I'd be hopeful expecting 5-6 threads being utilized). Better optimization? Kind of. More cores? No...

No, they do not. AMD has 8 cores, not 8 threads. Threads < cores.

 

And stop typing as if you know anything about computer architecture, or a game's architecture for that matter. I could optimize a game for 16-core dual-CPU systems if I wanted.


lol. I'm gonna be happy once it reaches 4 cores. It isn't easy to optimize a game for 8 cores. And Intel is probably never going to do 8 cores because that means 16 threads; no way is Intel going to spend what it costs to make a $1000 Xeon on a $300 i7. Intel and AMD have 8 threads already, and that is the max that consoles can even be optimized for (which they won't be; I'd be hopeful expecting 5-6 threads being utilized). Better optimization? Kind of. More cores? No...

No, they do not. AMD has 8 cores, not 8 threads. Threads < cores.

 

And stop typing as if you know anything about computer architecture, or a game's architecture for that matter. I could optimize a game for 16-core dual-CPU systems if I wanted.

It seems everyone sees multi-threading optimisation as some kind of talent that only godly programmers have ._. It's not; get the idea that it's hard out of your mind. The only problem is that it takes a long time, stretching development to roughly 5-6x as long, so when you're a game company paying by the hour, that's a lot of profit missed out on. And as long as people continue to buy badly optimised games, companies will continue to think it's okay to do something like that.

 

The moment you've split the initial operations into 2 threads, the rest from there is MUCH easier; I can't stress this enough.
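That "split into two threads first" idea can be sketched in a few lines. Here's a minimal Python sketch (Python's GIL limits true CPU parallelism, so treat this as structure, not a benchmark; the task names are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(entities):
    # stand-in "game logic": advance each (position, velocity) pair
    return [(x + vx, vx) for (x, vx) in entities]

def mix_audio(samples):
    # stand-in secondary task that can run alongside the simulation
    return sum(samples) / len(samples)

def run_frame(entities, samples):
    # the initial two-way split: once the frame's work is expressed as
    # independent tasks, adding more workers later is mostly repetition
    with ThreadPoolExecutor(max_workers=2) as pool:
        sim = pool.submit(simulate, entities)
        audio = pool.submit(mix_audio, samples)
        return sim.result(), audio.result()

moved, level = run_frame([(0, 1), (10, -2)], [0.0, 1.0])
```

The key property is that neither task reads the other's data, so no locking is needed yet; the pain only starts once tasks share state.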

Console optimisations and how they will effect you | The difference between AMD cores and Intel cores | Memory Bus size and how it effects your VRAM usage |
How much vram do you actually need? | APUs and the future of processing | Projects: SO - here

Intel i7 5820l @ with Corsair H110 | 32GB DDR4 RAM @ 1600Mhz | XFX Radeon R9 290 @ 1.2Ghz | Corsair 600Q | Corsair TX650 | Probably too much corsair but meh should have had a Corsair SSD and RAM | 1.3TB HDD Space | Sennheiser HD598 | Beyerdynamic Custom One Pro | Blue Snowball


It seems everyone sees multi-threading optimisation as some kind of talent that only godly programmers have ._. It's not; get the idea that it's hard out of your mind. The only problem is that it takes a long time, stretching development to roughly 5-6x as long, so when you're a game company paying by the hour, that's a lot of profit missed out on. And as long as people continue to buy badly optimised games, companies will continue to think it's okay to do something like that.

 

The moment you've split the initial operations into 2 threads, the rest from there is MUCH easier; I can't stress this enough.

Yes, but so far, never in game development history has even one truly multi-threaded game been created. Never. John Carmack talked about this in his keynote recently; he tried to remake an old game (I think Quake or Doom?) as a multi-threaded game (TRULY multi-threaded, as in each character's AI is a different thread).

 

He COULDN'T do it. He said it should be possible, but he hasn't worked on/thought about this long enough. The problem is synchronising all that sh-t at a playable 60 frames per second while sending all of it to a GPU, etc. Really hard, and damn out of our reach until some brilliant programmer (yes, a godlike-skilled one; maybe it will be John Carmack one day) makes the first truly multi-threaded game. From there everything will snowball and be much easier.



Are you crazy? 4K gaming over the cloud? Man, that is some wet dream you have there.

 

4K is at least 4x 1080p: four 1920x1080 screens combined. That requires beastly bandwidth, and currently no PHYSICAL solution delivers it except for DVI something.

 

And you want to send that over the freakin' internet 30 times per second? Mwhahahaha, good luck.

 

You are forgetting that DVI is uncompressed, while what we will send over the internet will be compressed. Currently my connection can handle 1080p @ 60 FPS content streaming in H.264. The new codec standard, H.265, cuts the required bandwidth roughly in half. That means 4K will require just twice the bandwidth of 1080p content at the same FPS. My connection can handle 60 FPS, so dropping to 30 FPS cuts that in half again. That means I need about as much bandwidth for 4K 30 FPS content encoded with H.265 as I need for 1080p 60 FPS content encoded with H.264, and I already have that (Eastern Europe, good cheap internet :) ).

 

Below is a sample of 4K video running at 25 FPS. Yes, this will require the GPUs in the cloud to have hardware encoding, but Kepler GPUs already have that and I think Radeon Sky will also have it. This runs fluidly on my system and never goes above 25% of my network speed. Of course, encoded 4K isn't as good as raw 4K, but every movie we watch is encoded and most people don't complain.

http://www.youtube.com/watch?v=k_okcNVZqqI

 

There's still the issue of cost, because when you buy a game they basically also have to provide you with the GPU, and currently only expensive GPUs can handle 4K, so that will take some time.
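As a sanity check, here's the same scaling argument as arithmetic. The 12 Mbps baseline and the "H.265 halves the bitrate" ratio are rough illustrative assumptions, not measurements; under them, a 4K 30 FPS H.265 stream needs about the same bandwidth as the 1080p 60 FPS H.264 stream the connection already handles:

```python
def stream_mbps(base_1080p60_mbps, pixel_scale, fps_scale, codec_scale):
    # crude model: bitrate scales ~linearly with pixel count and frame
    # rate, and by the codec's relative efficiency
    return base_1080p60_mbps * pixel_scale * fps_scale * codec_scale

BASE = 12.0  # assumed ~12 Mbps for 1080p60 H.264 streaming (illustrative)

# 4K = 4x the pixels, 30 FPS = half the frame rate, H.265 ~ half the bitrate
est_4k30_h265 = stream_mbps(BASE, 4, 0.5, 0.5)
est_1080p60_h264 = stream_mbps(BASE, 1, 1, 1)
```

The factors cancel out (4 x 0.5 x 0.5 = 1), which is the whole point of the argument.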

 

No, they do not. AMD has 8 cores, not 8 threads. Threads < cores.

 

And stop typing as if you know anything about computer architecture, or a game's architecture for that matter. I could optimize a game for 16-core dual-CPU systems if I wanted.

 

Did you already do this? Did you solve cache conflicts? If you haven't actually done something, don't say you can; you will be surprised what happens when you start working with complex systems.

 

It seems everyone sees multi-threading optimisation as some kind of talent that only godly programmers have ._. It's not; get the idea that it's hard out of your mind. The only problem is that it takes a long time, stretching development to roughly 5-6x as long, so when you're a game company paying by the hour, that's a lot of profit missed out on. And as long as people continue to buy badly optimised games, companies will continue to think it's okay to do something like that.

 

The moment you've split the initial operations into 2 threads, the rest from there is MUCH easier; I can't stress this enough.

 

There is one more thing to take into account: not all algorithms can be split effectively for multithreading. There are different reasons for this. Some steps need the information from all the other steps, so they have to be solved serially; some require a lot of synchronising between threads, which makes them wait for each other (that's why using 4 cores usually only increases performance by about 3x). As Dravic mentioned, there's also the problem of balancing the threads. What task do you give each thread? Do I issue render commands from each of them? But Direct3D and OpenGL have only one actual rendering thread; if you call a render command from the other ones, it gets sent to the render thread first. The problem with making games multithreaded isn't just writing code to take advantage of multiple cores; you also have to invent new algorithms that solve the problem in a way that can be split.

 

And please don't bring up the PS3 example. The Cell Broadband processor works completely differently from a normal multicore system.

 

I know about these problems because we worked with this at university. We had to solve problems using most types of multithreading: multi-core, multi-CPU, multi-PC (a small private cloud), a blade system (16 racks of dual Cell Broadband processors) and CUDA (single GPU). I haven't worked with multi-GPU systems, so I can't comment on SLI/CrossFire.
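The "4 cores usually buys ~3x" observation is essentially Amdahl's law. A quick sketch (the 10% serial fraction is just an illustrative assumption):

```python
def amdahl_speedup(serial_fraction, cores):
    # Amdahl's law: the serial part never gets faster, the parallel
    # part divides across the cores
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s4 = amdahl_speedup(0.10, 4)    # ~3.08x on 4 cores, not 4x
s64 = amdahl_speedup(0.10, 64)  # still under 9x even on 64 cores
```

Even a small serial fraction puts a hard ceiling on the speedup, which is why inventing algorithms with less inherent serialism matters more than just spawning threads.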


Everyone here is banging on about 4K gaming, yet here's me still perfectly happy with my 1600x900 monitor. I'll probably upgrade to a 1920x1080 monitor and be done with it. Seriously, do we even need 4K? 1080p looks perfect to me. I was at a friend's house watching Sky HD on his 40-inch TV and it looked amazing. I seriously don't think 4K will make much of a difference unless it's somewhere like a cinema where the screen is massive.

System Specs:

CPU: Ryzen 7 5800X

GPU: Radeon RX 7900 XT 

RAM: 32GB 3600MHz

HDD: 1TB Sabrent NVMe -  WD 1TB Black - WD 2TB Green -  WD 4TB Blue

MB: Gigabyte  B550 Gaming X- RGB Disabled

PSU: Corsair RM850x 80 Plus Gold

Case: BeQuiet! Silent Base 801 Black

Cooler: Noctua NH-DH15



Yes, but so far, never in game development history has even one truly multi-threaded game been created. Never. John Carmack talked about this in his keynote recently; he tried to remake an old game (I think Quake or Doom?) as a multi-threaded game (TRULY multi-threaded, as in each character's AI is a different thread).

 

He COULDN'T do it. He said it should be possible, but he hasn't worked on/thought about this long enough. The problem is synchronising all that sh-t at a playable 60 frames per second while sending all of it to a GPU, etc. Really hard, and damn out of our reach until some brilliant programmer (yes, a godlike-skilled one; maybe it will be John Carmack one day) makes the first truly multi-threaded game. From there everything will snowball and be much easier.

I made a school project use all 6 cores. Whether it was perfectly threaded or not is up in the air, but it isn't impossible, and if you get 8 people working on the same thing I'm sure it can be done. It's just that most (all) coding is basically done in order; having 2 lines of code execute simultaneously seems crazy difficult for most people, and that is the first issue to tackle, imho. With AMD's hUMA stuff coming out, it's a time when a small minority will possibly look at architecture and code again on a different scale.


You are forgetting that DVI is uncompressed, while what we will send over the internet will be compressed. Currently my connection can handle 1080p @ 60 FPS content streaming in H.264. The new codec standard, H.265, cuts the required bandwidth roughly in half. That means 4K will require just twice the bandwidth of 1080p content at the same FPS. My connection can handle 60 FPS, so dropping to 30 FPS cuts that in half again. That means I need about as much bandwidth for 4K 30 FPS content encoded with H.265 as I need for 1080p 60 FPS content encoded with H.264, and I already have that (Eastern Europe, good cheap internet :) ).

 

Below is a sample of 4K video running at 25 FPS. Yes, this will require the GPUs in the cloud to have hardware encoding, but Kepler GPUs already have that and I think Radeon Sky will also have it. This runs fluidly on my system and never goes above 25% of my network speed. Of course, encoded 4K isn't as good as raw 4K, but every movie we watch is encoded and most people don't complain.

http://www.youtube.com/watch?v=k_okcNVZqqI

 

There's still the issue of cost, because when you buy a game they basically also have to provide you with the GPU, and currently only expensive GPUs can handle 4K, so that will take some time.

 

 

Did you already do this? Did you solve cache conflicts? If you haven't actually done something, don't say you can; you will be surprised what happens when you start working with complex systems.

 

 

There is one more thing to take into account: not all algorithms can be split effectively for multithreading. There are different reasons for this. Some steps need the information from all the other steps, so they have to be solved serially; some require a lot of synchronising between threads, which makes them wait for each other (that's why using 4 cores usually only increases performance by about 3x). As Dravic mentioned, there's also the problem of balancing the threads. What task do you give each thread? Do I issue render commands from each of them? But Direct3D and OpenGL have only one actual rendering thread; if you call a render command from the other ones, it gets sent to the render thread first. The problem with making games multithreaded isn't just writing code to take advantage of multiple cores; you also have to invent new algorithms that solve the problem in a way that can be split.

 

And please don't bring up the PS3 example. The Cell Broadband processor works completely differently from a normal multicore system.

 

I know about these problems because we worked with this at university. We had to solve problems using most types of multithreading: multi-core, multi-CPU, multi-PC (a small private cloud), a blade system (16 racks of dual Cell Broadband processors) and CUDA (single GPU). I haven't worked with multi-GPU systems, so I can't comment on SLI/CrossFire.

I do realise that the initial process of figuring out what to put on each thread is quite difficult, but what I'm saying is that once you have a structure and have split a task into 2-4 threads, you can simply divide each of those tasks up again and put them on other threads. The whole "one thread being forced to wait for another" problem is a great example of having to think outside the box ^_^.

It's an issue I ran into when I was attempting to create a multi-threaded calculator (don't ask :P ). I was having a little competition with my friends on making a calculator; I went over the top, implemented a benchmark system and made it use multiple threads. My main issue was that, to allow very long calculations, I was using a function to store the number each time the user clicked any of the buttons. This worked perfectly fine on a single thread, but once I started using multiple threads the order of the calculation got messed up, and I was completely stuck for a while :P.

My solution in the end was to move the if/else statement that decides which variable to store the number in (I created 64 variables to store each number ._. that was hell to type out) to before the function is called, and then feed that to the function so it knows where to slot itself in. So even if another thread is filling in mathnum5, it knows it needs to go into mathnum3 :P.

This is kind of tl;dr, but I think you get the point: if something that simple had problems, then something big like a game must have loads of them, and it's true >_< multi-threading is horrible in a game, but imo it's worth it.
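The fix described above, deciding the destination slot on the main thread before handing work off so that out-of-order completion can't scramble results, is a standard pattern. A minimal Python sketch (the names and values are made up for illustration):

```python
import threading

def worker(slot, value, results):
    # each thread writes only to its pre-assigned slot, so the order
    # in which threads finish can no longer scramble the results
    results[slot] = value * value

values = [3, 1, 4, 1, 5]
results = [None] * len(values)
threads = []
for slot, v in enumerate(values):  # slot is decided BEFORE dispatch
    t = threading.Thread(target=worker, args=(slot, v, results))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
```

Because every thread owns exactly one slot, no locking is needed and the result order matches the input order regardless of scheduling.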



Yes, but so far, never in game development history has even one truly multi-threaded game been created. Never. John Carmack talked about this in his keynote recently; he tried to remake an old game (I think Quake or Doom?) as a multi-threaded game (TRULY multi-threaded, as in each character's AI is a different thread).

 

He COULDN'T do it. He said it should be possible, but he hasn't worked on/thought about this long enough. The problem is synchronising all that sh-t at a playable 60 frames per second while sending all of it to a GPU, etc. Really hard, and damn out of our reach until some brilliant programmer (yes, a godlike-skilled one; maybe it will be John Carmack one day) makes the first truly multi-threaded game. From there everything will snowball and be much easier.

Assigning each AI its own thread is silly because we don't have 32 or 64 cores to handle that many characters on individual threads :/. Alternating which thread each AI goes on would be a smarter and much simpler idea (AI one goes to thread 1, the next to thread 2, the next to thread 3, the next to thread 4, then wrap back to 1).

What I can say, however, is that getting the threads to communicate with each other is a pain, I'm not going to lie. It would probably involve a tonne of waiting by the main thread, and probably a function on the main thread that simply works as a passthrough for the other threads to communicate through while it does other things.

I'm not a team of 8-16 people, so I couldn't do this myself in a reasonable amount of time (by the time I got halfway through, some AAA title would probably come out with it done); otherwise I'd consider making my own engine from scratch and doing something like that. Until then I'll gladly use my single-threaded engines and add loads of multi-threaded code to them ^_^. Maybe I'll eventually rack up enough code to fill the other threads with so many individual side tasks that they're at 100%. Sounds like a pipedream, but it's possible.
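The rotation idea (NPC 1 to thread 1, NPC 2 to thread 2, ..., wrap around) is essentially a fixed worker pool. A Python sketch of the structure (again, the GIL means this shows the shape of the code rather than real parallel speedup, and the NPC fields are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def update_npc(npc):
    # stand-in for one NPC's AI tick
    return {"id": npc["id"], "hp": npc["hp"] - 1}

def tick(npcs, workers=4):
    # a fixed pool generalises the strict 1,2,3,4,1,2,... rotation:
    # each NPC update goes to whichever of the 4 threads is free,
    # and map() returns results in the input order regardless of
    # which worker finished first
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_npc, npcs))

updated = tick([{"id": i, "hp": 10} for i in range(8)])
```

Letting the pool hand work to whichever thread is idle also load-balances better than a strict rotation when some NPCs take longer than others.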



Assigning each AI its own thread is silly because we don't have 32 or 64 cores to handle that many characters on individual threads :/. Alternating which thread each AI goes on would be a smarter and much simpler idea (AI one goes to thread 1, the next to thread 2, the next to thread 3, the next to thread 4, then wrap back to 1).

What I can say, however, is that getting the threads to communicate with each other is a pain, I'm not going to lie. It would probably involve a tonne of waiting by the main thread, and probably a function on the main thread that simply works as a passthrough for the other threads to communicate through while it does other things.

I'm not a team of 8-16 people, so I couldn't do this myself in a reasonable amount of time (by the time I got halfway through, some AAA title would probably come out with it done); otherwise I'd consider making my own engine from scratch and doing something like that. Until then I'll gladly use my single-threaded engines and add loads of multi-threaded code to them ^_^. Maybe I'll eventually rack up enough code to fill the other threads with so many individual side tasks that they're at 100%. Sounds like a pipedream, but it's possible.

 

You are wrong. I thought you understood Carmack's keynote correctly. Or maybe I am wrong. But anyway.

 

If you have each AI NPC on a different thread, the system manages the threads, assigning them to cores as they become available. It doesn't matter if you have a quad-, hexa- or 32-core CPU, because it should work the same and simply scale better as we add more cores, while still being playable on a normal quadcore (with multiple threads assigned per core). If you get what I mean.



You are wrong. I thought you understood Carmack's keynote correctly. Or maybe I am wrong. But anyway.

 

If you have each AI NPC on a different thread, the system manages the threads, assigning them to cores as they become available. It doesn't matter if you have a quad-, hexa- or 32-core CPU, because it should work the same and simply scale better as we add more cores, while still being playable on a normal quadcore (with multiple threads assigned per core). If you get what I mean.

We're kinda saying the same thing ^_^ I misunderstood you slightly; people have been swapping "core" and "thread" too much ._. It can work having each thread run an AI, but the problem is that you need at least one thread for the communication between the others. On consoles, although games only have access to (6, I think you said?) cores, there's a possibility the OS itself will handle this ^_^ or they'll simply dedicate a thread to it, or they'll have another solution that I'm quite interested to see.



You are wrong. I thought you understood Carmack's keynote correctly. Or maybe i am wrong. But anyway.

 

If you have each AI NPC on different thread the system is managing the threads assigning them to diff. threads as they are available. It doesnt matter if you have quad, hexa or 32-core CPU because it should work the same and simply scale better as we add more cores but still be playable on like normal quadcore (assigned with multiple threads per core). If you get what i mean.

 

Switching between threads is expensive; you can expect to waste about 30 microseconds per thread switch (it's not computationally expensive, it just takes time). That means that if you have a 64-thread application running on a single-core CPU, you will waste about 2 milliseconds (out of the ~16.6 available at 60 FPS) just changing between AI threads. That's 12% of your processing time lost in the worst case, and that's with only 64 AI threads and nothing else; on a real computer there are a lot of other threads that need servicing (all running OS threads, services, drivers, everything). Most people believe multithreading is free, but it's actually quite expensive compared to mathematical/logical operations on the CPU. And this is on today's hardware; in the past it was even worse! That's probably why Carmack decided against it. It sounds great on paper, but implementing it is harder than it should be.
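That frame-budget arithmetic is easy to reproduce (the 30 µs per switch figure is the post's own estimate, not a measured constant):

```python
SWITCH_COST_US = 30          # assumed cost of one thread switch
AI_THREADS = 64
FRAME_BUDGET_MS = 1000 / 60  # ~16.67 ms per frame at 60 FPS

# total switching overhead per frame, and its share of the budget
overhead_ms = AI_THREADS * SWITCH_COST_US / 1000
fraction_lost = overhead_ms / FRAME_BUDGET_MS
```

Under these assumptions the overhead is 1.92 ms, about 11.5% of the frame, which matches the roughly 12% figure above.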

 

We're kinda saying the same thing ^_^ I misunderstood you slightly; people have been swapping "core" and "thread" too much ._. It can work having each thread run an AI, but the problem is that you need at least one thread for the communication between the others. On consoles, although games only have access to (6, I think you said?) cores, there's a possibility the OS itself will handle this ^_^ or they'll simply dedicate a thread to it, or they'll have another solution that I'm quite interested to see.

 

You don't need a channel (communication) thread in this case; you can use other methods to keep them in sync. Although the idea of having the right number of threads and just letting each solve problems as fast as it can sounds more effective.


no, they do not. amd has 8 cores. not 8 threads. threads < cores

 

and stop typing about as if you know anything about computer architecture. nor a games architecture for that matter. i could optimize a game for 16 core dual cpu systems if i wanted.

Not really. AMD has 8 threads on 4 Bulldozer modules. Still not an 8-core.



It seems everyone sees multi-threading optimisation as some kind of talent that only godly programmers have ._. It's not; get the idea that it's hard out of your mind. The only problem is that it takes a long time, stretching development to roughly 5-6x as long, so when you're a game company paying by the hour, that's a lot of profit missed out on. And as long as people continue to buy badly optimised games, companies will continue to think it's okay to do something like that.

 

The moment you've split the initial operations into 2 threads, the rest from there is MUCH easier; I can't stress this enough.

I know the actual process isn't hard per se, but each core has to actually do something. You can't just write a line of code and enable 6 cores to be used in a game; that is what many people think, and it just isn't true. It takes time to optimize. And people are never going to stop buying a good game just because only 2 cores are being worked. That is why I prefer IPC > cores. Until game developers stop being so lazy/profit-hungry, IPC is the only solution.

 

Just had an idea though. Wouldn't it be possible to do something similar to CrossFire/SLI with 2 cores? I know each process takes a unique amount of time to run, but it should be doable. Intel could define a standard for what a process is, and as long as the developer follows it, it could work. Although if you go through all of that trouble, you might as well just optimize your game for more cores.



Not really. AMD has 8 threads on 4 Bulldozer modules. Still not an 8-core.

Wrong again. There are 8 physical cores on the AMD FX-8350; pairs of cores share fetch and decode units, meaning 2 cores per fetch/decode unit. Not a traditional 8-core in that sense, but there are 8 physical integer units, or whatever jargon you want to thrust on them. But AMD does not do "threads"; a thread is nothing more than a facsimile of a core.


I call bullshit. I'm tired of seeing these threads about how Microsoft has improved the Xbox to be better than or as good as the PS4. Don't get me wrong, I own both a PS3 and an Xbox and love them both, but it's stupid how they keep piling more and more on. How do we know what is really the truth?

My rig: Case: Corsair 760T CPU: Intel 4690k MOBO: MSI Z79 Gaming 5 RaM: 16gb HyperX SSD: 256gb Samsung pro HDD: 1tb Toshiba PSU: Thermaltake smart 750 GPU: 1x GTX 1080 Founders edition



Wrong again. There are 8 physical cores on the AMD FX-8350; pairs of cores share fetch and decode units, meaning 2 cores per fetch/decode unit. Not a traditional 8-core in that sense, but there are 8 physical integer units, or whatever jargon you want to thrust on them. But AMD does not do "threads"; a thread is nothing more than a facsimile of a core.

Don't get what is so hard to understand. There are 4 modules; Bulldozer technically does not have cores. In each module, some of the resources are duplicated, resulting in better average performance, but it's still not really an 8-core chip.

Finally my Santa hat doesn't look out of place

You might want to check this thread out: Intel & AMD, Architectural Discussion.

Don't know what you mean, as that article helps my point. Here's a quote:

"The building block of AMD's modular architecture is not an x86 core, it's an x86 module, this module contains everything that a core contains but with a number of components doubled, these components include the integer scheduler, its datapath, 16KB of L1 DCache & it's own load/store unit."

Finally my Santa hat doesn't look out of place

Don't get what is so hard to understand. There are 4 modules; Bulldozer technically does not have cores. In each module, some of the resources are duplicated, resulting in better average performance, but it's still not really an 8-core chip.

A CPU that doesn't have cores?

 

And what are they to you, threads or modules? You have to be more specific. You see, I have a plethora of degrees in technology-based fields, and you just seem to

Not really. AMD has 8 threads and 4 Bulldozer modules. Still not an 8-core.

I know the actual process isn't hard per se, but each core has to actually do something. You can't just write one line of code and enable 6 cores to be used in a game, which is what many people think, and it just isn't true. It takes time to optimize, and people are never going to stop buying a good game just because only 2 cores are being worked. That's why I prefer IPC > cores. Until game developers stop being so lazy/profit-hungry, IPC is the only solution.

 

Just kinda had an idea though. Wouldn't it be possible to do something similar to crossfire/SLI with 2 cores? I know each process takes a unique amount of time to be rendered, but shouldn't it be doable? Intel could have a standard for what a process is, and as long as the developer follows it, it could work. Although if you go through all of that trouble, you might as well just optimize your game for more cores.
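The "it's not one line of code" point can be illustrated: to use more cores, the developer has to partition the state, dispatch the work, and merge the results themselves. A hedged Python sketch, with a made-up `simulate_chunk` standing in for per-frame game work (physics, AI, etc.):

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_chunk(entities):
    # Stand-in for per-frame work on a slice of the entity list.
    return sum(e * e for e in entities)

def frame_update(entities, workers=4):
    # None of this partitioning/dispatching/joining happens automatically
    # just because more cores exist -- the developer has to write it.
    chunk = max(1, len(entities) // workers)
    parts = [entities[i:i + chunk] for i in range(0, len(entities), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(simulate_chunk, parts))

if __name__ == "__main__":
    print(frame_update(list(range(10_000))))  # 333283335000
```

Processes rather than threads here because CPython's GIL would serialize pure-Python compute across threads; and even in this trivial case, the work had to be split and re-joined by hand, which is exactly the optimization effort the post is talking about.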

Don't get what is so hard to understand. There are 4 modules; Bulldozer technically does not have cores. In each module, some of the resources are duplicated, resulting in better average performance, but it's still not really an 8-core chip.

Don't know what you mean, as that article helps my point. Here's a quote:

"The building block of AMD's modular architecture is not an x86 core, it's an x86 module, this module contains everything that a core contains but with a number of components doubled, these components include the integer scheduler, its datapath, 16KB of L1 DCache & it's own load/store unit."

Please... STOP!

Read my threads, come back here, apologise. :| You're digging yourself into a hole. ^_^ There's no such thing as a "true core", by the way; integer processors were not always part of a CPU core. Ever heard of a math co-processor? ^_^ You probably haven't, if you're arguing that something isn't a core just because it's missing parts that weren't always part of a core in the first place...

Console optimisations and how they will effect you | The difference between AMD cores and Intel cores | Memory Bus size and how it effects your VRAM usage |
How much vram do you actually need? | APUs and the future of processing | Projects: SO - here

Intel i7 5820l @ with Corsair H110 | 32GB DDR4 RAM @ 1600Mhz | XFX Radeon R9 290 @ 1.2Ghz | Corsair 600Q | Corsair TX650 | Probably too much corsair but meh should have had a Corsair SSD and RAM | 1.3TB HDD Space | Sennheiser HD598 | Beyerdynamic Custom One Pro | Blue Snowball
