
Tech Things You Don't Know But Are Too Afraid To Ask.

Wow, didn't know about those cards till you mentioned them.

 

http://www.nvidia.com/object/quadro-k5000.html#pdpContent=0

 

Amazing technology! But why put PCI-e 3.0 on consumer mobos? Just put it on workstation mobos then. I just see it as a plan to increase the prices mobos are sold at.

Well, yes. It is basically a marketing gimmick.

However, I am not sure just how much a dual-GPU card (like a 690 or 7990) can put out. They may actually need PCI-e 3.0 x16 to not be bottlenecked by the PCI-e slot.

Not sure though.

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


How do instruction sets work? I've been trying to learn these complicated little things, but either they are difficult or I'm too stupid to understand them. I think someone said that SSE, or Streaming SIMD Extensions, are used for math decoding or something, but all of the tech speak regarding those is too difficult for me to grasp.

Not sure about SSE, but I can give you the basics! An instruction has two parts: an operator (the command) and an operand (an address or data). The operator acts on the operand to produce a result, which is stored in the accumulator!
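If it helps, here's a minimal Python sketch of such a machine. The LOAD/ADD/MUL mnemonics and the (operator, operand) program format are made up for illustration, not any real instruction set:

```python
# A toy accumulator machine: each instruction is an (operator, operand)
# pair, and every result lands in the accumulator, as described above.
def run(program):
    acc = 0  # the accumulator
    for operator, operand in program:
        if operator == "LOAD":    # put a value into the accumulator
            acc = operand
        elif operator == "ADD":   # add the operand to the accumulator
            acc += operand
        elif operator == "MUL":   # multiply the accumulator by the operand
            acc *= operand
        else:
            raise ValueError(f"unknown operator: {operator}")
    return acc

# Compute 2*5 + 3 as a sequence of instructions.
print(run([("LOAD", 2), ("MUL", 5), ("ADD", 3)]))  # prints 13
```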


How do instruction sets work? I've been trying to learn these complicated little things, but either they are difficult or I'm too stupid to understand them. I think someone said that SSE, or Streaming SIMD Extensions, are used for math decoding or something, but all of the tech speak regarding those is too difficult for me to grasp.

Instruction sets are exactly what their name suggests: sets of instructions. There isn't anything magical about them: an instruction set might be, for example, a set of instructions related to AES encryption (AES-NI). If a processor ships with this instruction set enabled, it may perform tasks related to AES encryption faster than a processor that does not have it, because the first processor can do things in one instruction where the second processor would potentially need to combine multiple instructions. SSE (Streaming SIMD Extensions) is exactly such a set: it adds instructions that apply one math operation to several numbers at once (SIMD stands for Single Instruction, Multiple Data).

 

Let me explain this a bit further with an example.

Picture a hypothetical CPU whose instruction set consists of addition (+) and multiplication (*). Say you want to perform the calculation 3+2*5. Your program would first calculate 2*5, store the result (10) temporarily, and then perform 3+10, the result of which (13) it would save in a variable. This poses no problem, as all the things you want to do (+ and *) lie within the instruction set of the CPU.

Now consider that you want to perform 3+2^5. This poses a problem. You cannot use your CPU to perform this exponentiation directly: the CPU does not have ^ in its instruction set. Instead, you would need to write a program which performs 2*2*2*2*2, each time storing the intermediate result, afterwards adds 3, and then stores the end result in some variable. This is rather a lot of work.

Now picture a second hypothetical CPU which has not only addition (+) and multiplication (*) in its instruction set, but also exponentiation (^). Using this CPU, you can rewrite your second program to something that simply performs 2^5, stores the result (32) temporarily, and then adds 3, afterwards storing the end result (35) in some variable. This, obviously, makes your program a lot shorter (and thus faster).
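To make that concrete, here's a quick Python sketch of the same idea. The function name is made up for illustration, and Python's ** plays the role of the second CPU's ^ instruction:

```python
# CPU 1: only + and * exist, so ^ has to be built from repeated *.
def power_by_multiplication(base, exponent):
    result = 1
    for _ in range(exponent):    # one * instruction per loop pass
        result = result * base   # store the intermediate result each time
    return result

print(power_by_multiplication(2, 5) + 3)  # five multiplies plus one add: 35

# CPU 2: ^ exists as a single instruction, so the whole job is two steps.
print(2 ** 5 + 3)  # one exponentiation, one addition: 35
```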

 

The difference between the two processors is that the second one has hardware to perform exponentiation, and is thus able to perform it very quickly.

 

This is also where CISC vs. RISC comes in. CISC stands for Complex Instruction Set Computing and is a design methodology followed, in part, by x86 processors. It rests on the principle explained above: if your processor has more instructions it can perform with dedicated hardware (and thus quickly), your processor will be quicker. So x86 has lots of instructions it can access and execute, and is a very complex architecture.

RISC stands for Reduced Instruction Set Computing and is a design methodology followed, in part, by ARM processors. It rests on the principle that a processor may perform better if it has fewer instructions but can perform those instructions more quickly, because the processor has a simpler architecture. ARM processors have a relatively "simple" architecture and support fewer instructions, but can execute those instructions very fast.

 

As always, the truth lies somewhere in the middle. The more instructions your CPU supports, the longer the instructions take to execute, but the fewer instructions a program needs to complete. If your CPU supports a lot of instructions, it might be very efficient for very complex programs (think medical and advanced physics), but in everyday consumer usage (think tablets) a lot of these complex and advanced instructions might go unused, and you might be better off having fewer instructions, so the instructions you use often can be executed faster. If your RISC processor does need to perform complex operations once in a while, you may combine different simple instructions to "build" a complex one (as in the example above, using a multitude of * to "build" a ^).

 

Hope this doesn't sound all too difficult, because it isn't really ;)


Question:

In CPUs, how does the cache memory work? Like, L1, L2, and L3.

For example, in an FX-8320, each core has dedicated L1 cache, shares L2 cache with one other core, and shares L3 cache with all the other cores.

How does that work? Is there a way a CPU could be bottlenecked by a single core using all the L3 cache? Or does each core get a fixed amount even though it's shared (thus defeating the purpose)?

Another example: if I had 4 cores dedicated to gaming and 4 other cores dedicated to video rendering, running both at the same time, would I notice a difference between doing that and just using 4 cores to game while leaving the other 4 cores alone?

Could the other 4 cores hog all the L3 cache, hindering the first 4?

Sorry, I know that's a lot of questions, but it's more so about one topic that was brought up in another thread and I'm curious. 

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


Question:

In CPUs, how does the cache memory work? Like, L1, L2, and L3.

For example, in an FX-8320, each core has dedicated L1 cache, shares L2 cache with one other core, and shares L3 cache with all the other cores.

How does that work? Is there a way a CPU could be bottlenecked by a single core using all the L3 cache? Or does each core get a fixed amount even though it's shared (thus defeating the purpose)?

Another example: if I had 4 cores dedicated to gaming and 4 other cores dedicated to video rendering, running both at the same time, would I notice a difference between doing that and just using 4 cores to game while leaving the other 4 cores alone?

Could the other 4 cores hog all the L3 cache, hindering the first 4?

Sorry, I know that's a lot of questions, but it's more so about one topic that was brought up in another thread and I'm curious. 

I can't give you an exact answer to every question, but what I do know is this:

 

L1 cache is dedicated per core, as you mentioned. This means that every core always has cache (and the fastest type of it) available.

L2 and L3 cache are shared between multiple cores and are used for, among other things, communication between cores. For example: if core 1 and core 2 share L2 cache and core 2 needs to perform a calculation based on the result of a calculation done by core 1, core 1 can simply put that result in L2 cache. Core 2 can then access the needed information by reading L2. This is a way for cores to communicate with each other very quickly.

 

I think that with smart cache management, every core would get a piece of the cache, but theoretically it should be possible for the L3 cache to fill up with data needed by only one core. Of course, with clever cache management, such an event would only occur when that single core is the only core doing any work. If you study how things get put in cache, you see that it's a very dynamic process of "whatever I just used stays in cache, replacing the thing that has not been accessed for the longest time". So, every time a core accesses a piece of data, it overwrites another piece of data, potentially used by another core. If that second core needs its data again, it requests it from RAM and overwrites something else. As you can see, with such a system, the chance of having the cache full of data from only one core is pretty small.
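That "replace whatever has gone unused the longest" policy is known as least-recently-used (LRU) replacement. Here's a toy Python sketch of the idea; real CPU caches work on fixed-size lines grouped into sets, not Python dicts, so this only illustrates the eviction behavior:

```python
from collections import OrderedDict

# Minimal LRU sketch: the entry that has gone unused the longest
# gets evicted when something new needs the space.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # least-recently-used entry sits at the front

    def access(self, address, value):
        if address in self.entries:
            self.entries.move_to_end(address)  # mark as most recently used
        else:
            if len(self.entries) >= self.capacity:
                evicted = self.entries.popitem(last=False)  # evict the LRU entry
                print(f"evicting {evicted[0]}")
            self.entries[address] = value

cache = LRUCache(capacity=2)
cache.access("A", 1)   # core 1's data
cache.access("B", 2)   # core 2's data
cache.access("A", 1)   # core 1 touches A again, so A is "fresh" now
cache.access("C", 3)   # cache full: B (least recently used) gets evicted
```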

 

If anyone has more detailed information about this, please, do share!


I can't give you an exact answer to every question, but what I do know is this:

 

L1 cache is dedicated per core, as you mentioned. This means that every core always has cache (and the fastest type of it) available.

L2 and L3 cache are shared between multiple cores and are used for, among other things, communication between cores. For example: if core 1 and core 2 share L2 cache and core 2 needs to perform a calculation based on the result of a calculation done by core 1, core 1 can simply put that result in L2 cache. Core 2 can then access the needed information by reading L2. This is a way for cores to communicate with each other very quickly.

 

I think that with smart cache management, every core would get a piece of the cache, but theoretically it should be possible for the L3 cache to fill up with data needed by only one core. Of course, with clever cache management, such an event would only occur when that single core is the only core doing any work. If you study how things get put in cache, you see that it's a very dynamic process of "whatever I just used stays in cache, replacing the thing that has not been accessed for the longest time". So, every time a core accesses a piece of data, it overwrites another piece of data, potentially used by another core. If that second core needs its data again, it requests it from RAM and overwrites something else. As you can see, with such a system, the chance of having the cache full of data from only one core is pretty small.

 

If anyone has more detailed information about this, please, do share!

Well, considering this, in my suggested example where 4 cores are working on video rendering and the other 4 are working on gaming, the speed and latency of the RAM could be the bottleneck in that system.

Theoretically at least.

Interesting. A situation where RAM speed and latency matter. :P

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


Well, considering this, in my suggested example where 4 cores are working on video rendering and the other 4 are working on gaming, the speed and latency of the RAM could be the bottleneck in that system.

Theoretically at least.

Interesting. A situation where RAM speed and latency matter. :P

Well, consider the throughput of RAM: say you have 4 cores, each capable of consuming 64 bits (8 bytes) per cycle, operating at 3.5GHz. That would give you a total demand of 112GB/s. This, of course, is a completely unrealistic scenario that would never occur in the real world. I think that, for most applications and use cases, 25.6GB/s (the bandwidth of dual channel 1600MHz DDR3) should be sufficient.
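Spelling that arithmetic out in a quick sketch (the per-core numbers are the hypothetical ones from above; the DDR3 figures are the standard peak rates):

```python
# Hypothetical worst case: 4 cores each consuming 8 bytes per cycle at 3.5GHz.
cores, bytes_per_cycle, clock_hz = 4, 8, 3.5e9
print(cores * bytes_per_cycle * clock_hz / 1e9)  # 112.0 GB/s demanded

# DDR3-1600 peak: 1600 MT/s over a 64-bit (8-byte) channel = 12.8 GB/s per channel.
channel_bw = 1600e6 * 8
print(2 * channel_bw / 1e9)  # dual channel: 25.6 GB/s
print(4 * channel_bw / 1e9)  # quad channel: 51.2 GB/s
```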

 

About latency: that's a point a lot of people forget. They always want "the fastest RAM" and buy things like DDR3 OVER NINE THOUSAND GIGAHURTZ, but what a lot of people don't realise is that upping the clock speed often means upping the latency as well (look at the figures for CAS latency). Latency, however, is what actually defines how fast your system can randomly request data from RAM.
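To put numbers on that: CAS latency is quoted in memory-clock cycles, so the real wait in nanoseconds is cycles divided by the clock. A quick sketch using typical retail CL pairings (picked for illustration); notice how the cycle count climbs with the clock, so the actual wait in nanoseconds stays in the same ballpark:

```python
# CAS latency is quoted in clock cycles; the actual wait is cycles / clock.
# DDR transfers twice per clock, so the I/O clock is half the MT/s rating.
kits = [("DDR3-1333", 1333, 9), ("DDR3-1600", 1600, 9), ("DDR3-2133", 2133, 11)]
for name, mts, cl in kits:
    io_clock_mhz = mts / 2
    print(f"{name} CL{cl}: {cl / io_clock_mhz * 1000:.2f} ns")
# DDR3-1333 CL9:  13.50 ns
# DDR3-1600 CL9:  11.25 ns
# DDR3-2133 CL11: 10.31 ns
```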


Well, consider the throughput of RAM: say you have 4 cores, each capable of consuming 64 bits (8 bytes) per cycle, operating at 3.5GHz. That would give you a total demand of 112GB/s. This, of course, is a completely unrealistic scenario that would never occur in the real world. I think that, for most applications and use cases, 25.6GB/s (the bandwidth of dual channel 1600MHz DDR3) should be sufficient.

 

About latency: that's a point a lot of people forget. They always want "the fastest RAM" and buy things like DDR3 OVER NINE THOUSAND GIGAHURTZ, but what a lot of people don't realise is that upping the clock speed often means upping the latency as well (look at the figures for CAS latency). Latency, however, is what actually defines how fast your system can randomly request data from RAM.

Well that's sort of what I'm saying.

Although the CPU might be pulling 51.2 GB/s if there were no latency (8 cores in total, so doubled), the latency means the actual amount would either be different or take longer to get. 

I may have this completely wrong though. 

† Christian Member †

For my pertinent links to guides, reviews, and anything similar, go here, and look under the spoiler labeled such. A brief history of Unix and its relation to OS X by Builder.

 

 


Well that's sort of what I'm saying.

Although the CPU might be pulling 51.2 GB/s if there were no latency (8 cores in total, so doubled), the latency means the actual amount would either be different or take longer to get. 

I may have this completely wrong though. 

The 51.2GB/s would be the maximum throughput of quad channel 1600MHz DDR3 ;)

 

And yes, you are very right! If your latency goes up, the actual time from requesting to receiving the data goes up. That doesn't really matter if you want to fill up 10GB of memory; a couple of nanoseconds more or less won't be noticeable. But when your CPU needs to do multiple calculations on different sets of numbers which all need to be requested from RAM, you get: request set 1, wait 12ns, receive set 1, perform calculation, request set 2, wait 12ns, receive set 2, perform calculation, reque... Then all those 12ns delays really add up.

 

Of course, CPU manufacturers counter this by prefetching instructions. Prefetching is a technique based on the idea that if you need a particular instruction, chances are you will need the next couple of instructions in the near future as well. So the CPU always fetches some amount of instructions beyond the current one, so they are ready and waiting in L1/L2/L3 cache by the time the CPU is done with the current instruction. This way, the delays can be lowered, as you are constantly requesting blocks of data instead of one instruction at a time.
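A toy model of why that helps; the 12ns figure comes from the post above, and the item count and block size are just illustrative numbers, not any real CPU's:

```python
# Toy model: every trip to RAM costs a fixed delay; prefetching grabs a
# block of upcoming items per trip instead of one item at a time.
RAM_DELAY_NS = 12      # illustrative round-trip latency from above
ITEMS = 1000           # instructions the CPU will consume in order
PREFETCH_BLOCK = 8     # items fetched per RAM request when prefetching

no_prefetch = ITEMS * RAM_DELAY_NS                           # one wait per item
with_prefetch = -(-ITEMS // PREFETCH_BLOCK) * RAM_DELAY_NS   # one wait per block

print(f"without prefetching: {no_prefetch} ns stalled")   # 12000 ns
print(f"with prefetching:    {with_prefetch} ns stalled") # 1500 ns
```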


Not sure about SSE, but I can give you the basics! An instruction has two parts: an operator (the command) and an operand (an address or data). The operator acts on the operand to produce a result, which is stored in the accumulator!

 

Kind of makes sense, but still confused lol

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


Instruction sets are exactly what their name suggests: sets of instructions. There isn't anything magical about them: an instruction set might be, for example, a set of instructions related to AES encryption (AES-NI). If a processor ships with this instruction set enabled, it may perform tasks related to AES encryption faster than a processor that does not have it, because the first processor can do things in one instruction where the second processor would potentially need to combine multiple instructions. SSE (Streaming SIMD Extensions) is exactly such a set: it adds instructions that apply one math operation to several numbers at once (SIMD stands for Single Instruction, Multiple Data).

 

Let me explain this a bit further with an example.

Picture a hypothetical CPU whose instruction set consists of addition (+) and multiplication (*). Say you want to perform the calculation 3+2*5. Your program would first calculate 2*5, store the result (10) temporarily, and then perform 3+10, the result of which (13) it would save in a variable. This poses no problem, as all the things you want to do (+ and *) lie within the instruction set of the CPU.

Now consider that you want to perform 3+2^5. This poses a problem. You cannot use your CPU to perform this exponentiation directly: the CPU does not have ^ in its instruction set. Instead, you would need to write a program which performs 2*2*2*2*2, each time storing the intermediate result, afterwards adds 3, and then stores the end result in some variable. This is rather a lot of work.

Now picture a second hypothetical CPU which has not only addition (+) and multiplication (*) in its instruction set, but also exponentiation (^). Using this CPU, you can rewrite your second program to something that simply performs 2^5, stores the result (32) temporarily, and then adds 3, afterwards storing the end result (35) in some variable. This, obviously, makes your program a lot shorter (and thus faster).

 

The difference between the two processors is that the second one has hardware to perform exponentiation, and is thus able to perform it very quickly.

 

This is also where CISC vs. RISC comes in. CISC stands for Complex Instruction Set Computing and is a design methodology followed, in part, by x86 processors. It rests on the principle explained above: if your processor has more instructions it can perform with dedicated hardware (and thus quickly), your processor will be quicker. So x86 has lots of instructions it can access and execute, and is a very complex architecture.

RISC stands for Reduced Instruction Set Computing and is a design methodology followed, in part, by ARM processors. It rests on the principle that a processor may perform better if it has fewer instructions but can perform those instructions more quickly, because the processor has a simpler architecture. ARM processors have a relatively "simple" architecture and support fewer instructions, but can execute those instructions very fast.

 

As always, the truth lies somewhere in the middle. The more instructions your CPU supports, the longer the instructions take to execute, but the fewer instructions a program needs to complete. If your CPU supports a lot of instructions, it might be very efficient for very complex programs (think medical and advanced physics), but in everyday consumer usage (think tablets) a lot of these complex and advanced instructions might go unused, and you might be better off having fewer instructions, so the instructions you use often can be executed faster. If your RISC processor does need to perform complex operations once in a while, you may combine different simple instructions to "build" a complex one (as in the example above, using a multitude of * to "build" a ^).

 

Hope this doesn't sound all too difficult, because it isn't really ;)

 

That is an awesome amount of detail and I bestow upon you one like for it, but it's still pretty complicated. I'm just stupid I guess lol

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


 I'm just stupid I guess

Most certainly not! I take courses like computer architecture in my degree (going for Industrial Engineer in Electronics and ICT); that's why I know a thing or two about this stuff.

 

If there are things that need more clarification, I am willing to try :)


Most certainly not! I take courses like computer architecture in my degree (going for Industrial Engineer in Electronics and ICT); that's why I know a thing or two about this stuff.

 

If there are things that need more clarification, I am willing to try :)

 

wow that is so cool :)

 

I find components on an architectural level quite fascinating but damn tricky to get my head around. Cars and aircraft just seem to flow almost effortlessly, but this computer kind of thing takes quite a bit of effort.

 

We've done complicated things, but it's always been in my area of knowledge. Like when we were modifying a 550bhp 2.0L 2nd-gen Toyota MR-2 race car with a turbo-supercharger setup (using the supercharger as an anti-lag device to reduce lag until the turbo spooled up): we set it up sequentially, so that the supercharger would disengage once the turbocharger achieved sufficient boost, without the car losing speed, or losing as little speed as possible. And it worked a charm :P

 

Wow, that was a bit of a tangent again; anyway, I've lost my train of thought.

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


We've done complicated things, but it's always been in my area of knowledge. Like when we were modifying a 550bhp 2.0L 2nd-gen Toyota MR-2 race car with a turbo-supercharger setup (using the supercharger as an anti-lag device to reduce lag until the turbo spooled up): we set it up sequentially, so that the supercharger would disengage once the turbocharger achieved sufficient boost, without the car losing speed, or losing as little speed as possible. And it worked a charm :P

Sounds awesome! Happen to have any pics, or, better yet, a video? I'm quite into engines myself, but I fail to see why you would disengage the supercharger. Is there a benefit to disengaging it?


MG2R, on 04 Jul 2013 - 12:23 AM, said:

Sounds awesome! Happen to have any pics, or, better yet, a video? I'm quite into engines myself, but I fail to see why you would disengage the supercharger. Is there a benefit to disengaging it?

We did have pics but can't find them; the car wasn't ours (damn, I wish).

We disengaged it because the turbocharger provided the engine with all the boost it needed, but it was a damn big turbocharger, and bigger compressors take more time to get up to speed and provide boost, hence why bigger turbos produce more lag than smaller ones. The supercharger was there to provide boost while the turbocharger was off the boost. Because the turbo gave us all the power we needed, there wasn't any point in having the supercharger running (I'm sure you know how they work), sapping the engine's power.

I think that makes sense. I know what I'm talking about, but I've never been good at explaining it lol

The engine was designed to take around 640bhp, but at 550bhp it was more than quick enough; also, the race marshals wouldn't allow the boost to be turned all the way up.

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


Because the turbo gave us all the power we needed, there wasn't any point in having the supercharger running (I'm sure you know how they work), sapping the engine's power.

So at higher RPMs, the smaller supercharger actually needs more power than it delivers? Never knew that. Learned something new today :)


So at higher RPMs, the smaller supercharger actually needs more power than it delivers? Never knew that. Learned something new today :)

 

At higher RPMs, having both chargers running would have given too much boost. We learned that the hard way, because it blew the first engine (I think that was in the region of 660bhp).

 

Any supercharger, regardless of size, gives more power than it takes, but once the turbo came in, the supercharger wasn't needed. Disengaging it stops it running altogether; it was only there for anti-lag reasons in the first place :)

 

Hope I haven't given any wrong info, this is difficult lol, I don't think I'll be a professor any time soon :P

 

 

Skip to around 3:02, when Clarkson drops a gear and floors it. The MR2's supercharger sounded just a little like that until the turbo took over; the sound was incredible.

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


At higher RPMs, having both chargers running would have given too much boost. We learned that the hard way, because it blew the first engine (I think that was in the region of 660bhp).

 

Any supercharger, regardless of size, gives more power than it takes, but once the turbo came in, the supercharger wasn't needed. Disengaging it stops it running altogether; it was only there for anti-lag reasons in the first place :)

 

Hope I haven't given any wrong info, this is difficult lol, I don't think I'll be a professor any time soon :P

 

 

Skip to around 3:02, when Clarkson drops a gear and floors it. The MR2's supercharger sounded just a little like that until the turbo took over; the sound was incredible.

Damn, would've loved a video of that beast :D

 

Thanks for the explanation!


Damn, would've loved a video of that beast :D

 

Thanks for the explanation!

 

No worries, I'll see if I can find any info. We've not seen the owner or the car for a few years, but she was a beast. Quite a few people said that 550bhp reliably from a 2.0L wasn't possible. Pah, what do they know lol

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


Does the GPU or CPU give you higher FPS whilst recording?

SPECS: MOBO - INTEL DZ77GA70K EXTREME, CPU - i5 3570 @ 3.4GHz, MEMORY - CORSAIR VENGEANCE 2 x 4G, GPU - GIGABYTE HD7970 OC, HDD - WESTERN DIGITAL CAVIAR BLACK 1TB, SDD - INTEL 520 SERIES 120GB, CASE - COOLERMASTER RC-682A-KWN5, PSU - THERMALTAKE SMART 750W, KEYBOARD - RAZER BLACKWIDOW ULTIMATE 2013, MOUSE - RAZER DEATHADDER 2013.  :D


Does the GPU or CPU give you higher FPS whilst recording?

 

Err, not sure, but I'd imagine if anything it'll drop.

DESKTOP - Motherboard - Gigabyte GA-Z77X-D3H Processor - Intel Core i5-2500K @ Stock 1.135v Cooling - Cooler Master Hyper TX3 RAM - Kingston Hyper-X Fury White 4x4GB DDR3-1866 Graphics Card - MSI GeForce GTX 780 Lightning PSU - Seasonic M12II EVO Edition 850w  HDD -  WD Caviar  Blue 500GB (Boot Drive)  /  WD Scorpio Black 750GB (Games Storage) / WD Green 2TB (Main Storage) Case - Cooler Master 335U Elite OS - Microsoft Windows 7 Ultimate


Does the GPU or CPU give you higher FPS whilst recording?

Depends on which is doing the encoding.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch


Why do GPU memory clock speeds always differ between tools? For example, GPU-Z says my 680 has a 1502MHz memory speed, whereas EVGA Precision says 6008MHz. I see this with specs listed on websites too; sometimes they say the memory clock speed is like 1200MHz when it is actually 5000MHz.

 

I don't get it. 


Why do GPU memory clock speeds always differ between tools? For example, GPU-Z says my 680 has a 1502MHz memory speed, whereas EVGA Precision says 6008MHz. I see this with specs listed on websites too; sometimes they say the memory clock speed is like 1200MHz when it is actually 5000MHz.

 

I don't get it. 

Those are actually the same memory clock, counted differently. GDDR5 transfers data four times per memory-clock cycle, so some tools report the real clock (1502MHz) while others report the "effective" data rate (1502 × 4 ≈ 6008MHz).
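The arithmetic, for what it's worth (the 1502MHz figure is the one from the question):

```python
# GDDR5 moves data four times per reported memory-clock cycle, so tools
# that show the "effective" rate just multiply the real clock by four.
memory_clock_mhz = 1502           # what GPU-Z reports for a GTX 680
print(memory_clock_mhz * 4)       # 6008, what EVGA Precision reports
```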

