New Razer Blade or MacBook Pro

rf9661
1 hour ago, shippage said:

I mean, I don't see any software being better on a Mac. I have a MacBook and just threw Boot Camp on it and haven't booted into OSX in about a month. Windows on a MacBook runs alright, though not as well as the machine runs on OSX. But on OSX things are also much more poorly optimized, so you don't get full performance there either.

 

But my computing habits are different from yours, so I would say it comes down to which OS you think you'd be using the majority of the time. If OSX, then go MacBook. If Windows, go Razer.

 

I'm currently waiting to get the new Razer Stealth too.

The new Razer Stealth was already released...

But I'll be using macOS most of the time for overall tasks, though I prefer Windows computers for gaming...

 


7 hours ago, zelix said:

 

I have an rMBP 15" (2.5GHz, 750M); playing WoW or any other Blizzard products, I don't think I throttle or otherwise take any hit in performance.

You need to push the 15-inch harder to get it to throttle because it has decent cooling, but it can still happen.


6 hours ago, GR8-Ride said:

Except that in your example, you're concentrating the same amount of energy into a smaller space.   If the 970M is physically larger in package size than the 1060, then this example would stand up.

 

However, I doubt that the package size is different, given that all of these manufacturers were able to basically drop GTX 1060 GPUs into laptops that previously carried GTX 970M modules. That tends to indicate that no major re-engineering of the motherboards was required, as laptops were shipping VERY rapidly after the mobile 1060/1070/1080 announcements were made. If motherboards had to be re-engineered, I would imagine each of the companies would have waited until Kaby Lake processors were available and started from there.

 

Now, it could be pin-compatible with a much smaller actual die size, which would make sense. The 1060 has 4.4B transistors vs. the 5.2B of the 970M/980M units.

 

 

 

Patrick

My example was to show him that just because two chips use the same power, it doesn't mean they reach the same temperatures. It was a simple way of showing that power != heat. The only case where equal power means equal heat is when everything else is the same: if you have one 970M using 60W and another using 90W, the 90W one runs hotter, but we can only say that because everything else is identical. Also, the 1060 is a smaller chip; the whole lineup is smaller from what I have seen. Look at the reference cooler, too: it got a huge upgrade with a vapor chamber, yet these cards run hotter than last gen, which had a worse cooler. The smaller die, plus the fact that it is basically Maxwell with a die shrink running faster, all adds up to a hotter chip.
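
To put rough numbers on that, here is a minimal sketch; the die areas (GM204 roughly 398 mm², GP106 roughly 200 mm²) are public figures used as assumptions, not data from this thread:

```python
# Minimal sketch: equal power through different die areas -> different heat flux.
# Die areas are public figures used as assumptions (GM204 ~398 mm^2, GP106 ~200 mm^2).

def heat_flux(power_w: float, die_area_mm2: float) -> float:
    """Power density at the die surface, in W/mm^2."""
    return power_w / die_area_mm2

POWER_W = 80.0  # same assumed board power for both chips

for name, area_mm2 in [("GTX 970M (GM204)", 398.0), ("GTX 1060 (GP106)", 200.0)]:
    print(f"{name}: {heat_flux(POWER_W, area_mm2):.2f} W/mm^2")

# Same wattage, roughly half the area: the Pascal die pushes about twice the
# flux through its cooler contact patch, which is why equal power != equal temps.
```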


6 hours ago, GR8-Ride said:

snip

You need to take a thermal density course too. Total energy generated directly corresponds to wattage, yes, that is true. However, how well that energy dissipates differs significantly with the die size of the chip; hence thermal density. It's MUCH harder to dissipate highly concentrated heat (Pascal) than less concentrated heat (Maxwell). Hence you see higher temps on all Pascal chips due to the architecture.

 

It's like sunlight. Unfocused, it doesn't do much. Put a magnifying glass in its path and, although the total energy passing through the area remains the same, the end result is burn marks on the ground.
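
The analogy is exactly heat-flux density; stated as a formula (standard physics, not from the thread):

```latex
% Heat flux density: the same power P through a smaller die area A
% yields a higher flux q, which is harder for a cooler to pull out.
q = \frac{P}{A}, \qquad
A_{\mathrm{Pascal}} < A_{\mathrm{Maxwell}}
\;\Longrightarrow\;
q_{\mathrm{Pascal}} > q_{\mathrm{Maxwell}} \quad \text{at equal } P
```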

6 hours ago, GR8-Ride said:

Everything I've seen on the GTX 970M is that it had a TDP of 100W, and everything I've seen on the 1060 Mobility version is that it's expected to be 100W TDP or less (I've seen reports as low as 80W TDP, though NVidia hasn't confirmed anything thus far).

Tons of Pascal laptop results have been posted on Notebookcheck; feel free to look them up. They show Pascal is a much more efficient architecture, consuming less power during gaming loads, more power at max load, and running at much higher temps.

 

5 hours ago, djdwosk97 said:

If the Macs are Skylake-based, then it will be HD580

No laptop has been released with the consumer version of the P580 graphics (Iris Pro 580), and I don't think that's going to change. Only the most expensive Xeon chips have the P580, but all of those machines already have Quadro graphics, which makes it meh.

 

5 hours ago, djdwosk97 said:

Late 2013 -- basically the same as the currently available one. If you wait until the new ones are released (probably in October), then you'd be getting a Skylake or Kaby Lake based Mac.

My mid-2012, first-gen Retina model is essentially the same as the current-gen one if you don't plan to play games. Ivy Bridge quad-core performance on OSX is almost indistinguishable from Haswell quad-core unless you're doing heavier tasks.


8 minutes ago, Pendragon said:

No laptop has been released with the consumer version of the P580 graphics (Iris Pro 580), and I don't think that's going to change. Only the most expensive Xeon chips have the P580, but all of those machines already have Quadro graphics, which makes it meh.

 

My mid-2012, first-gen Retina model is essentially the same as the current-gen one if you don't plan to play games.

 

Ivy Bridge quad-core performance on OSX is almost indistinguishable from Haswell quad-core unless you're doing heavier tasks.

And when Haswell launched, the 13"/15" Macs were basically the only laptops to use Iris/Iris Pro. The one or two others that used Haswell Iris/Iris Pro didn't enter the market until much later. There are currently three i7 SKUs with Iris Pro, and if I had to guess, the base-model 15" Pro will have a 6770HQ.

 

Even in games it should be relatively similar -- the HD 5200 is fairly close to a 650M, IIRC.

 

And the same will be true with the Skylake rMBPs -- as has been the case with basically all laptops since Sandy Bridge. Performance gains are basically non-existent.


Just now, djdwosk97 said:

And when Haswell launched, the 13"/15" Macs were basically the only laptops to use Iris/Iris Pro. The one or two others that used Haswell Iris/Iris Pro didn't enter the market until much later. There are currently three i7 SKUs with Iris Pro, and if I had to guess, the base-model 15" Pro will have a 6770HQ.

 

Did the base model include the GT650M? And the same will be true with the Skylake rMBPs -- as has been the case with basically all laptops since Sandy Bridge. Performance gains are basically non-existent.

Let's hope so. The P580 would be really, really nice in a quad-core MacBook Pro 15".

 

The base model for my generation had the GT650M, but that was the only generation to do that. Every generation after started offering a low-end 15" model without a dGPU. And the dGPU, while bad, was still better than the iGPU the MBPs came with. For my gen it was Ivy Bridge, so the GT650M was better. The next gen was the GT750M, still better. The current gen has the AMD chip, which is still better than the iGPU. The performance gains are there.


2 minutes ago, Pendragon said:

Let's hope so. The P580 would be really, really nice in a quad-core MacBook Pro 15".

 

The base model for my generation had the GT650M, but that was the only generation to do that. Every generation after started offering a low-end 15" model without a dGPU. And the dGPU, while bad, was still better than the iGPU the MBPs came with. For my gen it was Ivy Bridge, so the GT650M was better. The next gen was the GT750M, still better. The current gen has the AMD chip, which is still better than the iGPU. The performance gains are there.

I know. I opted for the low-end Mac because I didn't want the extra power draw of a dGPU; I wanted battery life more than gaming performance. Like I said, I'm expecting the higher-end model to have an M470, but I would still personally go the Iris Pro route.


2 hours ago, rf9661 said:

The new Razer Stealth was already released...

But I'll be using macOS most of the time for overall tasks, though I prefer Windows computers for gaming...

 

Unless Apple offers some sort of eGPU option via Thunderbolt 3 or at least adds a GeForce 1060 (I need that CUDA), I'm personally going to go with a Razer Blade or an equivalent laptop with Skylake + Pascal.

 

But for the use cases you've stated, you're probably fine with either option. 

 

If you really like macOS or have some Mac-only software you use, pray to the gods that Apple announces an rMBP refresh tomorrow (9/7), or at least in October. If you're comfortable moving to Windows 10 for all your work (not just gaming), it's a no-brainer: get the Razer Blade.


7 minutes ago, imaginfinity said:

I'm personally going to go with a Razer Blade or an equivalent laptop with Skylake + Pascal.

Have fun with a 2.4GHz CPU and an 800MHz GPU.

 

8 minutes ago, imaginfinity said:

Unless Apple offers some sort of eGPU option via Thunderbolt 3 or at least adds a GeForce 1060 (I need that CUDA)

You can use the Razer Core with just about anything that has TB3. Apple's new MBP will have TB3. You can probably jury-rig it to work with any TB3 eGPU enclosure.


13 minutes ago, Pendragon said:

Have fun with a 2.4GHz CPU and an 800MHz GPU.

Isn't the Razer Blade sporting a quad-core Core i7-6700HQ (2.6GHz / 3.5GHz) and a GTX 1060? Or am I missing something?

13 minutes ago, Pendragon said:

You can use the Razer Core with just about anything that has TB3. Apple's new MBP will have TB3. You can probably jury-rig it to work with any TB3 eGPU enclosure.

While it is true you can use the Core with ANY computer with TB3, there's a noticeable loss in performance in most cases, even if you spend your time jury-rigging it and trying out a bunch of BIOS firmwares and drivers. Personally, it would suck to drop so much on the enclosure plus a GPU only to lose 10-15% (or more) of the performance, which is what happens even in the ideal situation, i.e. the Core with an officially supported Razer Blade. Hopefully this changes in the future, but that is the current state.
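
A rough sense of where eGPU losses come from is the link budget: the Core's Thunderbolt 3 connection carries PCIe 3.0 x4, versus a desktop's x16 slot. A minimal sketch using public interface specs (assumptions, not measurements from this thread):

```python
# Back-of-the-envelope eGPU link budget over Thunderbolt 3.
# Interface specs are public numbers used as assumptions, not thread measurements.

PCIE3_LANE_GBPS = 8 * 128 / 130  # ~7.88 Gb/s usable per PCIe 3.0 lane (128b/130b)

links = {
    "Desktop slot (PCIe 3.0 x16)": 16 * PCIE3_LANE_GBPS,
    "TB3 eGPU (PCIe 3.0 x4)": 4 * PCIE3_LANE_GBPS,
}

for name, gbps in links.items():
    print(f"{name}: {gbps:.0f} Gb/s")

# The eGPU gets a quarter of the host link. How much that costs in frames
# depends on the game, which is why reported losses vary (10-25% in this thread).
```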

 

There are murmurs about an Apple 5K display with an AMD GPU in it. Hopefully there's a similar Nvidia offering, because I definitely need that CUDA.


1 minute ago, imaginfinity said:

snip

You are missing something. It's going to throttle hard, and it's going to run so hot that it will kill everything inside: the battery, the capacitors.

Here's a video of your new Razer Blade running a fresh, non-sustained session of Overwatch (a low-load game) at 95°C: https://www.youtube.com/watch?v=X_ZTSMTYdfU

Congrats, you would have bought a $2,000 furnace.

 

5 minutes ago, imaginfinity said:

While it is true you can use the Core with ANY computer with TB3, there's a noticeable loss in performance in most cases, even if you spend your time jury-rigging it and trying out a bunch of BIOS firmwares and drivers. Personally, it would suck to drop so much on the enclosure plus a GPU only to lose 10-15% (or more) of the performance, which is what happens even in the ideal situation, i.e. the Core with an officially supported Razer Blade. Hopefully this changes in the future, but that is the current state.

There's about a 25% performance loss with the Core itself on whatever laptop, compared to a desktop. The loss on other laptops such as the XPS 9550 has mostly been resolved, with the latest tests showing almost identical scores for the XPS 15 and the Razer Blade. You can find those results on NBR in a thread dedicated to investigating Razer Core performance.

 

 


@Pendragon do you think I should be a bit more technical in my explanations, or should I keep giving them middle-school explanations?


7 hours ago, Pendragon said:

You need to take a thermal density course too. Total energy generated directly corresponds to wattage, yes, that is true. However, how well that energy dissipates differs significantly with the die size of the chip; hence thermal density. It's MUCH harder to dissipate highly concentrated heat (Pascal) than less concentrated heat (Maxwell). Hence you see higher temps on all Pascal chips due to the architecture.

 

It's like sunlight. Unfocused, it doesn't do much. Put a magnifying glass in its path and, although the total energy passing through the area remains the same, the end result is burn marks on the ground.

Tons of Pascal laptop results have been posted on Notebookcheck; feel free to look them up. They show Pascal is a much more efficient architecture, consuming less power during gaming loads, more power at max load, and running at much higher temps.

You should have read the rest of my post, in which I mentioned that HAD NVidia shrunk the die, then the concentration of energy in a smaller space would have been applicable (which it appears to be in this case). Prior to doing some research while posting (no specifics on die size that I could find), I had assumed that NVidia kept the package and die virtually identical (basically what Intel does: smaller process = more transistors = faster chip), since all of these boutique producers were able to literally drop GTX 10xx chips into existing motherboard configs.

 

What surprises me is that NVidia shrunk the die by dropping the process from 28nm to 16nm, but also dropped a large number of transistors (5.2B to 4.4B). All they did was turn up the clock, which by itself doesn't generate any more heat, but the smaller process does lead to greater leakage current (and higher clock settings generally require more voltage).

 

So yes, they shrunk the die (same power in a smaller space; fully agree); my big surprise is that the increase in clock rate has led to such significant increases in performance.

 

 

Patrick            


26 minutes ago, GR8-Ride said:

since all of these boutique producers were able to literally drop GTX 10xx chips into existing motherboard configs.

Doesn't that make it usable? Most of the new Pascal laptops are operating in systems designed to dissipate 970Ms.

 

27 minutes ago, GR8-Ride said:

All they did was turn up the clock, which by itself doesn't generate any more heat

Yes, it does.

 

27 minutes ago, GR8-Ride said:

my big surprise is that the increase in clock rate has led to such significant increases in performance.

Not really. Clock for clock, Pascal and Maxwell are the SAME speed; Pascal is Maxwell on speed. If Pascal isn't cooled properly and is allowed to throttle down to Maxwell clocks, performance should be the same.


37 minutes ago, Pendragon said:

Doesn't that make it usable? Most of the new Pascal laptops are operating in systems designed to dissipate 970Ms.

Not an argument I was making.   I was simply pointing out that they kept the packaging the same for ease of transition onto existing platforms / motherboards.

37 minutes ago, Pendragon said:

Yes, it does.

For any given computational effort, it doesn't. Clock frequency has no impact on energy consumption by itself; energy consumption is proportional to switching activity, i.e. the workload over time. If I can calculate 10 items per second (a 10 Hz clock), then I consume XX amount of energy closing that CMOS gate 10 times in 1 second. If I can only calculate 1 item per second (a 1 Hz clock), then I still consume XX amount of energy closing that CMOS gate 10 times, just spread over 10 seconds. Same energy consumed, just concentrated into a shorter period of time in the faster case.

 

For a 970M vs. a 1060, if they are both performing the same number of calculations in any given time period, they will consume the same amount of power. If Pascal is asked to perform more calculations in the same time period (and assuming its CMOS gates require power equal to Maxwell's gates), then yes, it would consume more power. But it's also doing a greater workload (hence the whole "performance per watt" metric). The Pascal CMOS gates should require less power to actuate, however, given the much smaller process size. Gate actuation isn't a linear relationship, though, so a 43% drop in process size won't yield a 43% drop in gate actuation power requirements. And the drop to 16nm does result in higher leakage current, which, again, is where much of the excess GPU heat comes from.
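
For what it's worth, the standard first-order switching model makes the energy-versus-power distinction in the paragraph above concrete. A minimal sketch, with made-up capacitance and voltage values (none of these numbers come from the thread):

```python
# First-order CMOS switching sketch: E = C * V^2 per gate toggle.
# Capacitance and voltage are made-up illustrative values, not chip data.

C_FARADS = 1e-15   # effective load capacitance of one gate (assumed)
V_VOLTS = 1.0      # supply voltage (assumed)
TOGGLES = 10       # state changes needed for the computation

energy_j = TOGGLES * C_FARADS * V_VOLTS**2  # total energy; no frequency term

for clock_hz in (1.0, 10.0):
    runtime_s = TOGGLES / clock_hz
    avg_power_w = energy_j / runtime_s
    print(f"{clock_hz:4.0f} Hz: {energy_j:.1e} J total over {runtime_s:4.1f} s "
          f"= {avg_power_w:.1e} W average")

# The 10 toggles cost the same energy at either clock; the faster clock just
# spends it in a tenth of the time (higher average power while active).
# If the higher clock also needs a higher V, energy rises with V^2 -- which is
# the crux of the disagreement in this thread.
```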

37 minutes ago, Pendragon said:

Not really. Clock for clock, Pascal and Maxwell are the SAME speed; Pascal is Maxwell on speed. If Pascal isn't cooled properly and is allowed to throttle down to Maxwell clocks, performance should be the same.

My point was simply that I'm surprised all NVidia really did was turn up the clock speed, rather than introduce any radical new architecture or even increase the CUDA core count, not that Pascal = Maxwell at higher clock rates. The drop to a 16nm process should have allowed NVidia to really increase the number of CUDA cores and gain significant performance enhancements over Maxwell (which I think they did in the 1080, but the 1060 seems to be nothing more than a juiced Maxwell with fewer cores).


11 minutes ago, GR8-Ride said:

SNIP

You should really think of Pascal as Maxwell with a higher clock speed, a die shrink, and software support for DX12 and async compute, because that is really what it is. It's just Maxwell 2.0: "we fixed all the really bad flaws."


24 minutes ago, GR8-Ride said:

For any given computational effort, it doesn't. Clock frequency has no impact on energy consumption by itself; energy consumption is proportional to switching activity, i.e. the workload over time. If I can calculate 10 items per second (a 10 Hz clock), then I consume XX amount of energy closing that CMOS gate 10 times in 1 second. If I can only calculate 1 item per second (a 1 Hz clock), then I still consume XX amount of energy closing that CMOS gate 10 times, just spread over 10 seconds. Same energy consumed, just concentrated into a shorter period of time in the faster case.

 

For a 970M vs. a 1060, if they are both performing the same number of calculations in any given time period, they will consume the same amount of power. If Pascal is asked to perform more calculations in the same time period (and assuming its CMOS gates require power equal to Maxwell's gates), then yes, it would consume more power. But it's also doing a greater workload (hence the whole "performance per watt" metric). The Pascal CMOS gates should require less power to actuate, however, given the much smaller process size. Gate actuation isn't a linear relationship, though, so a 43% drop in process size won't yield a 43% drop in gate actuation power requirements. And the drop to 16nm does result in higher leakage current, which, again, is where much of the excess GPU heat comes from.

This is literally impossible. That would imply Fermi running x computations uses the same energy as Maxwell; the same logic would apply to Kepler vs. Maxwell. Even at the same 28nm process size, those two DO NOT use the same energy/heat for x computations.

 

Your point is also invalid for Pascal vs. Maxwell. Pascal computes more using less power. Pascal downclocked to Maxwell levels, for similar computational performance, also runs cooler and consumes less energy. Architecture plays a huge role in performance and energy consumption and is NOT standardized across all GPUs. So with a 970M vs. a 1060 performing equal loads, the 1060 would consume less energy with more apparent heat due to thermal density.


5 minutes ago, Pendragon said:

This is literally impossible. That would imply Fermi running x computations uses the same energy as Maxwell; the same logic would apply to Kepler vs. Maxwell. Even at the same 28nm process size, those two DO NOT use the same energy/heat for x computations.

Remember, it's X computations OVER TIME. Once the faster chip reaches idle, its power consumption drops dramatically. If the CMOS gates aren't changing state, then no power is consumed.

 

If we take the time factor out and just observe the power drawn while performing X computations, then the faster chip will draw more power.


4 hours ago, GR8-Ride said:

Remember, it's X computations OVER TIME. Once the faster chip reaches idle, its power consumption drops dramatically. If the CMOS gates aren't changing state, then no power is consumed.

It's still no. My example is X computations over time. If you scale Pascal and Maxwell clocks down to a FLOPS figure comparable to Fermi chips, there is ZERO way they have the same power consumption.

 

Comparable TFLOPS would mean they complete tasks in similar times with similar computations.

 

Energy used is mostly based on architecture, not on computation. Those ancient fridge-sized IBM computers used INSANE amounts of power to do computations.


18 hours ago, rf9661 said:

The new Razer Stealth was already released...

But I'll be using macOS most of the time for overall tasks, though I prefer Windows computers for gaming...

 

 
 

The new Razer Stealth shown off at PAX has not been released yet, to my knowledge.

 

If you know you're going to be using OSX most of the time, then you already know the answer to this thread. I have a MacBook, and I really think they're alright at best. The hardware is mid-range and the storage is very small; it's not worth it for the value or the OS.


23 minutes ago, shippage said:

The new Razer Stealth shown off at PAX has not been released yet, to my knowledge.

 

If you know you're going to be using OSX most of the time, then you already know the answer to this thread. I have a MacBook, and I really think they're alright at best. The hardware is mid-range and the storage is very small; it's not worth it for the value or the OS.

The hardware is top-of-the-range, and the storage is the same as in literally every other ultrabook on the market.*

 

*I'm assuming the new Macs will be running Skylake, since the current ones are still on Haswell, which makes a lot of these points invalid against Skylake competitors. I'm also ignoring the single-port MacBook.

CPU: Just as good as the competition 

GPU:

The only weakness Macs have is GPU performance. The 13" rMBP (with Iris graphics) is better than most other 13" ultrabooks, which are running the HD 540.

The 15" rMBP (with Iris Pro) is better than probably about half of 15" ultrabooks (that are running HD540~) and worse than the other half that are running a 960m/1060m. 

SSD: They basically all use equally good SSDs (although the Mac isn't using NVMe yet, it probably will be).

Display: Just as good -- 4K is useless on such small screens due to scaling issues.

Keyboard/Trackpad: Better than most

Build: At least as good as the competition 


53 minutes ago, Pendragon said:

It's still no. My example is X computations over time. If you scale Pascal and Maxwell clocks down to a FLOPS figure comparable to Fermi chips, there is ZERO way they have the same power consumption.

 

Comparable TFLOPS would mean they complete tasks in similar times with similar computations.

 

Energy used is mostly based on architecture, not on computation. Those ancient fridge-sized IBM computers used INSANE amounts of power to do computations.

Ohm's law still applies: V = I*R and P = V*I. Nowhere in either of those equations is there a variable for frequency.

 

For Fermi, the GTX 580M was a 40nm process with a TDP of 100W as well (similar to the GTX 970M, and likely similar to the 1060, though I've heard as low as 80W for that). Just like on the GTX 485M before it, the frequency was higher but power consumption remained the same.

 

As the process die shrinks, you require less voltage to manipulate the logic gates, and due to shorter interconnects, each circuit draws less current. You *can* require more voltage as frequency goes up IF the logic gates are unable to respond at the higher clock rate with a lower voltage (raising the voltage can mean faster logic-gate response and thus support for higher clock rates). However, IF the logic gates respond at the higher frequency and DO NOT require greater voltage, then power consumption remains the same or drops (a smaller process means the logic gates require less voltage to operate).
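
For reference, the textbook first-order CMOS power model both sides are circling here (standard scaling theory, not a formula from the thread) is:

```latex
% Dynamic switching power plus leakage (first-order CMOS model):
%   alpha = activity factor, C = switched capacitance,
%   V = supply voltage, f = clock frequency.
P_{\mathrm{total}} = \underbrace{\alpha\, C\, V^{2} f}_{\text{switching}}
                   + \underbrace{V\, I_{\mathrm{leak}}}_{\text{leakage}}
```

Energy per operation is roughly CV² regardless of f, which is the point GR8-Ride is making; the counterpoint is that sustaining a higher f usually demands a higher V, and that V² term, plus leakage, is where the extra heat comes from.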

 

Remember, there was a massive change in architecture (and a slight bump in frequency) between Fermi and Kepler: from 384 pipelines in Fermi to 1344 in Kepler, and from 1.95B transistors to 3.54B (and up to 5.2B in Maxwell).

 

Energy usage is based upon work, not architecture (P = W/t). You can have one core running at 2 GHz, or two cores running at 1 GHz each. As long as they're both able to get the same work done in the same time, the power consumption will be identical. CMOS gates that are not changing state consume no power.

 

Remember, YOU guys are the ones who are saying that Pascal is nothing more than a superclocked Maxwell. If a 1060 were clocked at the 970M's clock rate, then the 1060 would consume significantly less power than the 970M (due to its lower transistor count and smaller process size).

 

Now, due to the smaller die size, if it consumes power equivalent to Maxwell, then the thermal density is higher and you require more effective cooling.

 

Your IBM fridge computer is irrelevant. We're comparing CPU/GPU die shrinks and frequency increases; an old mainframe or mini had far different subsystems drawing significant power (multi-disk arrays, large vacuum-tube logic circuits, cooling systems for mechanical elements, etc.). Nowhere in this discussion have we been comparing the overall power draw of an entire system (NICs, motherboard, RAM, HDDs, SSDs, USB, et al.).

 

 

 

Patrick


24 minutes ago, GR8-Ride said:

Energy usage is based upon work, not architecture (P = W/t). You can have one core running at 2 GHz, or two cores running at 1 GHz each. As long as they're both able to get the same work done in the same time, the power consumption will be identical. CMOS gates that are not changing state consume no power.

Wat? Energy usage is based on architecture. I literally asked someone to benchmark two cards just to make a point.

Downclock the 980 to performance comparable to the 680 and you'll see what I mean. They do the same work because they output the same FLOPS, but the difference in architectures drives the 680's power consumption through the roof compared to the 980.

 


11 minutes ago, Pendragon said:

Wat? Energy usage is based on architecture. I literally asked someone to benchmark two cards just to make a point.

Downclock the 980 to performance comparable to the 680 and you'll see what I mean. They do the same work because they output the same FLOPS, but the difference in architectures drives the 680's power consumption through the roof compared to the 980.

 

The base clocks of the GTX 980 and GTX 680 are roughly 100 MHz apart, and both are 28nm processes. The boost clocks are within 150 MHz of each other (the 980 faster in both areas).

 

The 980 has 5.2B transistors; the 680 has 3.54B. But some of the power draw could be down to heat (i.e., the GPU card fan), as the 680 is a 294 mm² die, whereas the 980 is a 398 mm² die.

 

You'd have to downclock the 980 by about 35% to bring it to a GFLOPS level comparable to the 680 (4612 vs. 3090).
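
Those GFLOPS figures follow from the usual 2 × cores × clock estimate. A quick check, taking the public reference core counts and clocks as assumptions:

```python
# Theoretical FP32 throughput: 2 FLOPs per CUDA core per clock.
# Core counts and clocks are public reference specs, used here as assumptions.

def gflops(cuda_cores: int, clock_ghz: float) -> float:
    return 2 * cuda_cores * clock_ghz

gtx_980 = gflops(2048, 1.126)  # ~4612 GFLOPS
gtx_680 = gflops(1536, 1.006)  # ~3090 GFLOPS

print(f"GTX 980: {gtx_980:.0f} GFLOPS, GTX 680: {gtx_680:.0f} GFLOPS")
print(f"Downclock needed to match: {(1 - gtx_680 / gtx_980) * 100:.0f}%")  # ~33%
```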

 

Again, I'm talking about the power requirements of the actual GPU itself. Not the card, not the fan, not anything other than the physical GPU (no cooler) itself.

 

The architecture of the GPU can certainly determine how efficiently it performs work, so you're absolutely right about that one. I suspect we're talking at cross-purposes about what "work" means. I'm talking about work as the process of changing the state of logic gates and providing voltage across a circuit to a gate (or capacitor, et al.); i.e., the classical physics definition of work (again, W = F*d and P = W/t). Ten logic gates at 1 Hz or one logic gate at 10 Hz: same physical work, same time period, same power requirements.

 

You're referring to how many operations per second the unit can perform, and yes, that is highly dependent upon the architecture; no argument there. So, my apologies for not being clear on that one.

 

The question now becomes: is a 1060 at a base clock of 924 MHz (the GTX 970M's base clock) any faster than the 970M? And, given the die shrink to 16nm, does the 1060 consume less power than the 970M at 970M clock rates? (In theory it should, due to the smaller process size.)
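
Worth noting when framing that question: per public reference specs (an assumption here, not something stated in the thread), the mobile 1060 and the 970M both carry 1280 CUDA cores, so at matched clocks their theoretical FLOPS are identical and any real difference would come down to architecture, memory, and power:

```python
# At matched clocks the two chips tie on paper: both carry 1280 CUDA cores
# (public reference specs, assumed here), so theoretical FP32 FLOPS are equal.

def gflops(cuda_cores: int, clock_ghz: float) -> float:
    return 2 * cuda_cores * clock_ghz

CLOCK_GHZ = 0.924  # GTX 970M base clock

for name in ("GTX 970M (Maxwell)", "GTX 1060 mobile (Pascal)"):
    print(f"{name} @ 924 MHz: {gflops(1280, CLOCK_GHZ):.0f} GFLOPS")

# Any real gap at equal clocks would come from memory (faster GDDR5 on the 1060)
# and from power: the 16nm die should need less voltage per switch, at the cost
# of higher thermal density.
```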

 

 

 

Patrick


11 minutes ago, GR8-Ride said:

I'm talking about work as the process of changing the state of logic gates and providing voltage across a circuit to a gate (or capacitor, et al.); i.e., the classical physics definition of work (again, W = F*d and P = W/t). Ten logic gates at 1 Hz or one logic gate at 10 Hz: same physical work, same time period, same power requirements.

I have zero experience in this one, so I'll defer to you. I barely understand what that means.

 

11 minutes ago, GR8-Ride said:

Is a 1060 at a base clock of 924 MHz (the GTX 970M's base clock) any faster than the 970M? And, given the die shrink to 16nm, does the 1060 consume less power than the 970M at 970M clock rates? (In theory it should, due to the smaller process size.)

This is an interesting topic, and something I discussed once with D2: how far would you have to underclock to keep temps under control, and would you still see performance gains over the 970M? Personally, I would say no. A few people online, such as AdoredTV (https://www.youtube.com/watch?v=nDaekpMBYUA), have tried to emulate this testing, but he can do it on a desktop. It's quite a bit harder to do this kind of testing on laptops, since thermal design is so important in determining the FLOPS the 1060 can push out. Honestly, based on my own desktop testing and others', I'd say they would be within margin of error of each other, +/- 3%.

 

The 1060 definitely consumes less energy than the 970M, no contest. At gaming loads the 1060 uses less power than 970Ms; Notebookcheck has great measurements verifying this claim. Downclocked, it would consume even less. The real question is what I said earlier: how far would you have to downclock to get 970M performance, and is there room to go up? Of course, most vBIOSes are locked to all hell, so you can't even downclock if you wanted to; you'd have to rely on Boost... wait, I mean Throttle 3.0 to do it.

 

