
GR8-Ride

Member
  • Posts: 49
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    GR8-Ride got a reaction from JakeP in Wattage Monitor in Menu/Task Bar on M1 Mac?   
    It's iStat Menus 6.   I just installed it on my M1 MBP, and it can show the current wattage draw in the macOS menu bar.
  2. Like
    GR8-Ride got a reaction from Pendragon in Should I get the current Razer Blade 2016 (970m) or wait for the refresh 1060?   
    Hang on, we've had this argument already.   Time for you to go read a physics book.  Power = Heat....it's a very linear relationship (mathematically).
     
    Heat is a form of energy, measured in joules, and the relationship is 1 joule = 1 watt × 1 second (a watt is one joule per second).   As wattage goes up, so does the energy dissipated each second (ergo, heat).   As wattage goes down, so does the heat output.
     
    Temperature, on the other hand, is a measure of the average thermal energy per molecule in a substance, not the total.
     
    You are correct, however, that thermal density matters: a smaller object (Pascal GPU) will reach a higher TEMPERATURE than a larger object (Maxwell GPU), even if the same amount of HEAT is being generated by both.   Pascal has fewer molecules in it than Maxwell does, therefore the same HEAT results in different TEMPERATURES.   Pascal is also physically smaller (again, fewer molecules), so it has both less surface area to conduct heat away and less mass to store it.
     
    I get the point you're trying to make in that power consumption isn't a linear relationship to TEMPERATURE, but if you're going to tell people to read a physics book, you should understand the differences between HEAT and TEMPERATURE yourself.
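     
    For anyone who wants the textbook version of that distinction, the two relations look roughly like this (generic first-year physics, nothing specific to either GPU):
     
```latex
% Energy (heat) dissipated is power times time:
E = P \, t \qquad \text{(1 J = 1 W × 1 s, so a 100 W chip dumps 100 J of heat every second)}

% Temperature rise depends on how much material that heat is spread across
% (m is mass, c is specific heat capacity):
\Delta T = \frac{Q}{m \, c} \qquad \text{(the same heat } Q \text{ in a smaller mass } m \text{ gives a bigger } \Delta T \text{)}
```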
     
    @Jayvin
    As to the Razer Blade, it's in the nature of physics that a thin and light gaming laptop is going to run hot and/or loud (or both).   The 2016 Blade seems to be somewhat better than the 2015 Blade in terms of thermal throttling, but it's not exempt from it.   I can make my 2016 Blade throttle when playing games on the internal 970m and running it bone stock (either Balanced or Performance power profiles in Windows 10).   When it throttles it never drops below base clock (2.6 GHz) on the CPU, and I haven't seen the GPU throttle yet at all (I run mine in closed, clamshell mode, connected to a 4K monitor, playing at 1080p).
     
    I can avoid throttling by disabling Turbo Boost and just running the CPU at its base clock rate of 2.6 GHz.   No performance issues with Doom on the internal 970m, but it's also not pushing the CPU very hard to begin with.   Again, I've never been able to get my 970m to throttle.
     
    I also run a Razer Core with a GTX 1080 in it, and it never throttles at all under that configuration.   I was actually trying the other night to see how far I could push it (2200 MHz boost clock and 11010 MHz memory clock, successfully), and whether I could get the system to throttle running the external GPU (a big part of why the Blade throttles is the shared heat-pipe between CPU and GPU, in addition to it simply being thin).   After several hours of running Heaven at 4K and various 3DMark benchmarks to find my maximum GTX 1080 overclock, I was never able to get my CPU to throttle.   It does get into the low to mid 90s, however (peak was 93C).   My external GTX 1080 barely gets into the low 70s (C).   As a desktop card, it's pretty darn impressive.
     
    Just for giggles, I threw a cheap ($20) laptop cooler underneath it (again, still in clamshell mode), and my CPU temps then peaked at 87C running that same battery of tests.   I actually ran the laptop cooler tests (with and without it, several times) just to validate that it was actually doing something....over my career I've thought of most laptop coolers as being completely useless.
     
    As to it being loud, I don't notice it at all.   But fan noise is highly subjective, and in my setup, it's all in a closed cabinet in my desk with monitor and keyboard up top.  So fan noise isn't an issue for me.
     
    So honestly, yes, the Blade does run hot with the 970m in it.   To Dackzy's point, I can only imagine that the GTX 1060 version of the Blade would likely run hotter, given that it's a physically smaller die.   It's the first laptop I've ever owned on which I've demonstrated that a laptop cooler actually made a difference.   My general view has always been that laptop coolers are snake oil; on the Blade one makes a difference, which is not a good testament to the Blade's cooling system.
     
    Look, the 2016 Blade isn't a bad laptop compared to other thin and light models; I happen to love mine.   But I'm not a college kid, and money is no object for me.   With the Blade, you are paying for the looks and the all-aluminum construction.   From a pure performance-per-dollar standpoint, there are better deals available (MSI GS63VR or Asus GL502VS).   If you're not gaming much on the go and have a desktop rig at home, then go with something like the Asus UX501 that @don_svetlio mentioned.   It's actually the laptop that I'm suggesting my wife should look at.
     
     
     
    Patrick
     
  3. Agree
    GR8-Ride got a reaction from Pendragon in New Macbook Pro Retina vs. Windows Laptop   
    Honestly, the notion of buying a MacBook Pro, paying the Apple tax, and then running Windows 10 on it seems pointless at best.   And trust me, I've worked with some very smart engineers in the past who have done just that....picked up a retina MacBook Pro, and then run Windows 7 as their sole OS on it.   I get the industrial design element....I've had several Macs in my life and still do.   But a fully loaded rMBP is over $4K CAD, and a top-notch Windows 10 laptop from Dell or Lenovo will easily run $1,000 less.
     
    I ran Windows 10 in Bootcamp on my rMBP (late 2014 model, fully loaded with the GT750m), and battery life in Windows was horrendous.   Battery life wasn't even great in OSX, but that's because I run Pathfinder as my default desktop / Finder app, which forced the dGPU to run instead of the integrated graphics.
     
    Look, I'm generally an Apple fan, but quite honestly, the Apple tax is right up there, and unless there is a strongly compelling reason to go Apple (and there are some...), you can get a better deal on various Windows machines.
     
    In Apple's favour, the support network is fantastic.   I smoked a power supply in Bangkok, Thailand on a business trip, and I walked into an authorized Apple service center and walked out with a brand new power supply 15 minutes later.   Completely free of charge.   That was 10+ years ago, and their service and support has been good to me ever since.
     
    Also, they generally just work, but I will say, with Windows 10, it's been remarkably stable for me as well.   I honestly think of OS X and Windows 10 as a wash when it comes to stability these days.
     
    The only other real advantage for Apple is the *nix sub-system.   If you REALLY need a Unix-like subsystem, then Apple is the way to go.   That carried some "geek cred" a few years back, but these days it's not worth much.   Especially since it's so easy to pick up a Windows machine and dual-boot into Ubuntu or some other flavour of Linux.
     
    As to the Surface Book, I'd stay away from it.   Great concept....the execution is lousy.   I'd go with the Surface Pro 4 instead.   In tablet mode, you lose all of the ports, and most of the battery life on the Surface Book.   And when it's connected to the keyboard base, the screen is so wobbly that pen input and touch input become a jiggly nightmare (try it for yourself at any MS store).   The SP4, on the other hand, has the kickstand which makes it incredibly stable for touch / pen based input.
     
     
     
    Patrick
  4. Agree
    GR8-Ride got a reaction from Pendragon in General Thoughts About Pascal GTX 1060 (Laptop)   
    They're guessing that it has a slightly lower TDP, which may or may not be accurate (it DOES have fewer transistors, but also a much, much higher clock speed).
     
    It does, however, have a smaller die, which means higher temperatures, even if both chips generate the same amount of heat.   This is especially true if the die is significantly smaller, which I suspect it is, given the process shrink (28nm to 16nm) and the drop in transistor count from 5.2B to 4.4B (GTX 970M/980M vs GTX 1060 laptop).
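     
    As a rough back-of-the-envelope illustration of that thermal-density point (using approximate public die sizes of ~398 mm² for GM204 and ~200 mm² for GP106, and an assumed 80 W of dissipation for both, purely for the sake of the arithmetic):
     
```latex
\frac{80\ \text{W}}{398\ \text{mm}^2} \approx 0.20\ \text{W/mm}^2
\qquad \text{vs.} \qquad
\frac{80\ \text{W}}{200\ \text{mm}^2} = 0.40\ \text{W/mm}^2
```
     
    Same heat, roughly half the area to push it through, so the smaller die runs hotter for a given cooler.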
     
     
    Patrick
  5. Informative
    GR8-Ride got a reaction from FuzzyYellow in Car Enthusiast Club [Now Motorcycle friendly!] - First thread to 150k! ¯\_(ツ)_/¯   
    Well, not exactly that simple, but yes, more cylinders generally does = more torque (and more parasitic drag as well).
     
    The larger the bore, the greater the ability for the engine to breathe, as well as the ability to use larger valves (again, better breathing....thus more power).
     
    The longer the stroke, the greater the mechanical advantage for the rod pushing down on the crankshaft (in theory, more torque).
     
    However, like all things when it comes to building an engine, there are tradeoffs.   A longer stroke means greater parasitic losses (ring drag) as well, and at some point in the stroke the combustion is no longer producing significant force on the piston / rod / crankshaft.   So there is a limit to how long the stroke can (or should) be.
     
    For street use, a square engine is generally the ideal, though modern forced induction changes the design criteria quite a bit.
     
    For a race engine, I'd probably take a bigger bore over a longer stroke any day (bigger bore + shorter stroke will likely give me a higher and broader RPM range to operate in).
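     
    To put rough numbers on the bore vs. stroke tradeoff, here's a quick sketch (the dimensions are made-up examples, not any particular engine) showing two layouts landing at the same displacement:
     
```python
from math import pi

def displacement_litres(bore_mm: float, stroke_mm: float, cylinders: int) -> float:
    """Swept volume: V = (pi/4) * bore^2 * stroke * number of cylinders."""
    cc = (pi / 4) * bore_mm ** 2 * stroke_mm * cylinders / 1000.0  # mm^3 -> cc
    return cc / 1000.0  # cc -> litres

# Two hypothetical 3.0 L inline-sixes:
square      = displacement_litres(bore_mm=86.0, stroke_mm=86.0, cylinders=6)  # "square"
long_stroke = displacement_litres(bore_mm=80.0, stroke_mm=99.5, cylinders=6)  # undersquare

print(f"square:      {square:.2f} L")       # ~3.00 L
print(f"long-stroke: {long_stroke:.2f} L")  # ~3.00 L

# Same displacement, but the long-stroke layout has less bore area for valves
# and a higher mean piston speed at any given RPM, which is why the
# big-bore / short-stroke version revs higher and breathes better up top.
```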
     
     
    Patrick
  6. Informative
    GR8-Ride got a reaction from DutchTexan in Are you breaking up with Apple?   
    I want to like Windows Phone....but the overall ecosystem is just too small.   Might not stop me from trying it, however....just not sure if I can commit to Windows Phone as my sole platform.
     
    My big problem with Samsung is that they effectively abandon their phone models within a year of releasing them.   So updates from KitKat to Lollipop to Marshmallow to Nougat are basically non-existent (either carrier dependent, or so far behind that you're either vulnerable security-wise, or simply missing compelling new OS features).   It's not like the hardware is incapable of running 5.x or 6.x or 7.x....just that Samsung virtually abandons the phone after 12 months (or sooner).   At least with the iPhone, you know it will get updates for the next 2-4 years.   In theory, Windows Phone has been doing the same.   Google's Nexus phones are good this way, but again, you lose some common Android features (ie, expandable memory via micro-SDXC cards, for one).
     
    I have an iPad Air...that's my standard bed / couch surfing device...works pretty well for that.   Haven't found an Android tablet that works as well as the iPad overall, though I had a Surface Pro 4 that was pretty good.   Wasn't great for ebooks, but surfing and YT it worked at least as well as the iPad (although a bit heavier).
     
    I've gone back to Windows 10 laptops as well, though I still  have a Macbook Air for work.   Personally, I'll probably switch to a Windows 10 laptop whenever my HW refresh date comes through for work.   It's not that OS X has let me down in any way....just that Windows 10 has become very, very good, compared to the Win XP / Win 7 / 8 experiences of old.
     
    I was always a pretty big Apple fan, but actually typing all of this out, I have come to realize that I seem to be moving away from Apple.   And honestly, I'm okay with that.
     
     
     
    Patrick
  7. Informative
    GR8-Ride got a reaction from thething55 in Confirming the difference between MIMO and channel width   
    MIMO and channel bandwidth are two very separate items.   Perhaps the best way to understand the differences between MIMO and channel bandwidth is to refer back to the original technical definitions of MIMO (two types).
     
    The first is Space-Time Block Coding.   This is where duplicate data streams are sent from multiple antennas, with the intent of ensuring that at least one clean signal reaches the end device (or is received from the end device).   The purpose behind STBC is to improve signal to noise ratio, with the goal of improving the modulation and coding scheme for the WiFi connection.    This is one form of MIMO, and it uses the same frequency and channel size for each antenna.   STBC works even if the client device only has a single RF antenna.
     
    The second type is Spatial Multiplexing.   This is where unique data streams are sent from multiple antennas, with the intent of doubling (or tripling) effective throughput.   This is done by carefully co-ordinating the received signals and reassembling the data stream.   Antenna spacing is critical, as the distance between antennas needs to be a multiple of the wavelength in order for it to be effective.   Again, SM uses the same frequency and channel bandwidth for each antenna.
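     
    If a concrete sketch helps, here's a toy numpy example of the STBC idea (the classic 2x1 Alamouti code, which is the basis of the STBC used in 802.11n; the symbols and channel values below are made up purely for illustration):
     
```python
import numpy as np

# Two data symbols and a (made-up) flat-fading channel from each
# transmit antenna to a single receive antenna.
s1, s2 = 1 + 1j, -1 + 1j
h1, h2 = 0.8 * np.exp(1j * 0.3), 0.4 * np.exp(-1j * 1.1)

# Alamouti STBC: the SAME two symbols are spread over two antennas and
# two time slots (the "duplicate stream" idea -- diversity, not throughput).
#   slot 1: antenna 1 sends s1,          antenna 2 sends s2
#   slot 2: antenna 1 sends -conj(s2),   antenna 2 sends conj(s1)
r1 = h1 * s1 + h2 * s2                      # received in slot 1 (noise omitted)
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)   # received in slot 2

# Receiver combining recovers each symbol scaled by |h1|^2 + |h2|^2,
# which is exactly the SNR / diversity gain STBC is after.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
print(np.allclose(s1_hat, s1), np.allclose(s2_hat, s2))  # True True

# Spatial multiplexing would instead send two DIFFERENT symbols in the same
# slot (one per antenna) and solve for both at the receiver -- doubling
# throughput rather than improving SNR.
```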
     
    Hopefully this helps.
     
     
     
    Patrick
  8. Informative
    GR8-Ride got a reaction from chaozbandit in Car Enthusiast Club [Now Motorcycle friendly!] - First thread to 150k! ¯\_(ツ)_/¯   
    I used to run Stoptechs on my E36 racecar....they were always pretty reliable brakes for me.
     
    I've been thinking about a set for my M4, but I haven't found the stock brakes to be too bad thus far.   I think the stock rotors for the M4 might be a little too thin for a long lifespan.
     
     
    Patrick
  9. Agree
    GR8-Ride got a reaction from leadeater in How do ISPs limit speed?   
    I think it's important to note, however, that very little pure "Ethernet" traffic exists on your ISP's network, or the internet in general.   The backbone is primarily MPLS, which has multiple levels of QoS / CoS / ToS capabilities in place.
     
    The idea behind Frame Relay was to implement packet-switched data transport capabilities on top of a circuit-switched network.   The problem is, traditional circuit-switched networks are not terribly resilient (ie, a break in the path results in a network-down condition), nor are they terribly adaptable to changing traffic flows.   Despite all of its QoS capabilities, Frame Relay was never very good at transporting voice or video streams.
     
    ATM was an attempt to fix much of what was wrong with Frame Relay, while also introducing network re-routing capabilities (via SVCs and Soft PVCs, for example) and packet based switching (vs circuit based, such as Frame Relay).
     
    Frame Relay still relies heavily upon the establishment of PVCs (permanent virtual circuits).   So while the end-user might only get charged for the number of frames actually transported, the underlying link itself is essentially still a permanent circuit.
     
    MPLS VPNs can be configured with QoS parameters very similar to those of ATM or Frame Relay (ie, guaranteed pipe method, similar to a PVC).   The difference is, with MPLS VPNs you can have multiple paths to the destination, thus allowing consistent throughput even in the event of a network outage affecting a single path.
     
     
     
    Patrick
  10. Like
    GR8-Ride got a reaction from dalekphalm in Canada (as some of you know it, great white north) Internet.   
    I pay $115 CAD ($87 USD) per month for 250 Mbps down / 20 Mbps up, with no bandwidth caps in Ontario with Cogeco.   No complaints from my side....they've been an excellent ISP.
  11. Funny
    GR8-Ride got a reaction from Dackzy in New Razer Blade or MacBook Pro   
    You keep saying that, and it's time that somebody stepped up and defended Razer to some extent.   The loudest voices on here have a hate for Razer, and it's time to step up and call bullshit on some of the hate.
     
    Yes, the Razer Blade *can* throttle when playing demanding games, and the design of using a shared heat-pipe between the CPU and GPU means that both contribute heat into it.   From a design perspective, it's far less than ideal, but it's also not an uncommon practice if you want a thin and light laptop (see the latest MSI GS63VR for a comparison, or the MSI GS60 before it).   Every thin and light "gaming" laptop will throttle to some extent.   The Blade throttles FAR, FAR less than I expected when I purchased it, given the noise on here and Reddit.
     
    The thermal throttling on the Blade can be mitigated by either turning off Turbo boost, or by undervolting.   Again, if someone wants a thin and light laptop, heat and fan noise are the tradeoffs that come with it.  By the way, I run my Blade in closed, clamshell mode, driving an external 4K monitor.   I can play Doom for 2+ hours, and completely stock, only 1 or 2 cores will thermal throttle.   GPU temps don't go above 87C.   With Turbo boost off, CPU temps don't peak above 70C, and GPU peaks at 83C.
     
    As to build quality, I've had no issues with my Blade, and it's been as solid and as reliable as the various Macs I've had over the past 10 years.
     
    The keyboard is at least equivalent to the keyboard on my MacBook Air (sitting right in front of me), and fairly close to the keyboard on my 15" rMBP (the rMBP is still a bit better).   No laptop keyboard is as good as the old ThinkPad T40p I had 10 years ago, or the ThinkPad keyboards in general (I liked the keyboard on my X1 Carbon...except for the stupid LCD 'virtual' function keys), but I have no complaints about the Blade's keyboard.
     
    Trackpad is okay....close to the MBA / rMBP for feel, but far, far better than I've had on any other Windows based laptop recently (including my MSI GS60 that was a horrendous piece of shit).   Trackpad buttons feel cheap though...honestly I'd still prefer a Trackpoint if one was available.
     
    The Razer Blade's industrial design is the equivalent of Apple's....all aluminum, minimal (if any) flex in the chassis (again, unlike my GS60, or the Gigabyte P35xV5 I looked at).
     
    People buy laptops for a variety of purposes, and a variety of personal reasons.   If a person wants a dedicated gaming machine as their primary purpose, then a desktop or a desktop replacement machine are far better choices than ANY thin and light laptop, including the Blade.
     
    If a person wants a laptop that can travel very well (especially for those of us who travel extensively for business, and have to use a corporate machine for work activity), then something thin and light becomes a requirement.   This is where my Blade fits my own, perhaps unique, use-case.   It's lightweight, and I can readily carry it and my work MBA in a briefcase and onto a plane.   I can play games on it while in the hotel, and also use it for my own media / entertainment.
     
    Industrial design is also an important factor for a lot of people.   There is a reason why people would rather drive a BMW X5 instead of a Citroen Picasso, even though the Citroen may be a more practical vehicle, and just as fast on city streets (ie, speed limit is the same for both).
     
    Flame suit on.   Let the hate begin.
     
     
     
    Patrick
  12. Agree
    GR8-Ride got a reaction from 8uhbbhu8 in gigabit switch and my download issues   
    The link between your Bell Aliant router and the switch is probably negotiated to 100 Mbps, vs 1,000 Mbps.   This is why you're getting 300+ Mbps on WiFi (direct to the Aliant router), and only 95 Mbps via the TP-Link switch.
     
    What is the model number of your gateway from Aliant?    Based on your testing above, I'm guessing that the auto-negotiation between the Aliant router and your TP-Link switch isn't working well, so the best link it can negotiate is 100 Mbps vs 1 Gbps, or that it's getting a half-duplex link at 1.0 Gbps (more likely it's FD at 100 Mbps).
     
    There is no configuration GUI for the TP-Link switch (it's basically a single config box...can't do much with it), so I'm guessing that it's just not auto-negotiating with the Aliant modem/router very well.   You can try a couple of things...a different port on both the switch and the router, and/or try a different cable between the switch and router.
     
    There is an easy way to determine if it's a bad port, or simply an incompatibility between the boxes.   First, plug your PC into a switch port directly on the router.   Check on your laptop / PC to ensure that you're getting 1.0 Gbps (as you've shown above).   Aliant router port #1 (or 2, or 3, or 4) --> PC ethernet port
     
    Then, with that same cable, move the cable from your Aliant router to any port on the TP-Link switch (still connected to your PC/laptop on the other end).   Check to see if it's establishing a link at 1.0 Gbps as well (on your PC).   TP-Link switch port #1 (or 2, or 3, or 4) --> PC ethernet port.
     
    If both of those tests show a 1.0 Gbps connection to your PC, then each port is theoretically fine.   Move the same cable between your Aliant router and TP-Link switch.   Aliant router port #1 --> TP-Link switch port #1.
     
    Plug your laptop into a different port, and see what happens.
     
    Auto-negotiation isn't a perfect standard within the 802.3 space...
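     
    If you want to confirm from the PC side what actually got negotiated at each step, a quick sketch like this (using the psutil library; run it on the PC you're plugging in) will print the link speed and duplex it sees:
     
```python
import psutil

# Print link state, negotiated speed and duplex for every network adapter.
# psutil reports speed in Mbit/s: 100 would confirm the 100 Mbps fallback
# suspected above, 1000 means the gigabit link negotiated properly.
duplex_names = {psutil.NIC_DUPLEX_FULL: "full",
                psutil.NIC_DUPLEX_HALF: "half",
                psutil.NIC_DUPLEX_UNKNOWN: "unknown"}

for name, stats in psutil.net_if_stats().items():
    print(f"{name:20s} up={stats.isup} speed={stats.speed} Mbps "
          f"duplex={duplex_names[stats.duplex]}")
```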
     
     
     
    Patrick
  13. Like
    GR8-Ride got a reaction from Pendragon in RAzer blade 2016 and razer in general   
    I'll debate this to some extent.   I think the issue with Razer (and in my unfortunate experience, MSI also) is that it really depends on how lucky you are.
     
    I had a GS60 (4K) before I bought my 2016 Blade, and I had some build quality issues with it (tons of flex in the display and keyboard surround, along with failed keys).  It also thermal throttled when playing Doom on it (CPU spikes at 100C).
     
    My 2016 Blade has none of the feared build quality issues, and while it will thermal throttle playing Doom as well, I can mitigate this by disabling Intel Turbo Boost and keep both CPU (74C) and GPU (83C) temps under control.   This is in clamshell mode, driving an external 4K display (Doom at 1080p, 60+ FPS).
     
    So I set up a custom power profile, and have no issues (no lag due to CPU restrictions) and no throttling of either the CPU or GPU.
     
    So I've been lucky in that I've gotten a cherry model.  Maybe this means that Razer and their OEM have resolved their QC issues, or maybe I'm just extra lucky.
     
     
     
    Patrick
  14. Like
    GR8-Ride got a reaction from benga in Do different Wireless routers really have different range(distance)?   
    So there are a few basic elements that determine the range and throughput of a wireless network.   The most important thing to remember is that it's a bi-directional link, so even if you're running 4W of EIRP (1W from the radio and 6 dBi from the antenna), the uplink from your laptop, tablet, smartphone or other WiFi enabled device is likely to be much lower.   The WiFi link is only as good as its upstream connection; if the client device can't respond to beacons from the base-station, it's a useless signal.   4W of EIRP is the FCC max for WiFi devices in the US (2.4 GHz), so going to higher gain antennas won't necessarily make your downstream connection any better (due to a drop in output power).
     
    For 5 GHz, the EIRP limit ranges from 40 mW (UNII low band) up to 800 mW (UNII-3 upper band).   Both are still significantly less than the 4W EIRP of 2.4 GHz.   So for 5 GHz, you have two things going against it....one, the lower mandated output power, and two, the poorer propagation of 5 GHz vs 2.4 GHz.
     
    Also, there is no free lunch in wireless networking.   A higher gain antenna will change the antenna pattern somewhat, so moving to a higher gain omni-directional antenna may only serve to limit the vertical range of your WiFi network, and not help your situation at all.   Going even higher gain would likely result in the move to a directional antenna, vs an omni.
     
    Channel size has a distinct effect on network throughput and range as well.   One thing to note though, the larger the channel size, the weaker the overall gain (since the same physical output power has to be spread across more spectrum).   So in the 802.11AC realm, an 80 MHz channel literally has half the radiated power (per subcarrier) of a 40 MHz channel, which again has half the power of a 20 MHz channel.   So the higher the throughput, the lower the overall power per subcarrier.  So an 80 MHz channel could result in lower WiFi coverage vs a 20 MHz channel.
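     
    For anyone who wants the arithmetic behind those two points (standard dB relations, using the 1 W radio / 6 dBi antenna example from above):
     
```latex
\text{EIRP} = P_{\text{tx}} + G_{\text{ant}} = 30\ \text{dBm} + 6\ \text{dBi} = 36\ \text{dBm} \approx 4\ \text{W}

% Doubling the channel width spreads the same transmit power across twice
% as many subcarriers:
10 \log_{10}(2) \approx 3\ \text{dB less power per subcarrier, per doubling}
```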
     
    So, two technologies come to the rescue.   The first is MIMO (multiple-input, multiple-output), which is broken down into two main areas: space-time block coding, and spatial multiplexing.   STBC is where duplicate signals are sent and received on multiple antennas, and the best signal is kept.   In theory, this should improve the range of your WiFi network.
     
    Spatial multiplexing is when discrete signals are sent on each antenna, and in effect you get improved throughput (since instead of sending one data packet over two antennas, you send two packets, each via its own antenna, theoretically doubling the throughput).   You'll see terms like 3x3:3 or 4x4:4 MIMO (mostly for Wave 2 802.11ac products).
     
    The second technology is beamforming, which is when a WiFi basestation adjusts the phase of the transmissions from multiple antennas to effectively combine the signals in a particular direction (hopefully in the direction of your laptop).   Beamforming, like STBC, is designed to improve range.   Most of the latest Broadcom-based WiFi routers (Linksys, Netgear et al) have support for MIMO and beamforming.
     
    There are other elements that we could discuss, such as modulation and coding schemes (affects throughput) and interference (affects everything), but that's probably enough of me rambling on.
     
    For recommendations, I recently picked up a Linksys EA9500 (8 GigE ports, 8 antennas, tri-band WiFi router) to cover my entire house.   It sits in my office upstairs, with a WiFi connection to my media server two floors below in my basement.   I can sustain file transfers of over 300 Mbps from an 802.11ac client in my basement.   Pretty much any of the latest Broadcom-based 802.11ac Wave 2 products will perform the same way (I bought the Linksys because I needed the 8 GigE ports more than anything).
     
     
     
    Patrick
     
     
  15. Informative
    GR8-Ride got a reaction from matthew81202 in Can a Surface really replace a laptop?   
    I'm not so sure it matters whether you'd use the Surface Pro 4 as a tablet more than a laptop; in my own case, I use my Surface Pro 4 more as a laptop than as a pure tablet.   As to power, I went high-end, with the i7 / 16 GB / 512 GB model, but this is dependent upon your own budget and requirements.   I went in with the idea that mine was going to replace my personal laptop, which it did quite successfully (my personal laptop was a 15" rMBP / i7 / 1TB / 16 GB model).
     
    I rarely use mine on my lap, but when I do, it functions relatively well.   When I do use it in tablet mode (lying on the sofa, or in bed) it's far better than any laptop I've ever owned.
     
    For pen usage, I actually prefer the SP4 vs the Surface Book, as I found the Surface Book to have a fairly wobbly screen for either pen or touch input when it's sitting on a desk.   The SP4 was much better braced for finger or pen input with the kickstand out.
     
    I was reluctant to sell my rMBP prior to getting fully comfortable with the SP4, as I was both very used to OSX, and unsure of how well a tablet / laptop hybrid could function as my primary machine.  After the last 7 months, I can say that the SP4 has done an excellent job as my primary machine.
     
     
    Patrick