Posts posted by WMGroomAK

  1. 3 hours ago, Taf the Ghost said:

    At the end of the video, Nvidia apparently sent out a late email saying not to do tear downs. However, Hardware Unboxed, being Aussies, has their video set to auto post most of the time. Looks like it went live around Midnight in Australia with a tear down of the PCB, though not much analysis of the parts. 8+2 VRM setup, though.

    Well, they decided to post their tear-down video...

  2. 1 hour ago, VegetableStu said:

    Not sure if I'd look forward to a Turing Titan, being a potential successor to the $3000 Volta Titan V o_o

    It appears that the full TU102 is in the Quadro RTX 6000, which is being priced at $6,300.00... 

     

    The main differences are that the Quadro card has:

    • 2 additional TPCs,
    • 4 additional SMs,
    • 256 additional CUDA cores,
    • 32 additional Tensor cores,
    • 4 additional RT cores,
    • a higher base clock of 1455 MHz,
    • a higher boost clock of 1770 MHz,
    • more memory at 24 GB,
    • 8 additional ROPs &
    • 14 additional texture units.

    Of course, this is a Quadro card, so that alone probably accounts for about $4,000.00 of the difference, but it would be interesting to know whether the 2080 Ti would perform reasonably similarly in some of the Quadro workloads...  

  3. 5 minutes ago, VegetableStu said:

    ahh yes, the "dark scenes are easier to CGI" problem ._.

     

    1 hour ago, asus killer said:

    +1

     

    they lost me too, so they turn DLSS on and get better framerates??!!

    Someone over on HardOCP posted a link to nVidia's Whitepaper on the Turing architecture so I'll link it here...

     

    https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf

     

    The section on DLSS is pages 35 thru 37 (42 thru 44 in the PDF); it also details more of the other shading and infilling techniques they are implementing, some of which look familiar from the image testing they've released online over the last several years.

  4. 57 minutes ago, VegetableStu said:

    imagine rendering 1440p but only drawing 1080p and estimating the rest

    (still want comparison pics though)

     

    57 minutes ago, Taf the Ghost said:

    DLSS should replace the normal AA functions, offloading it to otherwise unused hardware on the GPU. That's where the FPS jump comes from, as the normal rendering doesn't have to do AA + other functions.

     

    1 hour ago, asus killer said:

    +1

     

    they lost me too, so they turn DLSS on and get better framerates??!!

    So PCPer has an article out detailing the Turing architecture in the same vein as the Gamers Nexus video...  Within the article is a small section detailing DLSS, although still without the detail I would like on its black-box workings.

     

    https://www.pcper.com/reviews/Graphics-Cards/Architecture-NVIDIAs-RTX-GPUs-Turing-Explored/RTX-Features-Ray-Tracing-and-DL

    Quote

    Using the Tensor cores found in Turing for its inference capabilities, DLSS is a technology that aims to apply deep learning techniques to accelerate and increase the quality of post-processed anti-aliasing.

    To implement DLSS, NVIDIA takes an early build of the given game (which they generally receive anyway for driver optimization) and generates a series of "ground truth" images rendered through 64x Super Sampling.

    [image: turing-dlss2]

    These extremely high-resolution images are then used to train a neural network which is capable of producing output images that NVIDIA claims are nearly identical to the original 64x Super-Sampled source material.

    In this current stage, the neural network model needs to be trained for each specific game title. In the future, NVIDIA might be able to come up with more generic models that can be applied to particular genres of games, or different game engines, but at this point, it requires hand-tuning from NVIDIA.

    Regardless, NVIDIA claims that implementing DLSS will cost game developers nothing and that they are committed to scaling their workflow and supercomputers used for training as far as necessary to meet demand.

    This neural network model is then distributed via GeForce Experience to end users who have a GPU with tensor cores and have the given game installed. This distribution model is vital as it allows NVIDIA to silently update the model in the background as they come up with improvements as they get more experience and come up with better techniques.

    [image: turing-dlss3]

    Performance wise, NVIDIA is comparing the performance hit of enabling DLSS to their most recent push of anti-aliasing technology, TAA. While TAA is already a reasonably lightweight method, NVIDIA is claiming performance benefits of 2X when comparing DLSS to TAA. Also, DLSS is temporally stable, unlike TAA which can help prevent some fast moving details becoming blurred on screen.

    While DLSS as a whole seems like a worthwhile initiative, with claims of 64x SSAA quality with very little of a performance hit, the more controversial part is NVIDIA pushing DLSS as a significant performance differentiator from previous GPUs. 

     

    Kind of wish they were rendering scenes with more contrast via this method...  Maybe something not so dark on a black slide?
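    Since the whitepaper and the PCPer piece both describe DLSS training as teaching a network to reproduce 64x supersampled "ground truth" frames, here is a toy sketch of what that kind of training loop could look like.  This is purely illustrative: nVidia has not published its network, loss, or training setup, so every layer and hyperparameter below is an assumption on my part.

    # Toy illustration of the "match the 64x SSAA reference" training idea.
    # Hypothetical: Nvidia's real network, loss and data pipeline are not public.
    import torch
    import torch.nn as nn

    class ToyDLSSNet(nn.Module):
        """Tiny conv net standing in for whatever Nvidia actually trains."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, frame):
            # Residual connection: the net only has to learn the "missing" detail.
            return frame + self.body(frame)

    def train_step(model, optimizer, rendered, ground_truth_64x_ssaa):
        """One step of pushing the model's output toward the 64x SSAA reference."""
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(rendered), ground_truth_64x_ssaa)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Random tensors stand in for real game frames in this sketch.
    model = ToyDLSSNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    rendered = torch.rand(4, 3, 256, 256)       # normally rendered frames
    reference = torch.rand(4, 3, 256, 256)      # 64x SSAA "ground truth" frames
    print(train_step(model, opt, rendered, reference))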

  5. 3 hours ago, VegetableStu said:

    I'm not sure how to write this into tech news, so for everyone's info quickly....

    This is actually the best 'unboxing day' video that I've seen yet!  It would be great to have @LinusTech and GamersNexus do a short series of videos (maybe Techquickies?) going over the details of nVidia's RTX, ray tracing (its history and modern implementation) and DLSS.  I'll try to see if I can type up some of the relevant points from this into the thread...

  6. Just now, Lathlaer said:

    But from what I have been seeing recently it almost sounds like enabling DLSS actually gives you MORE FPS than playing without any form of AA which would be freaking miraculous.

    My basic understanding of DLSS is that it is an upscaling technology: you render at something like 1080p or 1440p (and at those framerates) and then upscale the result to 4K resolution.
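    To put rough numbers on why that helps framerates (my own back-of-the-envelope math, not anything from nVidia): rendering at a lower resolution means shading far fewer pixels per frame before the upscale.

    # Back-of-the-envelope pixel counts behind the "render low, upscale to 4K" idea.
    # Purely illustrative; shading cost does not scale perfectly with pixel count.
    RESOLUTIONS = {
        "1080p": (1920, 1080),
        "1440p": (2560, 1440),
        "4K":    (3840, 2160),
    }

    target_pixels = RESOLUTIONS["4K"][0] * RESOLUTIONS["4K"][1]

    for name, (w, h) in RESOLUTIONS.items():
        pixels = w * h
        print(f"{name}: {pixels / 1e6:.2f} MP, "
              f"{pixels / target_pixels:.0%} of the pixels of native 4K")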

    At GTC Japan today, nVidia is still building up the hype for the upcoming RTX 2080 & 2080 Ti by releasing some comparison graphs of Turing vs Pascal & Maxwell performance at 4K 60 FPS, showing the 2080 and 2080 Ti hitting or exceeding this benchmark while the Pascal & Maxwell cards fall behind.  They also indicate that enabling DLSS will push '4K 60 FPS' performance well beyond what the cards manage without DLSS.  

     

    4K 60 FPS w/o DLSS and w/ DLSS: [comparison slides]

     

    https://www.overclock3d.net/news/gpu_displays/nvidia_reveals_rtx_2080_and_rtx_2080_ti_performance_data_at_gtc_japan_2018/1

    Quote

    At GTC Japan, Nvidia released two new marketing slides for their RTX 2080 and RTX 2080 Ti graphics cards, showcasing the relative performance of both graphics cards when compared to Nvidia's GTX 1080 and GTX 1080 Ti. 

     

    In the slide below, Nvidia claimed that their RTX 2080 is good enough for 4K 60FPS gameplay, placing their RTX 2080 above their GTX 1080 Ti in terms of raw performance, with the company's RTX 2080 Ti delivering even higher performance levels. 

     

    Looking at the slide Nvidia compares the performance jump between GTX 10 series and the RTX 20 series as being similar to the performance leap when moving between the GTX 9 and GTX 10 series', a significant performance uptick even without the use of RTX features like Ray Tracing and DLSS (Deep Learning Super Sampling). The only issue here is that Nvidia's new 20 series products also ship with significantly higher MSRPs than their predecessors, a fact that is not considered in this "performance-centric" graph. 

     

    When Deep Learning Super Sampling is added into the mix, the performance gap between the RTX 20 series and the GTX 10 series gets significantly wider, delivering much higher performance levels. 

     

    So far, Nvidia has confirmed that DLSS will be added to 25 games, utilising Nvidia's AI performance to deliver similar levels of image quality to a native resolution presentation, but with significantly lower computational requirements, providing higher framerates. More information about DLSS and its supported games is available to read here. 

    nVidia has also added 9 more games to the DLSS-supported list, making for a total of at least 25 games that will have DLSS support.  The new additions include Darksiders III, Overkill's The Walking Dead, Fear the Wolves & Hellblade.  

     

    https://www.overclock3d.net/news/software/nvidia_builds_dlss_momentum_reveals_9_new_rtx_enabled_games/1

    Quote

    In effect, DLSS allows Nvidia RTX graphics cards to play games at higher resolutions with lower amounts of computational performance, upscaling images with an AI-made algorithm to offer image quality levels that are similar to a native resolution presentation. In short, this technology allows supported games to run faster on Nvidia RTX enabled graphics cards, making high framerate gameplay at high resolutions more achievable than ever before. 

     

    So far Nvidia has announced that sixteen games will support the company's DLSS technology, but now we can add nine more games to the list, which include Devolver Digital's SCUM, Overkill's The Walking Dead and Ninja Theory's Hellblade Senua's Sacrifice. All titles that are new to this list will have "Newly Added" listed beside the game's name. No games support DLSS at the time of writing, the titles listed below have agreed to support DLSS, with launch games adding support via future patches/updates. 
     
    - Ark: Survival Evolved
    - Atomic Heart
    - Darksiders III - Newly added
    - Dauntless
    - Deliver Us The Moon: Fortuna - Newly Added
    - Fear The Wolves - Newly Added
    - Final Fantasy XV
    - Fractured Lands
    - Hellblade: Senua's Sacrifice - Newly added
    - Hitman 2
    - Islands of Nyne
    - Justice
    - JX3
    - KINETIK - Newly added
    - Mechwarrior 5: Mercenaries
    - Outpost Zero - Newly Added
    - Overkill's The Walking Dead - Newly Added
    - Player Unknown's Battlegrounds
    - Remnant: From the Ashes
    - SCUM - Newly Added
    - Serious Sam 4: Planet Badass
    - Shadow of the Tomb Raider
    - Stormdivers - Newly Added
    - The Forge Arena
    - We Happy Few

    Honestly, the graphs provided are extremely disappointing, just because they are horrible graphs with no scale or relative meaning.  Still, it will be nice to see whether they can justify the price increase with consistent 4K 60 FPS+ performance...

     

    Update:  With today apparently being unboxing day for the new cards, Gamers Nexus has put out their take on an unboxing by releasing a technical dive into the Turing architecture that provides some better detail on how nVidia is calculating ray tracing and how it is being implemented, the changes from Pascal to Turing, and the TU102 chip.  I would definitely encourage a watch, as this is probably going to be one of the better release videos out there.

     

    Some key takeaways are: 

     

    • FP32 × 0.8 + INT32 × 0.28 + RTOPS × 0.4 + Tensor × 0.2 = nVidia's new metric for RTX-OPS (a quick sanity check of this formula is sketched just below the list)
    • Unified L1 and shared memory (SRAM) for a total of 32 KB + 64 KB
    • 2 SMs per TPC instead of a single SM per TPC (better segmentation and memory/cache utilization)
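    For anyone curious how that weighting shakes out, here is the quick sanity check mentioned above, plugging in the rough throughput figures nVidia has been quoting for the 2080 Ti Founders Edition.  Treat the inputs as approximations from my notes on the presentation rather than official spec-sheet values.

    # Sanity check of Nvidia's stated RTX-OPS weighting using approximate
    # RTX 2080 Ti Founders Edition throughput figures (assumed, not official).
    def rtx_ops(fp32_tflops, int32_tips, rt_tflops_equiv, tensor_fp16_tflops):
        """FP32*0.8 + INT32*0.28 + RTOPS*0.4 + Tensor*0.2, per Nvidia's slide."""
        return (fp32_tflops * 0.80
                + int32_tips * 0.28
                + rt_tflops_equiv * 0.40
                + tensor_fp16_tflops * 0.20)

    # ~14 TFLOPS FP32, ~14 TIPS INT32, ~100 TFLOPS-equivalent RT, ~114 TFLOPS tensor FP16
    print(f"{rtx_ops(14.2, 14.2, 100.0, 113.8):.0f} RTX-OPS")  # ~78, matching the marketing figure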

     

    A comparison chart of the full TU102 and RTX 2080 Ti specs is attached (one correction to the chart: the RTX 2080 Ti has 4352 CUDA cores, not 4532).


     

    There is also a better explanation of the Ray Tracing processing at around the 22:20 mark...

     

    With the specs listed for the full TU102, though, I would not be surprised if nVidia decides to release an RTX Titan card sometime in January or so...

     

    Just going to add that PCPer also has an article up covering a lot of the same information on the Turing architecture for those who would like to see all the pretty graphs and read about it...  They also mention that there will be NVLink, with 50 GB/s over the single link found on the 2080 and 100 GB/s with the dual-link connection on the 2080 Ti.  It looks like the NVLink bridges will be an additional $79.00 (on top of the second card).  Finally, they have a short discussion of the implementation of DLSS, although it is still not quite as detailed as I would like...

     

    https://www.pcper.com/reviews/Graphics-Cards/Architecture-NVIDIAs-RTX-GPUs-Turing-Explored

     

    Quote

    DLSS

    While NVIDIA has been priming gamers to get ready for Ray Tracing and their RTX API through several announcements over 2018, the more surprising part of the technology suite that NVIDIA is referring to as RTX is DLSS.

    Using the Tensor cores found in Turing for its inference capabilities, DLSS is a technology that aims to apply deep learning techniques to accelerate and increase the quality of post-processed anti-aliasing.

    To implement DLSS, NVIDIA takes an early build of the given game (which they generally receive anyway for driver optimization) and generates a series of "ground truth" images rendered through 64x Super Sampling.

    These extremely high-resolution images are then used to train a neural network which is capable of producing output images that NVIDIA claims are nearly identical to the original 64x Super-Sampled source material.

    In this current stage, the neural network model needs to be trained for each specific game title. In the future, NVIDIA might be able to come up with more generic models that can be applied to particular genres of games, or different game engines, but at this point, it requires hand-tuning from NVIDIA.

    Regardless, NVIDIA claims that implementing DLSS will cost game developers nothing and that they are committed to scaling their workflow and supercomputers used for training as far as necessary to meet demand.

    This neural network model is then distributed via GeForce Experience to end users who have a GPU with tensor cores and have the given game installed. This distribution model is vital as it allows NVIDIA to silently update the model in the background as they come up with improvements as they get more experience and come up with better techniques.

    Performance wise, NVIDIA is comparing the performance hit of enabling DLSS to their most recent push of anti-aliasing technology, TAA. While TAA is already a reasonably lightweight method, NVIDIA is claiming performance benefits of 2X when comparing DLSS to TAA. Also, DLSS is temporally stable, unlike TAA which can help prevent some fast moving details becoming blurred on screen.

    While DLSS as a whole seems like a worthwhile initiative, with claims of 64x SSAA quality with very little of a performance hit, the more controversial part is NVIDIA pushing DLSS as a significant performance differentiator from previous GPUs. 

    A lot of the early performance claims that NVIDIA has been making about these Turing GPUs go out of their way to also show DLSS as part of the performance increase story of Turing over Pascal.

    Even if these performance numbers are indicative of the advantages that DLSS can give over traditional AA techniques, you have to place a lot of faith in this feature being implemented in the games you want to play to take this into account vis-à-vis a purchasing decision.

     

    An article on The Register serves as a good Public Service Announcement (PSA) on why we should maintain local, backed-up copies of movies and music purchased in digital format.  Essentially, what happened is that a biologist found that three movies he had purchased through iTunes had disappeared from his library.  When he inquired with Apple as to where his movies had gone, he learned that Apple had lost the licensing rights to those movies.  Basically, if he had not downloaded and saved a copy to local storage, he lost those movies completely.  Apple did offer something in the way of recompense in the form of four movie rentals of up to $5.99 apiece, though that was more of a discretionary offer.  

     

    https://www.theregister.co.uk/2018/09/12/apple_film_rights/

    Quote

    Biologist Anders Gonçalves da Silva was surprised this week to find three movies he had purchased through iTunes simply disappeared one day from his library. So he contacted Apple to find out what had happened.

     

    And Apple told him it no longer had the license rights for those movies so they had been removed. To which he of course responded: Ah, but I didn't rent them, I actually bought them through your "buy" option.

     

    At which point da Silva learnt a valuable lesson about the realities of digital purchases and modern licensing rules: While he had bought the movies, what he had actually paid for was the ability to download the movie to his hard drive.

     

    "Please be informed that the iTunes/App Store is a store front that give content providers a platform or a place to sell their items," the company informed him. "We can only offer what has been made available to us. Since the content provider has removed these movies… I am unable to provide you the copy of the movies."

     

    Sure, he could stream it whenever he wanted since he had bought it, but once those licensing rights were up, if he hadn't downloaded the movie, it was gone – forever.

    ...

    In other words, Apple has complete discretion over whether to refund you in full, in part, or not at all. And in this case it used its discretion to grant him another two movie rental credits of $5.99 or less.

     

    Of course from Apple's perspective, it is being perfectly reasonable: it literally does not have the right to provide access to a movie if and when the licensing rights expire. And in good faith it has offered him $24 in equivalent credits to make up for his loss.

     

    But it's safe to say that almost no one understands that when you "buy" a movie online, you are only buying the right to grab a digital copy on that day. Apple suggests that people may want to download their purchases – but it's far from clear how many people actually do.

    ...

    And it's not fair to single out just Apple either: pretty much every provider of digital content has the same rules. Amazon got in hot water a few years ago when its deal with Disney expired and customers discovered that their expensive movie purchases vanished overnight. In 2009 there was a similar ruckus when it pulled George Orwell's classic 1984 from Kindles without notice.

     

    In reality of course, these huge companies go to great lengths to ensure that their licensing deals with the main content companies are retained so the situation happens only occasionally. And such deals are usually worth so much to both sides that they are continually renewed.

    ...

    And while the answer is to download movies, the reality is that they take up an enormous amount of space. The base level AppleTV for example comes with just 32GB of space. A DVD quality movie will typically run to around 4GB, and a Blu-ray movie to 7 or 8GB: meaning you can only download between four and eight movies before you're out of space.

     

    It's something that is only likely to be resolved when there is a big punch-up between two big companies and the lawsuits start flying. Of course it would be much easier if before that happened companies like Apple implemented a raft of new measures, such as giving customers that have "bought" a movie advance notice of the need to download a movie; or negotiating new digital download rights to fit with the modern streaming era.

     

    But that is unlikely to happen until it has to. Which means that the best advice is quite simple: if you want to own a movie, buy it in a physical format – a DVD or Blu-ray disc. And if you don't, rent and stream it. 

    So this serves as a good reminder that no matter which platform you purchase your digital content on, it is worthwhile to download a local copy to storage and not just count on it being available and retained in the distributor's system.  Reminds me that I should probably see about getting more space on my NAS at home.  xD

    Biostar has released their newest all-in-one cryptomining rig, the Biostar iMiner A578X8D (because everything needs an 'i' in front), for about $3,500.00 USD.  This machine features 8 RX 570s attached to their TB250-BTC D+ motherboard, eliminating the need for riser cards and fitting everything into a single, rack-mountable case.  In addition, it is running a Celeron G3930 with 4 GB of RAM, a 120 GB SSD and a 1600 W PSU at 88+% efficiency.  Biostar is claiming that this will have an Ethereum hashrate of ~220 MH/s.  They will also be releasing two additional pre-built miners in the future, one with 6 RX 570s and the other with 12 RX 560s.  

     

    https://www.anandtech.com/show/13348/biostars-iminer-a578x8d-crypto-mining-machine-now-available

     

    Quote

    Just in time for this week's dive in cryptocurrency prices, Biostar has started selling its specially designed all-in-one rig for mining. The iMiner A578X8D is a complete black box crypto mining solution for eight GPUs, and notably does not use any riser cards. As a result, the fully-integrated miner is touted as being extra-durable to ensure stable 24/7 operation and an equally stable hash rate.

     

    The Biostar iMiner A578X8D is based on the company’s TB250-BTC D+ motherboard featuring Intel’s Celeron G3930 processor (two cores, 2.9 GHz, 51 W TDP), 4 GB of DDR4-2400 memory, and a 120 GB SSD. The system is equipped with eight AMD Radeon RX 570 graphics cards and a 1600 W PSU to provide these GPUs a stable supply of power. The crypto mining rig supports ETH, ETC, XMR, and ZEC currencies out of the box, which greatly simplifies its deployment. According to the manufacturer, one iMiner A578X8D can deliver ETH hash rate of 220 MH/s (+/- 5%).

     

    The mining rig is outfitted with seven fans to ensure sufficient cooling. In addition, the TB250-BTC D+ motherboard has a PCIe slot state detection that can check the state of each GPU and discover whether everything works properly. If the iMiner detects an error, it automatically sends an email notification to enable remote management of the rig.

     

    The all-in-one mining farm is now available from Newegg for $3,499. Later on the company is expected to start selling other AIO mining rigs that will pack six and 12 GPUs, thus offering a bit lower and higher performance.

    With the recent downtrend in crypto prices, I'm not sure this would even be profitable (unless you get extremely cheap or free electricity), although it may be good if the crypto markets take off again.  Of course, the individual components themselves probably cost several hundred dollars less than this pre-built system.  Maybe Biostar will push frequent updates that expand the types of crypto it can mine as individual coins become too difficult for GPU-based mining.
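    Just to sketch the kind of math involved, here is a quick back-of-the-envelope profitability check.  The power draw, electricity rate and revenue-per-MH/s figures are placeholders I picked purely for illustration; plug in current numbers before drawing any conclusions.

    # Rough mining-economics sketch for the iMiner A578X8D's claimed 220 MH/s.
    # All rate/price inputs below are hypothetical placeholders, not real data.
    HASHRATE_MH = 220                   # Biostar's claimed ETH hashrate
    WALL_POWER_KW = 1.2                 # assumed total draw for 8x RX 570 + host
    ELECTRICITY_USD_PER_KWH = 0.10      # placeholder electricity rate
    REVENUE_USD_PER_MH_PER_DAY = 0.05   # placeholder; swings with ETH price/difficulty
    RIG_COST_USD = 3499

    daily_revenue = HASHRATE_MH * REVENUE_USD_PER_MH_PER_DAY
    daily_power_cost = WALL_POWER_KW * 24 * ELECTRICITY_USD_PER_KWH
    daily_profit = daily_revenue - daily_power_cost

    print(f"Revenue ${daily_revenue:.2f}/day, power ${daily_power_cost:.2f}/day, "
          f"profit ${daily_profit:.2f}/day")
    if daily_profit > 0:
        print(f"Break-even on the rig after ~{RIG_COST_USD / daily_profit:.0f} days")
    else:
        print("Never breaks even at these assumed rates")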

    An article over on PCWorld looks at why you may or may not be able to easily use a third-party USB-C headset or dongle with your new phone.  A big part of the problem comes down to whether the dongle or headset has its own DAC or relies on the phone routing analog audio over USB-C.  The result is that some dongles work across a range of devices while others don't, and the same goes for headsets.  Another issue is the lack of uniformity across devices.    

     

    https://www.pcworld.com/article/3284186/mobile/bring-back-the-headphone-jack-why-usb-c-audio-still-doesnt-work.html

    Quote

    Smartphone makers, it’s time to have that come-to-Apple moment, where we tell you that it’s time to put the 3.5mm analog headset jack back on the phone—at least until you get your USB-C audio act in order. After plugging in a fistful of USB-C dongles and USB-C headsets into a stack of USB-C phones, I’ve discovered that it’s a mess, especially for third-party headsets. Here's why.

     

    The first problem is lack of basic compatibility. For example, if you take the USB-C dongle that came with a Motorola Z2 Force or Sony Xperia XZ2, it won't work with a Google Pixel 2 XL, Samsung Galaxy S8, or OnePlus 6.

     

    The USB-C dongle that comes with the Pixel 2 XL though, will work across all of those phones, as does the USB-C headset that Huawei includes with its P20 Pro.

    ...

    These incompatibilities all seem to come down to how each phone maker has implemented its phone, USB-C dongle, and bundled USB-C headset. The key component is whether that headset has a DAC, or digital-to-analog converter. As its name implies, the DAC converts the digital audio into analog audio.

     

    The vast majority of bundled dongles I tried—from Lenovo, Sony, and Huawei—do not include DACs. Instead they rely on DACs inside the phone to convert the signal to analog before pumping it out to the headset via USB-C. These phones essentially treat the USB-C port like a USB-C-shaped 3.5mm jack, and it’s probably not unfair to call these analog USB-C dongles.

     

    The exception I’ve seen so far is the USB-C dongle bundled with Google’s Pixel 2 XL. It includes a DAC to read the digital signal from the phone. Because it’s essentially a USB Audio Accessory (a class of USB audio device that should work on all things USB), every phone I tried it on, as well as a few laptops, largely work the way you expect them to.  We can consider these digital USB-C dongles.

    ...

    The frustrating part of it all is the lack of uniformity and logic here. Why won’t an analog USB-C dongle work on a Pixel 2XL or OnePlus 6 when both phones (or their United States versions, anyway) are based on Qualcomm’s Snapdragon 835 and Snapdragon 845 SOCs, which both seem to include the company's Aqstic DAC? 

     

    In Google’s defense, company officials told us it was just following the rules. Although analog audio over USB-C can be done, it’s not actually part of the spec for headsets. Sure, it might be convenient, but it’s not required, sorry!

     

    To make it even more maddening, several phones we tried with the Pixel 2 XL digital USB-C dongle did work, but not until you switched on USB storage in the OS. Yeah, totally intuitive. Better still, you have to do this every time you insert the headset or dongle to make it work.

    ...

    The exceptions we ran into—and I’m sure there are more—include Google’s Pixel 2 XL and the new Razer Phone. Neither includes analog pass-through at all. While Google points you to a website where you can buy a digital accessory (Thanks Google!), the Razer Phone doesn’t even bother to tell you why you’re not getting any audio.

     

    The problem is only going to multiply as people upgrade and amass a drawer full of USB-C dongles and headsets. Let’s face it: You may give away or sell the phone, but you'll likely keep your old headsets and dongles.

     

    While I can understand the desire to consolidate the number of ports on a device for reasons such as IP rating and convenience, it either needs to be implemented in a standardized fashion or they should just bring back the 3.5 mm port.  
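    If I'm reading the article right, the compatibility mess boils down to roughly the logic below.  This is my own simplification of PCWorld's findings, not anything from the USB-C spec, and the function and argument names are made up for illustration.

    # Simplified model of the USB-C audio behaviour described in the article.
    def accessory_works(accessory_has_dac: bool,
                        phone_outputs_analog_over_usb_c: bool) -> bool:
        if accessory_has_dac:
            # "Digital" accessories act like standard USB Audio devices and
            # should work on basically any host (the Pixel 2 XL dongle case).
            return True
        # "Analog" accessories depend on the phone routing its internal DAC's
        # output over the USB-C pins, which not every phone does.
        return phone_outputs_analog_over_usb_c

    # Examples matching the article's observations:
    print(accessory_works(True, False))    # digital dongle on a Pixel 2 XL: works
    print(accessory_works(False, False))   # analog dongle on a Pixel 2 XL: no audio
    print(accessory_works(False, True))    # analog dongle on a phone with analog pass-through: works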

  11. 2 hours ago, The Benjamins said:

    Updated Post, WCCF has the slide deck, but can't find another source. Videocardz leaked some of the slides a few days ago.

    I think some of those slides are on the Guru3D site...  It would be nice to know if the Athlon truly will have only 4 PCIe lanes available for graphics, as that would be a limitation when building an ultra-low-end machine and upgrading the graphics later on.

     

    https://www.guru3d.com/news-story/55-usd-amd-athlon-pro-200ge-shows-up-in-slides.html

     

  12. On 9/4/2018 at 6:50 PM, VegetableStu said:

    if huawei wants to do that, at least leave in a "drain my battery" optional mode that throws all efficiencies away ._.

    Well, the good news for you is that, according to Huawei's official statement that is floating around, they are planning to provide users with access to the 'Performance Mode', meaning you too can enable battery-draining and overheating mode on your phone.  xD

     

    https://www.theregister.co.uk/2018/09/05/huawei_on_smartphone_benchmarks/

    Quote

    In a statement, Huawei told us today:

    Quote

    Huawei always prioritizes the user experience rather than pursuing high benchmark scores – especially since there isn’t a direct connection between smartphone benchmarks and user experiences. Huawei smartphones use advanced technologies such as AI to optimize the performance of hardware, including the CPU, GPU and NPU.

     

    When someone launches a photography app or plays a graphically-intensive game, Huawei’s intelligent software creates a smooth and stable user experience by applying the full capabilities of the hardware, while simultaneously managing the device’s temperature and power efficiency. For applications that aren’t as power intensive like browsing the web, it will only allocate the resources necessary to deliver the performance that’s needed.

     

    In normal benchmarking scenarios, once Huawei’s software recognizes a benchmarking application, it intelligently adapts to 'Performance Mode' and delivers optimum performance. Huawei is planning to provide users with access to 'Performance Mode' so they can use the maximum power of their device when they need to.

     

    Huawei – as the industry leader – is willing to work with partners to find the best benchmarking standards that can accurately evaluate the user experience.

     

    It still seems like weird doubletalk: 'we prioritize user experience over benchmark scores, however we enable our performance mode when you benchmark'. 

  13. 28 minutes ago, mynameisjuan said:

    But who the F cares about phone benchmarks anyway. Real world use its all that matters.

     

    11 minutes ago, huilun02 said:

    This is unacceptable because majority of people who buy flagship phones just want to compare benchmark scores all the time

    Browsers and social media apps need cutting edge performance else they just crash and burn

    While I agree for the most part that real-world use is what matters, the big issue is that benchmarks are supposed to provide a common ground for comparison between devices and correlate with real-world, day-to-day performance.  

    An interesting article over on Anandtech looks into Huawei enabling higher power limits and thermal headroom on their SoCs for benchmarking tools.  Essentially, this allows Huawei to score higher in benchmark applications than you would expect with this 'high-performance' mode turned off.  When Anandtech questioned Huawei about this at IFA, their response was pretty much a confirmation, stating that they do this to stay competitive with other companies in China that do the same to get high scores.  

     

    https://www.anandtech.com/show/13318/huawei-benchmark-cheating-headache

    Quote

    Does anyone remember our articles regarding unscrupulous benchmark behavior back in 2013? At the time we called the industry out on the fact that most vendors were increasing thermal and power limits to boost their scores in common benchmark software. Fast forward to 2018, and it is happening again.

    ...

    When we exposed one vendor, it led to a cascade of discussions and a few more articles investigating more vendors involved in the practice, and then even Futuremark delisting several devices from their benchmark database. Scandal was high on the agenda, and the results were bad for both companies and end users: devices found cheating were tarnishing the brand, and consumers could not take any benchmark data as valid from that company. Even reviewers were misled. It was a deep rabbit hole that should not have been approached – how could a reviewer or customer trust what number was coming out of the phone if it was not in a standard ‘mode’?

     

    So thankfully, ever since then, vendors have backed off quite a bit on the practice. Since 2013, for several years it would appear that a significant proportion of devices on the market are behaving within expected parameters. There are some minor exceptions, mostly from Chinese vendors, although this comes in several flavors. Meizu has a reasonable attitude to this, as when a benchmark is launched the device puts up a prompt to confirm entering a benchmark power mode, so at least they’re open and transparent about it. Some other phones have ‘Game Modes’ as well, which either focus on raw performance, or extended battery life.

     

    So today we are publishing two front page pieces. This one is a sister article to our piece addressing Huawei’s new GPU Turbo, and while it makes overzealous marketing claims, the technology is sound. Through the testing for that article, we actually stumbled upon this issue, completely orthogonal to GPU turbo, which needs to be published. We also wanted to address something that Andrei has come across while spending more time with this year’s devices, including the newly released Honor Play.

    ...

    Looking back at it now after some re-testing, it seems quite blatant as to what Huawei and seemingly Honor had been doing: the newer devices come with a benchmark detection mechanism that enables a much higher power limit for the SoC with far more generous thermal headroom. Ultimately, on certain whitelisted applications, the device performs super high compared to what a user might expect from other similar non-whitelisted titles. This consumes power, pushes the efficiency of the unit down, and reduces battery life.

    ...

    As usual with investigations like this, we offered Huawei an opportunity to respond. We met with Dr. Wang Chenglu, President of Software at Huawei’s Consumer Business Group, at IFA to discuss this issue, which is purely a software play from Huawei. We covered a number of topics in a non-interview format, which are summarized here.

     

    Dr. Wang asked if these benchmarks are the best way to test smartphones as a whole, as he personally feels that these benchmarks are moving away from real world use. A single benchmark number, stated Huawei’s team, does not show the full experience. We also discussed the validity of the current set of benchmarks, and the need for standardized benchmarks. Dr. Wang expressed his preference for a standardized benchmark that is more like the user experience, and they want to be a part of any movement towards such a benchmark.

    ...

    Huawei stated that they have been working with industry partners for over a year to find the best tests closest to the user experience. They like the fact that for items like call quality, there are standardized real-world tests that measure these features that are recognized throughout the industry, and every company works towards a better objective result. But in the same breath, Dr. Wang also expresses that in relation to gaming benchmarking that ‘others do the same testing, get high scores, and Huawei cannot stay silent’.

     

    He states that it is much better than it used to be, and that Huawei ‘wants to come together with others in China to find the best verification benchmark for user experience’. He also states that ‘in the Android ecosystem, other manufacturers also mislead with their numbers’, citing one specific popular smartphone manufacturer in China as the biggest culprit, and that it is becoming ‘common practice in China’. Huawei wants to open up to consumers, but have trouble when competitors continually post unrealistic scores.

     

    Ultimately Huawei states that they are trying to face off against their major Chinese competition, which they say is difficult when other vendors put their best ‘unrealistic’ score first. They feel that the way forward is standardization on benchmarks, that way it can be a level field, and they want the media to help with that. But in the interim, we can see that Huawei has also been putting its unrealistic scores first too.

    Sorry for posting a large quote, but this feels like the most relevant part of the first page of the article, and I would encourage people to browse the whole thing...  As it is, it definitely raises questions about any benchmark you see for Huawei and Honor devices from the last year in particular, but also about benchmarks for any Chinese phone manufacturer.  
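    In pseudocode terms, the behaviour Anandtech describes boils down to something like the sketch below.  It is purely illustrative: the package names and power limits are made up, and Huawei's actual firmware logic obviously isn't public.

    # Illustrative sketch of whitelist-based "performance mode": recognized
    # benchmark apps get a much higher power/thermal ceiling than everything else.
    BENCHMARK_WHITELIST = {"com.example.benchmark3d", "com.example.cpubench"}

    NORMAL_POWER_LIMIT_W = 4.0         # hypothetical sustained SoC limit
    PERFORMANCE_MODE_LIMIT_W = 8.0     # hypothetical raised limit for whitelisted apps

    def soc_power_limit(foreground_app: str) -> float:
        """Return the power ceiling applied while this app is in the foreground."""
        if foreground_app in BENCHMARK_WHITELIST:
            return PERFORMANCE_MODE_LIMIT_W   # higher scores, worse efficiency and battery life
        return NORMAL_POWER_LIMIT_W

    print(soc_power_limit("com.example.benchmark3d"))   # 8.0
    print(soc_power_limit("com.example.browser"))       # 4.0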

    In a case of why you should never challenge someone to try, BitFi has removed the claim that their 'unhackable' cryptocurrency wallet is actually unhackable and discontinued their $250k bug bounty program.  The hack performed this time essentially allows someone to run code on the hardware without the memory being erased, letting the attacker extract the contents of RAM and find the crypto keys stored on the device.

    https://techcrunch.com/2018/08/30/john-mcafees-unhackable-bitfi-wallet-got-hacked-again/

    Quote

    If the security community could tell you just one thing, it’s that “nothing is unhackable.” Except John McAfee’s  cryptocurrency wallet, which was only unhackable until it wasn’t — twice.

     

    Security researchers have now developed a second attack, which they say can obtain all the stored funds from an unmodified Bitfi wallet. The Android-powered $120 wallet relies on a user-generated secret phrase and a “salt” value — like a phone number — to cryptographically scramble the secret phrase. The idea is that the two unique values ensure that your funds remain secure.

    ...

    The researchers, Saleem Rashid and Ryan Castellucci, uncovered and built the exploits as part of a team of several security researchers calling themselves “THCMKACGASSCO” (after their initials). The two researchers shared them with TechCrunch prior to its release. In the video, Rashid is shown setting a secret phrase and salt, and running a local exploit to extract the keys from the device.

     

    Rashid told TechCrunch that the keys are stored in the memory longer than Bitfi claims, allowing their combined exploits to run code on the hardware without erasing the memory. From there, an attacker can extract the memory and find the keys. The exploit takes less than two minutes to run, Rashid said.

    ...

    Within an hour of the researchers posting the video, Bitfi said in a tweeted statement that it has “hired an experienced security manager, who is confirming vulnerabilities that have been identified by researchers.”

     

    “Effective immediately, we are closing the current bounty programs which have caused understandable anger and frustration among researchers,” it added.

     

    The statement also said it will no longer use the “unhackable” claim on its website.

     

    I guess the overall lesson is that you shouldn't assert that your solution is completely foolproof and unhackable...  There is always someone out there who will find a way in.  I would also like to know if they were actually able to claim the $250k bounty.  Seems like they've earned it...

     

    Bleeping Computers article: https://www.bleepingcomputer.com/news/security/bitfi-wallet-is-vulnerable-no-bounty-no-unhackable/
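    To give a feel for why keys lingering in RAM are game over, here is a generic toy sketch of scanning a memory dump for high-entropy 32-byte windows, which is roughly what raw key material looks like.  This is not the researchers' actual tooling, just an illustration of the idea.

    # Toy sketch: scan a raw memory dump for 32-byte windows with high byte
    # entropy, the sort of pattern random key material tends to produce.
    import math
    import os
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def candidate_keys(dump: bytes, window: int = 32, threshold: float = 4.5):
        """Yield (offset, chunk) pairs whose entropy suggests random key material."""
        for offset in range(0, len(dump) - window, window):
            chunk = dump[offset:offset + window]
            if shannon_entropy(chunk) >= threshold:
                yield offset, chunk

    # Demo on a fake "dump": mostly zeros plus one random-looking 32-byte secret.
    fake_dump = bytes(4096) + os.urandom(32) + bytes(4096)
    for offset, chunk in candidate_keys(fake_dump):
        print(f"possible key material at offset {offset}: {chunk.hex()}")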

     

    Lenovo will be implementing the Snapdragon 850 in their Yoga C630 machine with Windows 10 S.  The 850 is supposed to provide higher performance than the first-generation machines due to SoC improvements and OS optimizations within Windows.  As for the system specs, it will supposedly support over 25 hours of battery life on a single charge (impressive if it works), come with a 13.3-inch full HD IPS display with multi-touch, be about 12.5 mm thick and weigh about 1.2 kg (2.7 pounds).  The system will feature up to 256 GB of UFS 2.1 storage and up to 8 GB of LPDDR4X, with two USB Type-C ports and all the other little goodies (including an actual 3.5 mm audio jack).  The starting price is supposed to be around $849.99, and US sales should start in November.  

     

    https://www.anandtech.com/show/13309/lenovo-yoga-c630-snapdragon-850-windows

    Quote

    Lenovo on Thursday introduced the world’s first laptop based on Qualcomm’s Snapdragon 850 SoC. The Yoga C630 promises to deliver a considerably higher performance than the first-generation Windows-on-Snapdragon machines because of SoC improvements as well as optimizations made to the OS. Lenovo says that the Yoga C630 can work for over 25 hours on one charge, thus beating every other convertible PC available today.

    ...

    As noted above, the Lenovo Yoga C630 is based on the Qualcomm Snapdragon 850 SoC featuring eight cores and an Adreno 630 GPU. The chip is accompanied by 4 or 8 GB of LPDDR4X memory as well as 128 GB or 256 GB of solid-state storage featuring a UFS 2.1 interface. As for wireless connectivity, the convertible laptop naturally has an integrated Snapdragon X20 LTE modem that supports up to 1.2 Gbps speeds over appropriate networks as well as an 802.11ac Wi-Fi controller that also supports Bluetooth 5. In addition, the system has two USB Type-C ports, a fingerprint reader, a webcam, stereo speakers, a microphone, and an audio jack for headsets.

     

    Qualcomm itself promises that its Snapdragon 850 offers 30% higher performance, 20% longer battery life, and 20% higher Gigabit LTE speeds when networks permit. That said, it is more than reasonable to expect systems based on the S850 to be faster than notebooks powered by the S835 right out of the box. Meanwhile, there are other important factors that make Arm-powered Windows 10 systems more attractive in general: Microsoft has re-optimized its Edge browser for WoS (Windows on Snapdragon) devices, whereas Qualcomm has implemented a 64-bit SDK for developers looking to optimize their code for WoS. Assuming that software makers are interested in the platform, they will release optimized versions of their programs in the coming months or quarters.

    ...

    Obviously, software compatibility and performance in applications that run via emulation might still be a question, but Qualcomm and Microsoft are working on issues and all companies involved state that the market these devices are aiming for use UWP programs anyway.

     

    Other than being limited to Windows 10 S, it looks like a nice, light, and portable laptop replacement.  Maybe just something to have for basic Office tasks on the go, with some video streaming and browsing...

  17. 6 minutes ago, bimmerman said:

    If it gets better battery life in everyday tasks, I'm sold. I just want solid battery life with good performance in email/messaging/browsing/youtube.

    Well, it should get about 32% better power efficiency on the CPU side and about 178% better GPU power efficiency over their last chip (at least if you trust their slides)...  I would hope that would add an hour or two to the battery life during the day.

  18. 4 minutes ago, GoodBytes said:

    But seriously. A nice performance bump 30% is actually a bit more than what I expected. I expected a 10-20% boost due to the focus on Ray Tracing. Now we have to see power draw, official drivers, and all that, but so far so good.

    I was thinking that the bare minimum increase would be about 20% for this generation, just based on the CUDA core count increase (4352 vs 3584, roughly 21% more)...  Good to see that it may actually be a bit higher than that, but I'm still not looking at it as an upgrade this generation.  Maybe in 2 to 3 years, when their next generation is being released and some of the specialized features have become more mainstream.

    So, as IFA 2018 has been going on, one thing has become apparent: TV manufacturers feel that your puny 4K resolution is not good enough and you need to be upgrading to 8K.  Samsung, LG and AU Optronics have all shown off new 8K-resolution TVs at IFA 2018 with screen sizes of 85 inches or greater. 

     

    In the case of LG, it is an OLED panel that had previously only been shown as a prototype at CES.   

    https://www.digitaltrends.com/home-theater/lg-88-inch-8k-oled-tv-introduced-ifa2018/

    Quote

    Behind closed doors at CES 2018 in January, we saw an 88-inch 8K OLED display prototype from LG Display, the South Korean electronics company’s business-to-business arm. The massive screen wasn’t available for show attendees to view — it wasn’t even TV at that point. With today’s announcement, we learn LG is one step closer to making such an advanced TV available for purchase most likely in 2019 — with a price tag to match its size.

    Samsung's 8K display is the QLED Q900FN, which will actually be available for purchase, as opposed to the Q9S demoed at CES.  It will include the same resolution upscaling that was demoed at CES, as well as 4,000-nit peak brightness and HDR10+ support.

    https://www.engadget.com/2018/08/29/samsung-8k-qled-tv/

    Quote

    Back at CES, Samsung gave us a glimpse at the Q9S 8K TV, which notably used artificial intelligence to upscale content to 8K. That was cool, but it was just a concept. During its IFA presentation today, the company unveiled its first 8K QLED TV that'll actually be available for purchase: The Q900FN. It loses the easel-like design of the concept for a more traditional stand, but it packs in many of the same features, including a real 8K resolution and AI upscaling. Samsung claims the set will also feature a 4,000 nit peak brightness, as well as support for the HDR10+ standard, which can optimize content on the fly.

    AU Optronics is pushing out a Quantum Dot LED 8K display with a full 120 Hz refresh rate and 1200 nit peak brightness.

    https://hothardware.com/news/auo-85-inch-8k-120hz-hdr-tv

    Quote

    4K panels have pretty much taken over the big-screen television market, and 4K PC gaming was brought to the forefront thanks in part to the GeForce GTX 10 series over two years ago. But what about 8K? It will still be many years before the leap to 8K becomes mainstream, but that isn't stopping AU Optronics (AUO) from whetting our appetites with a gigantic 85-inch 8K television -- we're talking a 7680x4320 resolution here. Oh, and did we mention that it has a 120Hz refresh rate?
     

    We don't think that we've seen so much "win" in one place, but here we are. The television features a High Dynamic Resolution (HDR) panel that uses Quantum Dot technology and features peak brightness of 1200 nits (HDR10). AUO even mentions that color saturation cranks up to over NTSC 110%. 
     

    I severely doubt that when any of these release within the next year or so they will have anywhere near a reasonable price, and there is no real content out there to drive them, but maybe it will help push 4K display prices down to the point where a decent large-format 4K TV or monitor isn't out of reach...  Might actually have a reason to upgrade my TV at that point. xD
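    For a bit of perspective on why 8K at 120 Hz is such a stretch, here is some napkin math on the pixel count and the raw, uncompressed bandwidth involved.  The numbers are illustrative only; real links carry blanking overhead and use compression such as DSC, which changes the totals.

    # Napkin math on the 8K/120 Hz panels above: pixels vs 4K and the raw,
    # uncompressed bandwidth a 10-bit-per-channel 120 Hz signal would need.
    W_8K, H_8K = 7680, 4320
    W_4K, H_4K = 3840, 2160
    REFRESH_HZ = 120
    BITS_PER_PIXEL = 30            # 10 bits per channel, RGB

    pixels_8k = W_8K * H_8K
    pixels_4k = W_4K * H_4K
    raw_gbps = pixels_8k * BITS_PER_PIXEL * REFRESH_HZ / 1e9

    print(f"8K is {pixels_8k / 1e6:.1f} MP, {pixels_8k / pixels_4k:.0f}x the pixels of 4K")
    print(f"Uncompressed 8K / 120 Hz / 10-bit: ~{raw_gbps:.0f} Gbps of pixel data")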
