
Thread for LTT Labs Test Suggestions

LMGcommunity
46 minutes ago, r0ckstar said:

Any chance the Labs can test a bunch of baby monitors and give us a true review? Searching online can't even turn up a genuine review. I've tried so many and they are all garbage. Motorola was just basic with no features but kind of worked; VTech had nice features but couldn't stay connected to a Wi-Fi AP for the life of it, even if you taped it to the AP.

 

Just seems like a market that is filled with rebranded junk or super old technology.

 

Maybe even see if Jake or someone can build one and do a video on making your own using an IP camera and Home Assistant, or something like that.

Are you looking for recommendations, or is it a general wish for a better review landscape?

mITX is awesome! I regret nothing (apart from when picking parts or having to do maintenance *cough*cough*)


I'm gonna guess that it's just not going to happen, sadly.  Even if you make the argument that so many are web-based, they're still in a tight niche.

Current Personal Rig

CPU: Ryzen 7 3700X w/ Corsair H60 AIO   MB: ASRock B450 Steel Legend ATX  RAM: 32 GB Corsair Vengeance RGB Pro 3600 (2x16)  GPU: EVGA GeForce RTX 3060 XC Gaming  PSU: EVGA 750GQ Semi-Modular  Storage: 500 GB WD Black M.2 NVMe + 1 TB 2.5" SSD  WiFi: TP-Link Archer TX3000E  Keyboard: Corsair K65 Mini  Mouse: Logitech G502 Wired  Monitor: Gigabyte G27FC 27"


On 8/3/2023 at 2:45 PM, r0ckstar said:

Any chance the Labs can test a bunch of baby monitors and give us a true review? Searching online can't even turn up a genuine review. I've tried so many and they are all garbage. Motorola was just basic with no features but kind of worked; VTech had nice features but couldn't stay connected to a Wi-Fi AP for the life of it, even if you taped it to the AP.

 

Just seems like a market that is filled with rebranded junk or super old technology.

 

Maybe even see if Jake or someone can build one and do a video on making your own using an IP camera and Home Assistant, or something like that.

Merged to Labs suggestions thread.

^^^^ That's my post ^^^^
<-- This is me --- That's your scrollbar -->
vvvv Who's there? vvvv


Simple request with a useful academic result:

 

Test a down-configured 13900K vs. a 13600K (with the 13900K configured to 6P/8E/20T).

 

For some reason, Intel's binning scheme disables 3 MB of L3 cache per P-core and per E-core cluster: the 13600K has 24 MB, the 13900K 36 MB. However, disabling E-cores or P-cores yourself doesn't lower the L3 cache reported in Windows.

 

If clock speeds are fixed and the 13900K is down-configured to match a 13600K, how much does the extra 12 MB of L3 cache matter?

 

Confirmed by @Kilrah in this thread that Windows still reports the full 36 MB of L3:

 

 

Ryzen 7950X3D, PBO +200 MHz / -15 mV curve, CPPC 'prefer cache'

RTX 4090 @133%/+230/+1000

Builder/Enthusiast/Overclocker since 2012  //  Professional since 2017


Hey,

 

I wanted to throw in a suggestion for an LTT labs video (series!):

 

* Running FCC EMI tests on PCs and servers.

 

We've recently started to think about this for our product and we've found some interesting results.

A theme that has come up over and over is "testing". But how well do those tests really apply to cases with glass side panels?

 

Those "glass panel" cases should be terrible for EMI radiation.

 

It seems that LTT labs finally has all the equipment that one would need for these tests.

 

I also think it falls in line well with all the power supply test equipment you've got.

 

Excited to see some videos.

 

Mark


How about shutting the Labs down for good, if you're too lazy to do the tests properly and basically lie?

That would be great...

 


I would love to see the Lab do some statistical evaluation of the data they obtain. Saying that everything is within the margin of error becomes much more credible when you actually know and show the errors... A published methodology for all tests would also make things more transparent for the audience.

Would love to see that! You could even see when something really is significantly off, and stuff like that. You could compare the methodologies of different tests. The opportunities are endless 🙂

It would also give you more scientific credibility, which is nice for the "lab" you are...
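For instance, a minimal sketch of the kind of error reporting I mean, assuming the same benchmark is simply repeated a handful of times on one system (the run numbers are made up):

```python
import statistics as st

runs_fps = [143.2, 141.8, 144.0, 142.5, 143.1]  # five repeated benchmark runs

mean = st.mean(runs_fps)
sd = st.stdev(runs_fps)
# Half-width of a 95% confidence interval; 2.78 is the t-value for n = 5.
half_width = 2.78 * sd / len(runs_fps) ** 0.5
print(f"{mean:.1f} FPS ± {half_width:.1f} (95% CI, n={len(runs_fps)})")
```

Error bars built this way would immediately show whether two products' results actually overlap within the margin of error.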


The lab needs to be able to run double-blind tests with their audio gear, IEMs in particular. This is just Monster Cables all over again: "these new KZs don't have the timbre" and blah blah. You know you're listening on KZs and not your $1000 Shure IEMs. You'd also need to introduce something like the multi-band EQs we had in the '80s, so that it's not just the bass response that listeners are attuned to. I want to know if the hardware is quality, not the sound profile.
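For what scoring such a session could look like (a toy sketch, not any established protocol): count correct identifications across blind trials and ask how likely that score is by pure guessing:

```python
from math import comb

def p_value_guessing(correct, trials):
    """Binomial tail: chance of at least `correct` hits out of `trials`
    when each trial is a 50/50 guess."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 13 of 16 correct happens by luck only ~1% of the time, so the
# listener probably really can tell the two IEMs apart.
print(f"{p_value_guessing(13, 16):.3f}")
```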


On 8/15/2023 at 6:25 PM, ValelaV said:

I would love to see the Lab do some statistical evaluation of the data they obtain. Saying that everything is within the margin of error becomes much more credible when you actually know and show the errors... A published methodology for all tests would also make things more transparent for the audience.

Would love to see that! You could even see when something really is significantly off, and stuff like that. You could compare the methodologies of different tests. The opportunities are endless 🙂

It would also give you more scientific credibility, which is nice for the "lab" you are...

Statistical evaluation requires having multiple units of each product; otherwise the one they're testing may just be an anomaly. Unless manufacturers are going to send multiple units over (and we can't be assured those aren't cherry-picked), what you're saying is useless!


On 8/15/2023 at 11:05 AM, terozzz said:

How about shutting the Labs down for good, if you're too lazy to do the tests properly and basically lie?

That would be great...

 

Stop listening to bullshit and make up your own mind. I don't see better testing anywhere else on the internet. Lemming!


4 hours ago, jasonwalls said:

Stop listening to bullshit and make up your own mind. I don't see better testing anywhere else on the internet. Lemming!

I see it, and so do many, many other people. These claims were NOT BS. As an audio engineer, I see a lot of room to improve the testing methods, and especially as a journalist, I see a huge amount of room to grow on the journalistic side of LMG. Since they claim to be doing serious journalistic testing (along with the goofing around, which is why I LOVE Linus), it needs to be fact-checked. Again and again.

But Linus and co. did make a video and ask for forgiveness. And we do forgive, as they seem to be straight about it.


6 hours ago, jasonwalls said:

Statistical evaluation requires having multiple units of each product; otherwise the one they're testing may just be an anomaly. Unless manufacturers are going to send multiple units over (and we can't be assured those aren't cherry-picked), what you're saying is useless!

Valid point, but then you should never take your results seriously down to the last FPS... like saying that a 3-5 FPS difference would somehow be significant, which it most likely isn't!


On 6/28/2023 at 2:54 PM, LMGcommunity said:

A Labs site is in the works!

Ok, this Labs site and its ability to make comparisons/recommendations is something that I think could be an absolute game changer, but I can't shake the vision that it could be more than a 1-to-1 comparison site. When I'm looking to build a new gaming PC (or, ugh, a productivity workstation), I want to be able to go to a site similar to PCPartPicker, spec out my machine, and then... get my expected FPS for some games I would typically play.

 

The issue is that this would require the Labs team to test literally every single combination of CPU, GPU, RAM, motherboard, SSD, etc. to get perfect results. Enter two things: Machine Learning and Design of Experiments.

 

If LTT Labs had a database with a large set of data across a variety of benchmark setups, Machine Learning could be used to predict the FPS of similar systems even if that exact combination of components had never been tested together. For instance, say a test setup had all the components of interest to the user browsing the site except the RAM (Corsair), but that RAM (Corsair) had been tested against the RAM in the user's desired setup (G.Skill) on another, similar test bench; the model could then quite accurately predict the performance difference the swap would make (Corsair vs. G.Skill).

 

Now I hear you saying, "But wait! What if it has some weird interaction with the hardware?" This is a valid criticism, but handling it is also one of the huge powers of Machine Learning. When the model that does the predicting is developed, some data is held out from training to validate the model's predictive accuracy. So in the above example, you would have the exact FPS data for the user's desired system, but the model would not be trained on it; the model would predict it after training, and the model's accuracy would be evaluated against the true result. In this way, when the comparison site gives FPS numbers, you can make statements like "This exact system has not been tested, but predicted FPS numbers are typically accurate to within 4%," or whatever the number ends up being. Very cool stuff, but from a Machine Learning perspective also very well established.
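To illustrate, here's a minimal sketch of that workflow; every component name and FPS value below is made up, and a real model would need a far larger database:

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical benchmark database: one row per tested system.
df = pd.DataFrame({
    "cpu": ["13900K", "13900K", "7800X3D", "7800X3D", "13600K", "13600K"],
    "gpu": ["RTX 4090", "RX 7900 XTX", "RTX 4090", "RTX 4080", "RTX 4080", "RX 7900 XTX"],
    "ram": ["Corsair 6000", "G.Skill 6000", "G.Skill 6000", "Corsair 6000", "Corsair 6000", "G.Skill 6000"],
    "fps": [138, 120, 142, 118, 110, 104],
})

features = ["cpu", "gpu", "ram"]
model = make_pipeline(
    make_column_transformer((OneHotEncoder(handle_unknown="ignore"), features)),
    RandomForestRegressor(n_estimators=300, random_state=0),
)

# Each row is a complete system, so a row-level split holds out whole
# component combinations the model has never seen together.
train, test = train_test_split(df, test_size=0.33, random_state=0)
model.fit(train[features], train["fps"])
predicted = model.predict(test[features])
error = mean_absolute_percentage_error(test["fps"], predicted)
print(f"typical prediction error on unseen systems: {100 * error:.1f}%")
```

That last number is exactly the "accurate to within X%" figure the site could publish next to every prediction.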

 

On to Design of Experiments. In essence, this is an approach to exploring the variable space (the various components) efficiently. Labs does not have infinite time or money, and gathering the data needed for the Machine Learning approach above is a geometrically expanding problem as more components are released. Designed experiments allow a scientific, statistical approach to running the fewest tests that extract the most information about the set of variables (components). They would allow an automated strategy for choosing which test setups to run to feed the Machine Learning model the most useful information, and could even suggest which setups would most efficiently "fill in the gaps" in an existing dataset.
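A toy sketch of the space-filling flavour of this idea (component names again hypothetical): from all candidate builds, greedily pick the ones that are maximally spread out, so each new test adds the most new information:

```python
from itertools import product

cpus = ["13900K", "13600K", "7800X3D", "7700X"]
gpus = ["RTX 4090", "RTX 4080", "RX 7900 XTX"]
rams = ["DDR5-5200", "DDR5-6000", "DDR5-8000"]

candidates = list(product(cpus, gpus, rams))  # 36 possible builds

def distance(a, b):
    # Hamming distance: how many components differ between two builds.
    return sum(x != y for x, y in zip(a, b))

def maximin_design(candidates, n_runs):
    """Greedy space-filling design: each pick maximizes the distance
    to its nearest already-chosen build."""
    chosen = [candidates[0]]
    while len(chosen) < n_runs:
        best = max(candidates, key=lambda c: min(distance(c, s) for s in chosen))
        chosen.append(best)
    return chosen

for build in maximin_design(candidates, 8):  # test 8 builds instead of all 36
    print(build)
```

The same greedy step also answers the "fill in the gaps" question: seed `chosen` with the systems already tested and let it propose what to test next.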

 

I've been a research scientist for all of my career and transitioned to Data Science a while ago. I have used Design of Experiments for problems very similar to this, and I do Machine Learning full time. I would be happy to give more details, answer questions, or, if the Labs data will be fully open source, just do some of this work myself. Feel free to message me.

 

 


I signed up to make a comment regarding this, and I'm posting here as it pertains to Labs. I just want some consolidated product recommendations: "Labs recommends".

 


10 hours ago, jasonwalls said:

Statistical evaluation requires having multiple units of each product; otherwise the one they're testing may just be an anomaly. Unless manufacturers are going to send multiple units over (and we can't be assured those aren't cherry-picked), what you're saying is useless!

 

4 hours ago, ValelaV said:

Valid point, but then you should never take your results seriously down to the last FPS... like saying that a 3-5 FPS difference would somehow be significant, which it most likely isn't!

Your point is wrong, when I think about it. They can give an error on their data; they don't need other products. Those are two different things! Of course no one component will always perform the same as the next, but that doesn't take away from a properly analyzed statistical test of the device you have at hand!

Statistical evaluation is pretty amazing; most people with a university lab course will be able to perform it. So it really can be done! It doesn't save them from a bad device, but it shows in what range their testing can be taken seriously.

Maybe you misunderstood what I meant... I want clear measurements, scientific measurements with clear statistical error evaluation.


I would be focusing on getting accredited first. If you want to run a legit laboratory, do what others have done: get ISO 9001 certified and ISO 17025 accredited. All of your test equipment should be periodically calibrated by an ISO 17025 laboratory as well. And you should have a quality manager who has experience with ISO certification.


On 8/17/2023 at 5:13 AM, ValelaV said:

 

Your point is wrong, when I think about it. They can give an error on their data; they don't need other products. Those are two different things! Of course no one component will always perform the same as the next, but that doesn't take away from a properly analyzed statistical test of the device you have at hand!

Statistical evaluation is pretty amazing; most people with a university lab course will be able to perform it. So it really can be done! It doesn't save them from a bad device, but it shows in what range their testing can be taken seriously.

Maybe you misunderstood what I meant... I want clear measurements, scientific measurements with clear statistical error evaluation.

 

Yeah, I'm no longer sure what you're talking about. My response was in relation to your comment that they should do statistical evaluations of the data they obtain. You seem to be saying you want scientific results. However, testing only one device limits the extent to which you can provide statistically robust conclusions and accurately quantify errors. A sample of one device is not going to provide that. What LTT provides is context, and that in itself is valuable. You don't seem to understand statistics.

 


Suggestion: Create a Reference Guide for Review Metrics

Hello, I know that maybe this post should be in the other huge thread that is full of angry people, but I think this is more a suggestion/request for the Labs team. This is really just a long example about GPU testing, because that's what I'm thinking about now, but it can be applied more broadly (I think).

In GPU reviews, testing used to be simpler: just FPS in some games, thermals, and power draw. Then we got more advanced metrics like noise levels, min/max/average FPS, 1% lows, and frame times, and more will surely come to help us better understand and evaluate a product. For example, I saw a recent GN video explaining a new Intel tool to measure "GPU busy" and driver overhead. They spent ~30 minutes just on what the metric means (I'm sure they could have made a shorter version with less chatting in between slides).

I understand LTT has to target a broad audience, so hard concepts/metrics often get simplified, omitted, or relegated to written form.

But even current metrics like 1% lows aren't that intuitive. You can explain it as the lowest 1% of FPS values, but that's hand-wavy. The lowest 1% of FPS values are... what? A range of values expressed as a single value? Well, kind of; you get what I mean, higher is better, let's just move on.

 

The underlying concepts of cumulative distribution functions and probability density functions are not easy. Sure, in an academic context this is offensively simple, but I believe these concepts are not well understood elsewhere. I think that if I went point blank (very mean of me, I know) and asked reviewers online to explain, without hand-waving, what a percentile of a probability distribution is, I'd get many an "uh... ah...".

I wouldn't fault many of them (it depends on what follows the "ah...") because, while it is quickly defined in a math context (a simple inequality on the inverse CDF, child's play!), each of those words carries a great amount of meaning, and, as is often the case, an easy math concept is just an elegant and concise way to express a deep and complex idea.

This 1% low is just the best example I have to pitch my request; I hope I've shown what I mean by "not an easy concept".
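To make the example concrete, here is a small sketch (with synthetic frame times) of two common ways a "1% low" actually gets computed; pinning down which one a chart uses is exactly the kind of thing such a guide could do:

```python
import numpy as np

# Synthetic frame-time log: ~8.3 ms median (~120 FPS) with some variance.
frame_times_ms = np.random.default_rng(0).lognormal(np.log(8.3), 0.15, 10_000)
fps_per_frame = 1000.0 / frame_times_ms

# Definition A: the 1st percentile of the per-frame FPS distribution.
p1_low = np.percentile(fps_per_frame, 1)

# Definition B: the average FPS over the slowest 1% of frames.
slowest = np.sort(frame_times_ms)[-len(frame_times_ms) // 100:]
slowest_low = 1000.0 / slowest.mean()

print(f"average FPS:           {fps_per_frame.mean():.1f}")
print(f"1% low (definition A): {p1_low:.1f}")
print(f"1% low (definition B): {slowest_low:.1f}")
```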

Here is the suggestion/request:

I'd like you to make a reference "guide", somewhat similar to the definitive guide to building a PC, covering every metric you wish to publish in "enough" detail. The goal is to educate us viewers so we better understand these concepts instead of relying on vague explanations, to give us a good reference to be pointed to or to consult when needed, and to make clear what your numbers actually mean. With this reference you could start publishing even more complex metrics (like the GPU-busy example above), since people would understand them. It would raise the quality of discussion even more.

We are already way past "it goes faster, so it is just better" with most of what you review, and most of us (I think) adapted a long time ago to the idea that many metrics are needed to understand a GPU before buying one. It is already a higher quality of discussion than what can be found in other fields.

 

Adding to this, if you could show us how you found the errors you will be correcting in older videos, it may lead to better feedback from viewers, as we would know better what to look for and have a better way of looking at the data you publish instead of just staring at it (sure, my fault, I sometimes do stare). Even reading a chart is not exactly easy; it is not just a bunch of lines thrown into a rectangle, otherwise there would be no need for someone to comment on them.
Maybe a better understanding of the various metrics would also lead to better feedback from viewers when errors crop up.

 

I wish you the best of luck with all your endeavors. I believe this team will keep improving LMG's content regardless. Even though it may not pay your wages, I appreciate what you've done over these many years. I hope you get back on track as soon as possible, as I believe you strive for the goal Gary once stated: to "not be questioned" about Labs data.


Can we have tuning videos? PC tuning: invite motherboard XOC teams and have them share the best tweaks in general; teams from ASUS, AMD, Intel, you name it. DDR5-8000, 6 GHz, undervolting.

 

Prove to guerilla nexus that their FUD about sponsored content is just jealousy... show the world you can have collaboration and co-promotion that's beneficial to all parties.


I was wondering whether Labs is into hardware research.

One thing I think they could do, with some electrical engineering background, is look into the SD card failure issue on the ASUS ROG Ally.

While it might be caused by heat, it isn't the cards that are dying but something in the device.

Maybe the controller isn't working correctly, or something like that.

I have now seen a few people whose cards died but work fine in another machine or card reader, while different cards work in the Ally.

I think it would be cool if they could do a failure analysis piece on it.


Starfield: test CPU and RAM scaling.

13900K

13600K

from DDR5-5600 / 6000 / 6200 / 7000 / 7600 / 8000

7800X3D

7700X

from DDR5-5200 / 6000 / 6200 / 7000 / 7600 / 8000


I'm not sure if this has already been posted, but has Labs considered testing uninterruptible power supplies? I've been looking to buy one to hook up to my NAS, and I find that I pretty much have to just trust the manufacturer's stated specifications. Would Labs be able to get the equipment to test, for example, whether these "true sine wave" UPS units really output a proper sine wave? Or whether their advertised runtimes are accurate?
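Verifying the "true sine wave" claim seems quite tractable: capture the UPS output with a scope (or an isolated probe) and compute total harmonic distortion. A rough sketch of the math, assuming you already have the voltage samples:

```python
import numpy as np

def thd_percent(samples, sample_rate, fundamental_hz=60.0, n_harmonics=10):
    """Estimate total harmonic distortion from a captured waveform."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    def peak(f):  # magnitude of the bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    harmonics = [peak(fundamental_hz * k) for k in range(2, n_harmonics + 2)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / peak(fundamental_hz)

t = np.linspace(0, 1, 48_000, endpoint=False)
print(thd_percent(np.sin(2 * np.pi * 60 * t), 48_000))           # clean sine: ~0%
print(thd_percent(np.sign(np.sin(2 * np.pi * 60 * t)), 48_000))  # square-ish "modified sine": huge
```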


I recently ran into a lot of problems with Linux running on my laptop, and I wonder if LTT Labs could just do a quick test of whether stock Linux (maybe Ubuntu) runs and all features still work (like the GPU, touch, and so on)?

 


With the keyboard and mouse tester operational:

Test how the feel of a mechanical keyboard switch or mouse button changes with wear.

 

I noticed that Cherry MX Blues lose their tactility within a few years of day-to-day use. I once had my hands on an old (1990) Cherry G80 with MX Blacks, and they felt different compared to new/unused switches.

People never go out of business.


5 hours ago, SmileyTheReal said:

I recently ran into a lot of problems with Linux running on my laptop, and I wonder if LTT Labs could just do a quick test of whether stock Linux (maybe Ubuntu) runs and all features still work (like the GPU, touch, and so on)?

 

This would be hard for them to test, as it really depends on the specific combination of hardware configuration and software.

On my AWOW mini PC, Linux Mint runs great.

But if I try to run Proxmox on it, the network is very unstable, while both run on one of my laptops without issues.

