Nystemy

Member
  • Content Count

    145
  • Joined

  • Last visited

Awards


This user doesn't have any awards

About Nystemy

  • Title
    Member
  • Birthday September 5

Profile Information

  • Location
    Sweden, just north of most of Europe.
  • Interests
    Tech but primarily electronics.

System

  • CPU
    Xeon E5-1650v3
  • Motherboard
    AsRock X99 WS-E
  • RAM
    Crucial 4x 16GB DDR4 2666 MHz (Might get 4 more if I run out of RAM again...)
  • GPU
    Gigabyte GTX 660 x 2 (not running SLI, I just have too many screens; thinking of upgrading, anyone got a pair of GTX 960s?)
  • Case
    Some old Cooler Master case that I have no clue how I even got...
  • Storage
    The usual mix. 512 GB SSD + a handful of TBs on HDDs
  • PSU
    Corsair CX750M
  • Display(s)
    Samsung SyncMaster 2494HS + SyncMaster 223BW + Envision (19" 4:3 aspect ratio) + Cintiq 13HD for the arts....
  • Cooling
    Noctua NH-U9S (Larger wouldn't fit in the case...)
  • Keyboard
    Logitech G110 Nordic version
  • Mouse
    Logitech M500

Recent Profile Visitors

613 profile views
  1. An SSD like this would make sense for density-oriented applications. But competing with hard drives is hard, since the main selling point of an HDD is that it is cheap per GB and can be accessed within a few ms. Tape drives, on the other hand, are the true "low cost, high density" format. 100 TB of LTO tape is a couple hundred USD, and potentially smaller than this 100 TB SSD. The downside is their access time, which can range from many minutes to hours. (Just reading out 1 track from the tape takes over half an hour...) Though, for 40 grand, I suspect one could just use M.2 drives and get reasonably close in terms of density, especially when we consider price per GB. (A rough back-of-the-envelope comparison is sketched below.)
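A minimal back-of-the-envelope sketch of that price-per-GB comparison. The tape and 100 TB SSD figures are the ones mentioned above; the HDD and consumer M.2 prices are my own rough assumptions:

```python
# Rough cost-per-TB comparison. The LTO and 100 TB SSD figures come from the
# post above; the HDD and M.2 prices are assumed ballpark figures.
options = {
    "100 TB enterprise SSD": 40_000 / 100,    # ~40 grand for 100 TB
    "LTO tape (media only)": 300 / 100,       # a couple hundred USD per ~100 TB of cartridges
    "HDD (assumed ~18 TB drive)": 350 / 18,   # assumption: ~350 USD per 18 TB drive
    "Consumer M.2 NVMe (assumed)": 100 / 1,   # assumption: ~100 USD per TB
}

for name, usd_per_tb in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:<30} ~{usd_per_tb:7.1f} USD/TB")
```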
  2. "miniLED full array local dimming" is sure a slew of marketing terms that mostly leads to a bloomy mess... Local dimming I can get behind, but the "full array" part makes it sound like each pixel has its own light source. Though, "local dimming" could also be done with edge lighting, but then just call it "2D local dimming" instead... (And I don't think LTT is "wrong" in using the industry term, I think the whole display industry has just picked a stupid term...) Other than that, this seems like a competent laptop to be fair. A bit on the expensive side, separate mic and headphone jacks is a very nice addition, as well the rather large abundance of ports in general. Though, the SD card slot should have been full sized... And storage internal could have been more nicely executed. (Like why do we need to take the computer to bits just to add more storage?!...)
  3. Building a case out of cardboard is an interesting idea for a challenge. Though, 3 hours isn't a lot of time to be fair... The decision to forbid tape is, though, something that I consider novel and honestly something that would force one to be a bit more creative. Overall, I would say that Linus and Jake had the better idea of how to make the case. Now the question is: if given more time, and an aluminium sheet or two (and the ability to source other materials like mesh/glass), would we be able to see a competent case be made? Preferably with enough time for it to not be a rush, but rather a full-on design challenge, where each side has time to first of all plan, and then also build. Maybe fly in Steve, to make it a GN vs LTT themed challenge. (The design phase of the challenge could be done before flying them in, to lower hotel costs, per se. (Though I guess Steve would just take the opportunity to go to Whistler again regardless... So maybe worth doing next year.)) edit: Now I am also curious how the winning case here would perform if Steve did a thermal test on it... It's a cardboard origami case, so it shouldn't be too hard to ship, would it?
  4. Well, with a VM you license the Windows installation to the VM. The OS doesn't actually see the real hardware, so as far as Windows is concerned, it always runs in the same environment, despite it likely starting on a new set of hardware each time it "boots". So yes, you will need 1 license per subscribed user. But a Windows license can be reused if the user terminates their subscription, since we can just "factory reset" that VM and hand it to the next user in line. And yes, a server running 3 VMs will need to have 3 licenses in this case. Though, the license is for the VM, not the actual hardware, so all the currently inactive VMs out in storage will also have licenses. (One can think of a VM a bit like a laptop: we can put it on a shelf and not use it, or drag it out onto our desk and use it, but in this case our desk is a server in a rack, and our shelf is a storage array somewhere else in our data center.) But the cost of the Windows license isn't a major deal to fiddle with to be fair, and Shadow can likely get some bulk deal from Microsoft, or another B2B cost structure, to make this largely a non-issue to a degree. And to clear it up a bit, it seems to be a Gigabyte MU71-SU0 motherboard, so it is running a Xeon W something or other. And the board does have the frankly bizarre setup where 2 of the channels have 2 DIMM slots while the other 4 channels have only 1...
  5. I don't know the exact CPU/GPU they use. But both Intel and AMD do offer rather high core clock speed server CPUs, though I would suspect it being an EPYC due to the board having 8 DIMM slots. (EPYC has 8 memory channels, Intel's LGA3647 only has 6. So it would be weird for the board to have 2 slots for only 2 of the channels; I would expect 6 or 12 slots, not 8. Unless they use those fancy storage DIMMs, but Linus explicitly mentioned both 64 GB and that there were 8 GB modules, and that the server doesn't have any storage...) But in regards to hosting the actual user data/settings: a simple method of doing this is to just make a virtual machine, store that whole machine on a storage array and simply tell a node to fire it up when the user wants to log in. The storage for the user data and the whole virtual machine is just somewhere else in the data center, linked to the computing node typically by an FC HBA card, or using iSCSI over the normal network. In short, we just run a Windows license on a VM, and run that VM on the "nearest" available hardware. As long as all computing nodes are "identical", Windows won't have a clue. And if a user decides to terminate their account, one simply wipes the data, does a factory reset of the OS, and then carries it on to the next user waiting in line. (Of course one will have a whole bunch of VMs set up ahead of time so that new users can just join.) Also, streaming inputs over a network interface is actually a bit less trivial than it might seem at first. Typically one would use a UDP network link, so that one isn't chugging due to prior packets being "slower"/lost. And then typically resend each command in a few consecutive packets, in case a UDP packet gets lost. (UDP, unlike TCP, won't retry sending lost packets, not that it even knows that the packet got lost, nor does it preserve packet order, or do handshakes at all... UDP is practically just chucking packets out a window and hoping someone is there to catch them.) So to get it to just "work" flawlessly is actually a bit impressive, since a lot of developers implement things rather poorly in this regard... (A rough sketch of that redundant-send idea is below.)
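A minimal sketch of that redundant UDP send, assuming a hypothetical input-command format (the host, port and command fields are made up for illustration); each command gets a sequence number and rides along in several consecutive datagrams, so the receiver can drop duplicates and tolerate a lost packet:

```python
import json
import socket

REDUNDANCY = 3  # assumption: each command is repeated in the next few datagrams


class InputSender:
    """Sends input commands over UDP, repeating recent ones to mask packet loss."""

    def __init__(self, host: str, port: int):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.addr = (host, port)
        self.seq = 0
        self.recent = []  # last few (seq, command) entries, resent with every datagram

    def send(self, command: dict) -> None:
        self.seq += 1
        self.recent.append({"seq": self.seq, "cmd": command})
        self.recent = self.recent[-REDUNDANCY:]
        # Every datagram carries the last few commands; the receiver tracks the
        # highest sequence number it has already applied and ignores the rest.
        self.sock.sendto(json.dumps(self.recent).encode(), self.addr)


# Usage sketch (hypothetical host/port):
# sender = InputSender("gamestream.example", 5000)
# sender.send({"type": "key", "code": "W", "down": True})
```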
  6. The 12V power input is something that has for a long time boggled my mind... Like, this is done all over the place. Adding a decent buck converter on the power distribution board could make it able to run on a more reasonable 20-40 volts, making any voltage drops between the actual power supply and the system components less problematic. And such a buck converter isn't really something that is going to take up lots of room for that matter. (And this server has plenty of room, so that isn't an issue. And power efficiency wise, the overall system could actually get more efficient, not to mention cost effective, due to not needing as much copper for carrying the current; see the rough numbers below.) Otherwise the system seems rather decent, but I do wonder if a regular blade center would have been a nicer solution, due to including everything for both networking and storage in one box. I know that HP's blade centers even have support for off-the-shelf graphics cards. (Though ones meant to run in a server are still recommended due to how the cooling works.) The downside with blade centers is their higher cost. (Since blade centers are built for density, not really price to performance.) How it compares to normal servers is also a curious question, though the method they have used to put this together is somewhat nice. Been looking at one of those Gigabyte boards myself, though for workstation use instead...
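A rough illustration of why a higher distribution voltage helps, assuming a purely resistive path and the same delivered power; the 1 kW load and 5 mΩ path resistance are made-up example numbers, not measurements from the video:

```python
# Conduction loss in the distribution path scales with I^2 * R, so for the same
# delivered power a higher bus voltage means less current and far less loss in
# the same copper. All numbers below are illustrative assumptions.
power_w = 1000.0          # assumed load
path_resistance = 0.005   # assumed 5 mOhm of cabling/connectors/backplane

for bus_voltage in (12.0, 24.0, 36.0):
    current = power_w / bus_voltage
    loss = current ** 2 * path_resistance
    print(f"{bus_voltage:4.0f} V bus: {current:5.1f} A, ~{loss:5.1f} W lost in the path")
```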
  7. 10 µm isn't a lot of thickness to work with, especially considering that the only gaps it can fill are in proportion to how much one can compress it. So in reality, it might not fill many gaps at all... Though, it could be interesting to see it used on heatsink fins themselves, since neither aluminium nor copper is all that great at carrying heat over a large distance if they are just a thin sheet (a rough estimate of that is below). But industrial solutions can be rather interesting at times, though usually a bit "inappropriate" for areas that they weren't developed for. But it was worth a shot. Though, I have been curious what performance one would get if one properly lapped a CPU and heatsink and then wrung them together as if they were a pair of gauge blocks. (Lapping for this to work requires exceptionally small surface variations, and also exceptional flatness over the surfaces. (Hand lapping tends to make things slightly convex.))
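A rough 1D conduction estimate of why a thin sheet struggles to carry heat any distance; the sheet dimensions below are assumed example values, not anything from the video:

```python
# Thermal resistance along a thin fin/sheet: R = L / (k * A), where A is the
# sheet's cross-section (thickness * width). All dimensions are assumptions.
k_copper = 400.0      # W/(m*K), approximate
k_aluminium = 237.0   # W/(m*K), approximate

thickness = 0.0002    # 0.2 mm sheet
width = 0.03          # 30 mm wide
length = 0.02         # heat has to travel 20 mm along the sheet
area = thickness * width

for name, k in (("copper", k_copper), ("aluminium", k_aluminium)):
    r = length / (k * area)
    print(f"{name:9}: ~{r:5.1f} K/W along the sheet, i.e. ~{10 * r:4.0f} K rise at 10 W")
```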
  8. Ah, that mention of "fun" at the end made me think of Dwarf Fortress. Since in that game, one has to be careful, otherwise one might risk having "fun". (Sometimes due to one's own mistake, like flooding the whole fortress with water when figuring out how to run a pump from an endless power source... or digging into the caverns below and getting invaded by forgotten beasts.) Though, sanding down a chip is an interesting thing to do, and does have some rather major performance improvements to be fair.
  9. There is a scope of applications where the Raspberry Pi, and its cameras as well, are often deployed, and that is industry. It's a small, relatively "high performance", network-equipped computer with a fair assortment of GPIO, and it has support for a camera too. It's like the dream Programmable Logic Controller (PLC): give it a decent case and an expansion board and it suddenly competes with PLCs that can cost into the hundreds of dollars, and those typically still don't have networking as standard, or even 100 MB worth of RAM, or SD card support... The downside with the Raspberry Pi compared to a PLC is that a Raspberry Pi doesn't come in a case, it doesn't "just work" straight out of the box, nor does it use 12-48 volt power... It requires a lot more work to get it up and running. (Though most of that work is software wrangling, something most industry technicians honestly don't want to deal with; a minimal sketch of that PLC-style loop is below.) (Though, one can just buy a kit that includes nearly everything one needs to turn an RPi into a proper PLC, but still, it's extra hassle and it lacks the name brand, familiarity and, most importantly, the certifications of a PLC provided by an actual PLC vendor.)
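As a sketch of that PLC-style use case, a minimal polling loop on a Pi might look something like this; it assumes the RPi.GPIO library and made-up pin assignments, so treat it as an illustration rather than a drop-in setup:

```python
import time

import RPi.GPIO as GPIO  # assumption: running on Raspberry Pi OS with RPi.GPIO available

SENSOR_PIN = 17  # hypothetical input, e.g. a proximity switch
RELAY_PIN = 27   # hypothetical output, e.g. a relay driving a contactor

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    # A PLC-like scan cycle: read inputs, evaluate logic, write outputs, repeat.
    while True:
        sensor_active = GPIO.input(SENSOR_PIN) == GPIO.LOW  # active-low switch
        GPIO.output(RELAY_PIN, GPIO.HIGH if sensor_active else GPIO.LOW)
        time.sleep(0.01)  # ~10 ms scan time
finally:
    GPIO.cleanup()
```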
  10. This is impressive, and honestly makes one wonder what people with actual computing resources could do. Using a multi-GPU setup might "hurt" performance, but the additional memory might hand back quality. I have not tried it myself, so I don't know. Reminds me of the video Tom Scott did 2 years ago about future shock and deepfakes back then.
  11. No worries. I corrected mine as well.
  12. Yes, it's a rarely used way to describe temperature. And honestly, it's really hard to measure such small temperature differences in practice, since thermal resistance, mass and gradients become a major factor in all measurements. And even the sense current used for making the measurement can greatly skew the answer by a few mC. Just think of the trouble of measuring µC or even nC! Or all the people thinking one is talking about coulombs. (1 coulomb per volt is 1 farad, and has nothing to do with temperature btw.)
  13. The main advantage of trimming before the ADC is that we can get better resolution in our measurement, since we don't need to waste ADC resolution on trimming. Now, if one also were to toss in an op-amp to do both amplification and a bit of level shifting, then they could get a lot more resolution out of the thermistors. Considering that the ADC used was 12-bit, and the temperature range they measured was around 25-40 C (a 15 C delta), then theoretically the max resolution should be 3.66 mC (0.00366 C per step; the arithmetic is below). Though, this is "optimal", noise would limit us before that, and they didn't go to this overkill degree anyway.
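The resolution arithmetic from that post as a quick sanity check; the 15 C span and 12-bit ADC are the figures mentioned above, and the assumption is that the analog front end is trimmed so the ADC's full range maps onto just that span:

```python
# If the front end is trimmed/scaled so the ADC's full range covers only the
# temperature span of interest, each ADC step covers span / 2**bits.
span_c = 40.0 - 25.0   # 15 C of measured range (figures from the post above)
adc_bits = 12

step_c = span_c / (2 ** adc_bits)
print(f"{step_c * 1000:.2f} mC per ADC step")  # ~3.66 mC
```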
  14. @ColinLTT Last time I honestly put forth examples indicating that stacking radiators like this isn't optimal, and that using some simple baffles and giving each rad its own airflow is a lot more efficient. And there is ample room for it even in a rather slim 2U server... Still not tested though. Not that it's hard: just use some cardboard, some duct tape... and prove it for yourself. Also, it would be nice to get the test data from the video, or at least a given row of data points and the ambient temperature during that test. From there, one could calculate a rough "efficiency" figure for the radiators at the given airflow (a sketch of what I mean by that efficiency is below). I would also have to say that you can buy I2C temperature measuring chips that have a resolution of 0.05 degrees C or better, so NTC thermistors are kinda old school to be fair. (Absolutely nothing wrong with them though.) Edit: Honestly, we could calculate the radiator efficiency by only knowing the inlet and outlet temperatures of the first two radiators. We don't even need to know the ambient air temperature, as long as we can make the assumption that all radiators have the same efficiency, and the same air/water flow rate through them as well.
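For reference, one common way to quantify that "efficiency" is heat-exchanger effectiveness: how far the water temperature actually drops compared to the maximum it could drop (down to the incoming air temperature). The temperatures below are made-up example numbers, not data from the video:

```python
def radiator_effectiveness(water_in_c: float, water_out_c: float, air_in_c: float) -> float:
    """Fraction of the theoretical maximum water temperature drop actually achieved.

    1.0 would mean the water leaves at the temperature of the incoming air.
    """
    return (water_in_c - water_out_c) / (water_in_c - air_in_c)


# Example with assumed numbers: rad 2 sees air pre-heated by rad 1, so even with
# identical hardware and similar effectiveness it removes less heat.
print(radiator_effectiveness(water_in_c=45.0, water_out_c=38.0, air_in_c=25.0))  # rad 1: 0.35
print(radiator_effectiveness(water_in_c=45.0, water_out_c=40.0, air_in_c=31.0))  # rad 2: ~0.36
```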
  15. Honestly, having two screens on a laptop like this is a fairly good compromise to be fair. One gets to have a keyboard, a touchpad and also two screens, without those weird folding screen designs some manufacturers have showcased... That the second screen tilts up also seems like a very useful feature, since better viewing angles are truly an advantage, and it might also be easier to interact with. Also, time for a "two games at once" tournament at LMG, to see who is the real champion of this frankly odd combination. (Channel Super Fun maybe?)