Nystemy

Member
  • Content Count

    33
  • Joined

  • Last visited

Awards


This user doesn't have any awards

About Nystemy

  • Title
    Member
  • Birthday September 5

Profile Information

  • Location
    Sweden
  • Interests
    Tech but primarily electronics.

System

  • CPU
i5-3350P (To be updated, but the question is to what...)
  • Motherboard
    AsRock Z77 Pro4
  • RAM
    16 GB Kingston + 16 GB TeamGroup, since mixing is perfect....
  • GPU
Gigabyte GTX 660 x 2 (not running SLI, I just have too many screens)
  • Case
Some old Cooler Master case that I have no clue how I even got...
  • Storage
    The usual mix. 120 GB SSD + a handful of TB on HDD
  • PSU
    I think it is a Corsair, long time since I built it...
  • Display(s)
    Samsung SyncMaster 2494HS + SyncMaster 223BW + Envision (19" 4:3 aspect ratio) + Cintiq 13HD for the arts....
  • Cooling
Stock as far as the eye can see, and noisy fans for the ears to "enjoy"....
  • Keyboard
    Logitech G110 Nordic version
  • Mouse
    Logitech M500

Recent Profile Visitors

186 profile views

  1. Nystemy

    The WEIRDEST PC Parts we Found on AliExpress

Alex pulling out the PCI oscilloscope didn't surprise me. Though, @AlexTheGreatish, you should have set up a computer beforehand with the OS, drivers, and software to run the card. But in this fairly lengthy post, I can hopefully provide the resources needed to actually get it running.

In terms of LabVIEW, it first off isn't needed to run the oscilloscope card, and LabVIEW has a yearly license fee of only $399 (or 3 grand for the full version). Honestly, for such a one-off thing, I wouldn't be surprised if National Instruments would sponsor such a license for LMG. (Practically free publicity, although maybe not aimed directly at their main market...) Though as stated, the manufacturer doesn't say that it needs LabVIEW.

Here is a PDF from Keysight (formerly known as Agilent, which Acqiris was part of when they released this product): http://literature.cdn.keysight.com/litweb/pdf/5989-7121EN.pdf

The software suite for the card can be found as a downloadable CD from Keysight at: https://www.keysight.com/main/software.jspx?cc=SE&lc=eng&ckey=1325010&nid=-34655.729904.02&id=1325010&cmpid=zzfindacqiris-software-cd-rom (I haven't tried it myself, so I could be totally off on this judgement...) This should include some basic run-of-the-mill software to use the card as if it were a crude standalone oscilloscope. (And don't blow it up with the 50 ohm mode; that is a costly beginner mistake with this type of test equipment. The input limits are 100 V (DC + peak AC < 10 kHz) at 1 Mohm, and ±5 V DC (2 W) or 0.5 W RMS at 50 ohm.)

Its 150-250 MHz bandwidth, 0.5-1 gigasamples per second, and 8 bit resolution make it fairly decent. That, together with 128K to 8M samples of memory (depending on how it was optioned up from the factory), made it a fair deal for 700 bucks back in the day. And it is surely one of the more obscure "PC building parts" out there, compared to Linus' slew of adapters that in the end didn't work. (Though, wireless PCIe sounds interesting...)

In terms of PCI-based test equipment, one could have gone far deeper down the rabbit hole with a PXI card: it is practically PCI (PCIe in the case of PXIe) with a different connector, and it supplies more power. (It is used for automated test gear in factories and calibration labs.) Then there are also PC104 cards. (The modern ones use the PCI protocol, though with their own fancy connector... But one can buy a whole PC104 computer setup for a few hundred bucks over at eBay. It is like an overgrown Raspberry Pi aimed at industry.) Otherwise, for stuff that can be plugged into a regular computer, one could always go with one of Xilinx's FPGA development boards. (Though, the software there very much follows the phrase "do it yourself", so knowing a hardware description language is somewhat needed...)
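To put that 50 ohm limit in perspective, here is a quick back-of-the-envelope check. (A minimal sketch; only the 0.5 W and 50 ohm figures come from the datasheet quote above, the rest is just Ohm's law.)

```python
import math

# Input limits quoted from the datasheet figures above.
R_INPUT = 50.0  # ohm: input impedance in 50 ohm mode
P_MAX = 0.5     # W: maximum continuous RMS power into the 50 ohm input

# P = V^2 / R  =>  V_rms(max) = sqrt(P * R)
v_rms_max = math.sqrt(P_MAX * R_INPUT)  # = 5.0 V RMS
v_peak_sine = v_rms_max * math.sqrt(2)  # ~7.07 V peak for a sine wave

print(f"Max safe input in 50 ohm mode: {v_rms_max:.1f} V RMS "
      f"(~{v_peak_sine:.2f} V peak sine)")
# A 12 V rail poked into this input would dissipate 12**2 / 50 = 2.88 W,
# several times the rating -- hence the "costly beginner mistake".
```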
  2. Nystemy

    Thread for Linus Tech Tips Video Suggestions

A video in regards to the RED camera's storage solution: what exactly is inside one of RED's memory cards? How easy would it be to make a redundant storage solution? Having seen the camera teardown and reassembly, it seems like the memory card holder is a module in and of itself, so that made me wonder: how hard would it have been for RED to make a two-slot version with redundant storage? Or maybe they have solved that by simply having their memory cards be redundant internally, since that wouldn't be hard after all. If the camera detects that one half is having a problem, it would refuse the card. While if one half of the card were to exhibit a flaw post recording, then at least the other half is intact with its copy of the data, thereby always allowing one to read off the contents, unless both halves were to die. (This is total speculation on my part, but it could explain why all the storage issues covered earlier have happened during the mounting of the card to the camera.) A toy sketch of that mirroring idea follows below.
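As an illustration of the internal mirroring I am speculating about (entirely hypothetical; `MirroredCard` and its behaviour are my invention, not anything RED has documented):

```python
# Hypothetical sketch of an internally mirrored memory card, as speculated
# above. Nothing here is based on RED's actual firmware or card design.

class MirroredCard:
    def __init__(self, num_blocks: int):
        # Two independent flash "halves", each holding a full copy.
        self.half_a = [None] * num_blocks
        self.half_b = [None] * num_blocks
        self.degraded = False  # set once the halves disagree

    def write(self, block: int, data: bytes) -> None:
        # Every block is committed to both halves.
        self.half_a[block] = data
        self.half_b[block] = data

    def read(self, block: int) -> bytes:
        a, b = self.half_a[block], self.half_b[block]
        if a == b:
            return a
        # Halves disagree: serve the surviving copy, but mark the card as
        # degraded so the camera can refuse it for new recordings.
        self.degraded = True
        return a if a is not None else b
```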
3. Last time I checked, they had at least 2 of these cameras, so picking one apart for half a year isn't that bad. And in the larger scheme of things, the 40 grand camera body is not all that expensive; after all, the rest of the stuff fitted to it can easily outweigh the cost of the camera (lenses, sound recorders, screens, handles, batteries, memory cards, etc.), not to mention the cost of the other production equipment and studio space. Secondly, there is a whole LTT video about why they bought these cameras, and yes, Brandon does state that they are surely overkill for their needs, but they also fulfill those needs very neatly. Then they also have other non-RED cameras that are good in their own regard, and if push comes to shove, Linus would have just put it back together earlier, or bought one more. After all, it is a business expense, and I wouldn't be surprised if they spend 4 grand a year on toilet paper alone, not to mention the other trivial expenses that come with running a media production company with 25+ employees while starting up a video streaming service on the side.
4. A CCD is a charge-coupled device. A traditional CCD sensor is an array of charge-coupled devices, whose contents get switched onto an analog bus matrix, amplified with an external chip, and sent off to an analog-to-digital converter chip, before we finally get the data in digital form. A CMOS sensor uses a similar light-gathering array, has the same kind of analog bus matrix, uses an amplifier that is internal to the chip, and has an internal analog-to-digital converter, plus some glue logic to run the show. I.e. a CMOS sensor is roughly a CCD sensor with everything else included on the same chip. So the same cooled-CCD advantages apply, though limited by the noise floor of the internal amp/ADC, which might not have been designed to be low enough for this type of application. But yes, the RED might be using a CMOS sensor; I haven't taken one apart and checked.
5. In terms of benchmark tests for a camera, I would go for a few low-light ones, since a CCD sensor's noise rises steeply with its temperature. (And also with the size of the sensor, but that shouldn't change, unless Linus has magic...) So set up a few easily reproducible light levels within a room; seeing whether the future water-cooled camera has less noise in the image should portray the real advantages of a cooler sensor. Audio-wise, getting rid of fan noise is also a positive, but not one that would affect image quality all that much. Also, there are so-called cooled-CCD cameras, using Peltier coolers to get far below ambient, giving practically noise-free images. The YouTube channel mikeselectricstuff has a pair of videos on the topic, though more electronics oriented, as they tear one down to poke about inside it. (And they take practically noise-free images in an unlit room... So cooling a CCD sensor can make for some radical improvements. A rough sense of the numbers is sketched below.)
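For a rough feel of why the cooling helps so much: CCD dark current is roughly exponential in temperature, often quoted as doubling every ~6 °C. (A rule of thumb, not a spec; the exact doubling interval varies per sensor.)

```python
# Rule-of-thumb model: CCD dark current roughly doubles for every ~6 degC
# rise in sensor temperature. The exact doubling interval varies per
# sensor; 6 degC is just a commonly quoted figure.

DOUBLING_INTERVAL_C = 6.0

def dark_current_ratio(t_hot_c: float, t_cold_c: float) -> float:
    """Factor by which dark current drops when cooling from t_hot to t_cold."""
    return 2 ** ((t_hot_c - t_cold_c) / DOUBLING_INTERVAL_C)

# A Peltier-cooled sensor at -20 degC versus one idling at 40 degC:
print(f"~{dark_current_ratio(40.0, -20.0):.0f}x less dark current")  # ~1024x
```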
  6. Nystemy

    All this work... for what??

    @jasonkatz424 Here it is: They took a frame from around the 5 minute mark.
  7. Nystemy

    All this work... for what??

Having watched your networking debacle the other day out of pure entertainment, seeing this seems like it couldn't be a better continuation. But those SSDs have sure been through a lot over the years; I am somewhat surprised that they are still kicking. Though, just a roughly 10% improvement in render speed, despite a much better CPU? And secondly, can't the program export two or more projects in parallel? Otherwise, why not build a set of render servers, so the editors can pick one that is free and thereby push out videos faster? Maybe make a fast one for TechLinked (since that is same-day delivery; the server is then free until TechLinked needs it the next day), and a set of slower ones for all other render jobs. Maybe make some centralized dispatch software that indicates server status, queue length, and such (a toy sketch below). But knowing that this is far from trivial, and that Floatplane worked on their solution for quite some time... maybe it would be a bit on the "overkill" side of things. Interesting nonetheless.
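A minimal sketch of the kind of dispatch logic I mean. (Entirely hypothetical; all the server and project names are made up, and a real build farm would need health checks, retries, priorities, and so on.)

```python
# Toy render-farm dispatcher, as suggested above. Hypothetical sketch only.

import queue
import threading

render_jobs: "queue.Queue[str]" = queue.Queue()

def render_server(name: str) -> None:
    # Each server simply pulls the next job; an idle thread is a free server.
    while True:
        project = render_jobs.get()
        print(f"{name}: rendering {project}")
        # ... invoke the actual export job here ...
        render_jobs.task_done()

# All servers share one queue; dedicating the fast box to TechLinked would
# just mean giving it a queue of its own.
for server in ("fast-box", "render-01", "render-02"):
    threading.Thread(target=render_server, args=(server,), daemon=True).start()

for project in ("techlinked-monday", "wan-show-edit", "showcase-video"):
    render_jobs.put(project)

render_jobs.join()  # block until every queued job has been rendered
```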
8. Manufacturers don't really have much marketing reason to put effort into the long-term stability/functioning of consumer products, especially with products that are performance- or feature-wise obsolete after 2-4 years. The long-term failure rate isn't going to affect sales, for the very reason that by the time those issues start appearing, the manufacturer has long since stopped selling the product and literally couldn't care less... Is that a good thing? Not really. I generally look for longer-than-usual warranties. (A warranty is ideally speaking a period of time during which failures due to the product, and the subsequent repairs, should be covered by the manufacturer free of charge. So a longer-than-usual period indicates that the manufacturer is more confident in their quality. I.e., if it is just the legally mandated 1-2 years, then is it any good?) Then one can also take the bathtub curve into consideration: https://en.wikipedia.org/wiki/Bathtub_curve
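For reference, the bathtub curve is just the sum of a falling infant-mortality hazard, a constant random-failure rate, and a rising wear-out hazard. A minimal sketch (all coefficients made up purely to show the shape, not fitted to any real product):

```python
# Toy bathtub curve: total failure rate = infant mortality (falling)
# + constant random failures + wear-out (rising). Coefficients are
# invented purely for illustration.

import math

def failure_rate(t_years: float) -> float:
    infant   = 0.30 * math.exp(-t_years / 0.5)  # early defects, decaying fast
    random_f = 0.02                             # constant background rate
    wear_out = 0.001 * math.exp(t_years / 2.0)  # ageing, growing over time
    return infant + random_f + wear_out

for t in (0.1, 1, 2, 4, 8, 12):
    print(f"year {t:>4}: {failure_rate(t):.3f} failures/unit/year")
```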
9. Other than broken panels, some dead monitors simply have a blown capacitor or two in the power supply. Usually a quick and cheap fix, though by then the monitor is usually also a few years old, since the failure is typically heat related. But I think Alex has managed to figure out a good method of scoring a new monitor practically for free, with very little risk: it makes for an interesting video, it gives LTT a random extra monitor, and Alex, literally being the first person to know about it, can call dibs on it really quickly. Smart move, I must say.
10. To a large degree, the latency between NUMA nodes is down to a few factors. First, the bus clock speed, i.e. how many times a second one sends data; if one only does it at 0.5 GHz, then we can expect up to 2 ns of latency right there. Second, how many clock cycles are needed to send the data; more cycles means more latency, though a more parallel bus typically fixes this (or a stupendously high bitrate). Third, transferring data between clock domains; as a very rough rule of thumb, this adds 1-2 clock cycles of latency at the slower clock. If the clock domains are phase locked and have an integer ratio, one can transfer without this latency, but that has the downside that all resources of interest are tied together in terms of clock speed... Since AMD has doubled the bit rate of their Infinity Fabric, the latency should get smaller, both because data packages can be sent twice as frequently, and because the latency associated with buffering data over a clock domain boundary is also reduced. (Whether this latency halved as well is a better question, since if the Infinity Fabric clock speed is higher than the core clock speed, then we are held back by the core now adding the latency, though it is still an improvement.) A quick sketch of the arithmetic follows below.
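The arithmetic behind those figures, as a sketch (these are the illustrative numbers from the post, not measured Infinity Fabric specs):

```python
# Back-of-the-envelope bus latency, using the illustrative numbers from
# the post above (not measured Infinity Fabric figures).

def transfer_latency_ns(bus_clock_ghz: float, cycles: float) -> float:
    """Latency contributed by `cycles` clock periods at the given bus clock."""
    return cycles / bus_clock_ghz  # 1 / GHz = ns per cycle

print(transfer_latency_ns(0.5, 1))  # 2.0 ns: one cycle at 0.5 GHz
print(transfer_latency_ns(1.0, 1))  # 1.0 ns: doubling the clock halves it
print(transfer_latency_ns(0.5, 2))  # 4.0 ns: the 1-2 extra cycles of a
                                    # clock-domain crossing add up at slow clocks
```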
11. These numbers are hard to argue with; AMD has truly made some radical improvements. Somehow it feels like I haven't scratched the surface of the things I could come to think of in regards to the changes AMD has made, and the potential improvements among other things, but this post is already gigantic...

AMD splitting the CPU into 3 chips is honestly not a dumb idea. It does increase tooling costs, but at the same time the IO chip can likely be used on a slew of different SKUs, so it will practically pay for itself through improved yield alone. (I.e., if a chip has an IO problem, one only throws out a broken IO chip, not a whole CPU.) The same thing applies to splitting the cores onto two chips. (IIRC, Intel used to split their CPUs into two symmetrical chips back in the day, for that very reason.) This would give AMD an advantage in terms of manufacturing cost, and thereby better performance for the price.

The next advantage of splitting the CPU into three chips is that the thermal density of the CPU gets lower. Since the three chips have spaces between them, the heat is spread over a larger area under the IHS, and this should in turn make it easier to cool. Theoretically speaking, that is; how much it actually matters in practice is a different question. The effect might be next to negligible, or it could be huge. (I haven't tested.)

Though, the added latency when communicating between the chips will have some downsides. The same applies to multi-socketed systems, where the effect is typically even more noticeable, not only due to added latency but also due to yet lower bandwidths. (But there, most OSes have a bit better understanding and don't just randomly toss threads onto cores. OSes could just implement support for core groups within a CPU... Though, then the CPU needs some way of telling the OS which core belongs to which group. Most CPUs can already give the OS information on which cores share L2, among other things, so the information is there; the OS just needs to use it.)

Then I do have to wonder: did AMD implement their up to 72 MB of L3 ("game cache") as a method of circumventing memory bandwidth problems? (The AM4 socket only has 2 DDR4 memory channels, after all. With enough cores and high enough clock speeds, this will become a bottleneck.) Okay, adding more cache does work to a degree, but the returns diminish rapidly, and applications dealing with larger datasets likely won't care about that cache, due to only accessing any given piece of data for a very short time every few billion or more cycles. (Not a typical application for the everyday user, though. So as long as one doesn't work with huge datasets, AMD has a good offer. And if one works with huge datasets, then one goes to AMD and buys an EPYC with 8 memory channels instead... Intel's 3647 platform only has 6 memory channels, unless one needs an in-hardware AES encryption accelerator.)

Then we should also take the cost of SRAM into consideration. (72 MB of single-access SRAM needs roughly 2.4 billion transistors, not including any needed glue logic; dual-access SRAM needs roughly 3.6 billion. Then there is the actual caching system, address translation tables, etc... It is a lot of transistors, though modern CPUs are typically 60+% cache.) So the up to silly amount of L3 ("game cache") is likely going to be advantageous in some applications, though it likely isn't cheap to manufacture.

But if AMD uses it to space out their cores on the chip, they could in theory lower the power density at the chip level as well, making cooling easier yet again, on paper. (But this is likely not going to be all that noticeable...) Though, I am going to stop talking about cache, because I haven't even scratched the surface yet... (I could go into, among other things, speculative prefetching due to branches in execution, and how its efficiency affects the effective memory bandwidth of a system under a given workload. This "efficiency" is on paper not too hard to measure.)

In regards to the chipset fan, I would suspect that a decently large heatsink with sufficient fins would actually be able to keep it cool. The downside is that it would likely need a bit more machining work than a simple stick-on heatsink and a fan holder. After all, it is just 14 watts; a bit on the high side, but nothing that a cheap aluminium extrusion couldn't handle. (Not to mention that a large block of metal would likely have enough thermal mass to handle bursts of activity. The chip doesn't consume 14 watts constantly, but rather at peak, and even a small block of aluminium has a lot of thermal mass... A rough sketch of the numbers is at the end of this post.) Maybe a project @AlexTheGreatish could dive into?

Hope this text wall is informative to the ones willing to read through it...
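The rough arithmetic behind the transistor estimate and the heatsink's thermal mass. (A sketch using the post's own per-bit figures, which are approximations; the 100 g block is a made-up example size.)

```python
# Rough arithmetic from the post above. The transistors-per-bit counts
# (4 for single-access, 6 for dual-access) are the post's approximations;
# real arrays also need decoders, sense amplifiers, and other glue logic.

cache_bits = 72 * 2**20 * 8  # 72 MB of L3 expressed in bits

print(f"single-access: ~{cache_bits * 4 / 1e9:.1f} billion transistors")
print(f"dual-access:   ~{cache_bits * 6 / 1e9:.1f} billion transistors")

# Thermal mass of a small aluminium heatsink: energy needed to warm a
# (hypothetical) 100 g block by 10 K, versus the chipset's 14 W peak draw.
mass_kg = 0.100
c_aluminium = 897  # J/(kg*K), specific heat capacity of aluminium
joules = mass_kg * c_aluminium * 10  # ~897 J to rise by 10 K

print(f"a 14 W burst could run ~{joules / 14:.0f} s on thermal mass alone")
```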
  12. Nystemy

    Our BIGGEST Unboxing Ever!!

Seeing Linus stroll around poking at all these things is like seeing someone stroll into an alien world. I think it is safe to say that Linus has very little clue what half the stuff is, or what it is useful for. But that will likely change rather "quickly"; after all, it doesn't take long to gain an appreciation of what some of these tools are capable of when used correctly. I also have to say that Alex is just as ambitious today as before joining LMG. And thanks for finally getting a tap wrench! It has been agonizing without one.
  13. Nystemy

    How to Not SMASH a PC

@Loote Yes, generally speaking, one just needs to take case orientation into account if moving it oneself. (Tower coolers are, though, in need of a bit more protection, mostly for the motherboard's safety.) If shipping with a carrier, it is usually a different story, and more care should generally be taken. I personally would go the "extra" mile of removing things like GPUs and large CPU coolers. (Reapplying cooling paste is easier and cheaper than replacing a motherboard...) Though, if the CPU cooler is screwed in from the back, then surrounding it with some hard foam is not a bad idea. (Or do oneself a favor and replace the cooler with something that screws in from the front, or go for a closed-loop liquid cooler.)
  14. Nystemy

    How to Not SMASH a PC

Last time I moved my desktop, I put it in the back of a car and just drove it to wherever I needed to go... I had it lying on its side (motherboard side down), with the back pointed towards the front of the car. Since most vibrations in a car are vertical, this isn't going to affect the components all that much when it lies motherboard side down. And any more rapid decelerations will push everything towards the back, something that generally isn't much of a problem, except if one has a tower cooler or similar. The only sideways forces the case will be exposed to are when one takes a curve, and since one generally isn't driving around on a race track, this isn't exposing the components to all that much force (likely less than 1 g; a quick sanity check below). Unless one gets hit by another car, but then one likely has more important things to worry about... Though, protecting the internal components a bit more might not be a bad idea. I should probably take a bit more time next time I move a PC and put in some more protection. After all, stuffing in a bit of closed-cell soft foam isn't that hard...
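A quick sanity check on the "less than 1 g" claim: lateral acceleration in a curve is v²/r. (The driving numbers below are made up but plausible; the point is just the order of magnitude.)

```python
# Lateral acceleration in a curve: a = v^2 / r. The speeds and radii are
# invented but plausible; normal cornering stays well under 1 g.

G = 9.81  # m/s^2, standard gravity

def lateral_g(speed_kmh: float, radius_m: float) -> float:
    v = speed_kmh / 3.6  # km/h -> m/s
    return v**2 / radius_m / G

print(f"{lateral_g(50, 30):.2f} g")   # brisk city corner: ~0.66 g
print(f"{lateral_g(90, 200):.2f} g")  # gentle highway sweeper: ~0.32 g
```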
15. When James said that most people don't have a 3D printer, I expected that they were going to use more normal tools, like a drill, and then clean up the burrs with a knife. But not even that?! Since 3D printing does cost money (the plastic filament isn't free), a drill would have been an easy way to stay within that cost budget, since a lot of people have one somewhere.