alpha754293

Member
  • Content Count

    171
  • Joined

  • Last visited

Awards


This user doesn't have any awards

About alpha754293

  • Title
    Member

Profile Information

  • Occupation
    I make Powerpoints.

  1. Yes please, if you don't mind. I think that you only ran openssl speed sha256 originally, but now I am trying to collect data for the entire run so that I can compile it into a little write-up, so running it again would be greatly appreciated. Thank you.
  2. Note: If you have overclocked your processor, please let me know, either in the name of the file, in a comment here, or in a message to me. As long as that's communicated to me somehow, I will make a note of it as I am collecting and collating all of the data. I have found that apparently no one seems to run the openssl benchmark (which isn't terribly difficult to run), so I am pulling the data together myself with the help of the community. Thank you.
  3. The TL;DR: I'm looking for volunteers to run the following command using any Linux distribution, on ANY processor:

     openssl speed 2>&1 | tee OS_kernel_processor_motherboard_memcap_memspeed_openssl_results.txt

     Please fill in your own values for those fields in the file name (a sketch that auto-fills them appears after this list of posts). I'm looking for data to compile into a spreadsheet, which I will then try to run some statistical analysis against, to see if I can come to some sort of conclusion about processors and hashing performance, as hashing is apparently used as a part of other systems and/or software (such as storage with ZFS). If you don't mind uploading your data here so that I can collect and collate all of the community-supplied data, that would be greatly appreciated. If you don't have a Linux distro running and would still like to contribute, you can google and download pretty much ANY Linux distro that provides a "Live CD" image, put it on a USB key, and run it from that. For those who may not be familiar with Linux, please note that by default, when you open a terminal, it will put your results into your home directory. If you want to put them on your Desktop instead, so that they are easier to get to (and upload), please run this:

     cd Desktop
     openssl speed 2>&1 | tee OS_kernel_processor_motherboard_memcap_memspeed_openssl_results.txt

     That will put the results on your Desktop. If you have any questions, please do not hesitate to ask. For those who want the in-depth technical review/discussion, you can read the thread over at the Level1Techs forums here: https://forum.level1techs.com/t/whats-the-fastest-processor-for-single-threaded-single-process-of-running-sha256sum/157464 There's a LOT more in-depth technical information there, as well as the background and technical reasoning for why I am asking/calling for community-supplied data. Thank you, everybody.
  4. The Shanghai Supercomputer was used in China when they built the temporary Coronavirus "field" hospital -- they used the supercomputer to optimise the design of the ventilation systems in the rooms inside the hospital. They then also used the same supercomputer to look at the effects of the ventilation system external to the hospital (i.e. you have to vent the air somewhere, and you don't want to be spitting contaminated air back into the environment) and to model the nearby area that would be at risk of "pollution". So, that's a recent use case of what supercomputers are still used for.
  5. Yes, I want it to be SHA256 due to the propensity for collisions with MD5 and SHA-1 (I had referred to it as sha128, but apparently the smallest official digest size in the SHA-1 family is 160 bits). People can already run: openssl speed sha256 to benchmark their system if they're running Linux, Cygwin (with openssl installed), and/or macOS (see the sha256 benchmark sketch after this list of posts). The test is between systems. I'm not running distributed parallel ZFS, nor ZFS over Lustre.
  6. Yeah, I don't really program. Yes, it needs to be SHA256. It's used to make sure that when I am moving large files between my systems, the transfers are completing properly; the checksum is part of the verification process, in addition to the file size (a minimal verification sketch appears after this list of posts). Yes, there are multiple files, but that's not the purpose of this question. I already have a parallel script for processing multiple files. This question is to zero in on how to process a single, large file (again, my biggest file so far is around 6.5 TB in a single file) as fast as possible. I haven't been able to find benchmarks that actually test and compare SHA256 performance across a variety of different processors, so I am wondering whether there might be other people who know of a way to get this kind of data. Thanks.
  7. I'm not exactly sure. I think the largest file that I am running sha256sum on is somewhere around 6.5 TB, so I don't know if the SHA256 algorithm has been parallelized so that it can take a single file, break it up into n pieces, and hash the pieces in parallel (a chunked-hashing sketch appears after this list of posts). If there's a tool that can do that in Linux, I'd definitely be interested in trying it out, because otherwise my current system can only read the file at around 300 MB/s, which means that it takes a really long time to process a 6.5 TB file.
  8. Is there benchmarking data available that you can share that would lead you to that conclusion so that I can take a look at that as well? Thank you.
  9. What's the fastest processor for single threaded/single process of running sha256sum?
  10. So... apparently, trying to create 2.9 million 100-byte files is just too slow/takes too damn long (only 533,000-ish files have been created in the two-and-a-half days since I started), but I think that you get the picture. (I can still run the test with what I've got now in terms of number of files and size distribution...)
  11. I tested this as well; here is the methodology that I used.

      To create 1 GB worth of files, I calculated that for sizes ranging from 100 bytes to 100000 bytes, keeping the number of files consistent across the sizes, I would need 9665 files of each size in order to generate 1073781500 bytes worth of data. (NB: 1 GiB = 1073741824 bytes, so I'm over that by 39676 bytes; the byte counts are double-checked in a short snippet after this list of posts.) Then I created the files for the different sizes using these commands:

      time -p for i in `seq -w 1 9665`; do dd if=/dev/random of=100Bfile$i bs=100 count=1; done
      time -p for i in `seq -w 1 9665`; do dd if=/dev/random of=1000Bfile$i bs=1000 count=1; done
      time -p for i in `seq -w 1 9665`; do dd if=/dev/random of=10000Bfile$i bs=10000 count=1; done
      time -p for i in `seq -w 1 9665`; do dd if=/dev/random of=100000Bfile$i bs=100000 count=1; done

      The times to create the files, respectively, are: 261.42 s, 258.22 s, 366.94 s, and 2546.93 s.

      (These files were created on an HP Z420 Workstation: Intel Xeon E5-2690 (v1, 8-core, 2.9 GHz base, max all-core turbo 3.3 GHz, HTT disabled), HGST 3 TB 7200 rpm SATA 6 Gbps HDD connected to the onboard SATA ports (i.e. no HBA), 8x 16 GB (128 GB total) Samsung DDR3-1600 ECC Reg. RAM, OS drive is a Samsung 860 EVO 1 TB SATA 6 Gbps SSD, using Cygwin64 in Windows 7 Pro x64.)

      The time it took to copy the 38660 files from the workstation* to my NAS unit (QNAP TS-832X with eight HGST 6 TB 7200 rpm SATA 6 Gbps drives in RAID5) using the command:

      time -p rsync -vr copytest /cygdrive/x/copytest

      is 1443.66 s.

      The time it took to store the files into a zip archive using 7-Zip with no compression, using the command:

      time -p 7z a -tzip -mm=Copy copytest.zip copytest

      is 7.80 s.

      The time it took to rsync the zip file over to the NAS using the command:

      time -p rsync -v copytest.zip /cygdrive/x/copytest

      is 19.86 s.

      The time it took to unpack the zip file directly on the NAS, using the NAS' file manager -> extract command, is 306 s. (See the attached PowerPoint presentation for the proof/evidence of that.)

      *I originally tested this with the workstation in order to "simulate" the single-drive NAS that the OP had versus the new, 2-drive NAS that he has. For the NAS-to-NAS transfer test, both of my two NAS units (both QNAP TS-832X) have eight drives each, one with eight 6 TB drives and the other with eight 10 TB drives, all HGST, all 7200 rpm SATA 6 Gbps HDDs, and they can see each other over CIFS.

      The time it took to run rsync from NAS to NAS using the command:

      time -p rsync -rv copytest /share/share/copytest

      is 614.86 s.

      Total time: 333.66 s (zip + rsync + extract) vs. 614.86 s for a NAS-to-NAS direct copy.

      I've dumped the screenshots of each stage/step into a PowerPoint presentation. I'm repeating the exercise whereby I vary the number of files, but keep (number of files) * (size of files) equal between the different file sizes, i.e. 1 GiB / 4 per size class. This means that I will need 2685000 100 B files, 268500 1000 B files, 26850 10000 B files, and 2685 100000 B files, for a total of 1074000000 B, which means that I am technically going to be over 1 GiB by 258176 B. Generating a grand total of 2983035 files takes a while, even with an 8-core processor, so I'll probably finish the rest of the test tomorrow. Presentation1.pptx
  12. Actually, since the OP is using an RPi4, he might actually be CPU limited rather than HDD limited. But that's why I suggested that he test it. If his CPU can keep up, then it's a toss-up as to whether he's going to be line limited or HDD limited; a lot of current HDDs have STRs > GbE speeds, even with just a single drive (cf. https://www.tomshardware.com/reviews/wd-red-10tb-8tb-nas-hdd,5277-2.html).

      Again, if you have small files dispersed in between the large files, transferring them over TCP/IP is a vastly slower, high-latency process in which you can't EVER come close to the line rate that the interface is capable of. By storing the files in an archive first, the network interface effectively sees one large file that contains all of the small files, and therefore your average transfer rate goes up significantly.

      Again, you're all welcome to test it out for yourselves, as it is quite evident that most of the people who are commenting/replying don't appear to have much experience with this, tape or no tape. Create a GB's worth of files ranging in size from 0.1 kB to 100 kB. Under normal/"ideal" circumstances (which you CAN actually achieve), and using simple math, the wall-clock time that it should take to transfer a 1 GB file over a GbE interface is about 10 seconds. Once you've created that GB's worth of small files, try copying them. If the entire process takes > 10 seconds, your effective transfer rate over the same GbE interface starts to plummet, e.g. if it takes you 20 seconds to copy those files, you've just gone from an effective transfer rate of 100 MB/s down to 50 MB/s. An even easier way to test this is to use iPerf, where you can see what your effective transfer rate is over a range of len * num values (see the iPerf sketch after this list of posts).

      BTW, you DO realise that the OP tells you a little bit about the distribution of the files, right? It's always amazing to me what people assume about the probability density function of file sizes that someone might have when there is literally no way that you can assume said PDF(filesize). Do it. Don't do it. IDGAF.
  13. Thank you. So maybe that's where I misunderstood what Steve at Gamer's Nexus was talking about, then: that PL1, PL2, and tau are ONLY applicable to all-core boosts and are irrelevant for single-core boosts. Would that be correct based on everything that's written here? I thought that tau was also related to single-core boost, but if it is power limited and not thermally limited (as long as you haven't hit Tjmax yet), then this helps clarify that, for what I am looking for, tau, PL1, and PL2 won't be applicable. Thank you for helping me and correcting my (mis)understanding of this.
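
Some sketches referenced in the posts above.

Re: post 3 -- a minimal sketch of how the file-name fields could be auto-filled on a typical Linux box. The sources for each field (/etc/os-release, lscpu, the DMI board name, free) are assumptions and may need adjusting; anything the system does not expose without root (e.g. memory speed) can be filled in by hand.

    #!/bin/sh
    # Sketch only: build the suggested result file name from system info,
    # then run the full "openssl speed" suite into it.
    OS=$(. /etc/os-release 2>/dev/null && echo "${ID}${VERSION_ID}")
    KERNEL=$(uname -r)
    # CPU model with spaces replaced by dashes; may still contain (), @ etc.
    CPU=$(lscpu | awk -F': +' '/^Model name/ {gsub(/ /,"-",$2); print $2; exit}')
    BOARD=$(cat /sys/class/dmi/id/board_name 2>/dev/null | tr ' ' '-')
    MEMCAP="$(free -g | awk '/^Mem:/ {print $2}')GB"
    MEMSPEED="unknown"   # e.g. read off "sudo dmidecode -t memory" and edit by hand

    openssl speed 2>&1 | tee "${OS}_${KERNEL}_${CPU}_${BOARD}_${MEMCAP}_${MEMSPEED}_openssl_results.txt"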
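Re: post 5 -- capturing the EVP form of the sha256 test alongside the plain one is an optional extra (a suggestion, not something requested in the posts); on CPUs and OpenSSL builds with hardware SHA support the two can differ noticeably, which is relevant when comparing processors.

    # Plain low-level SHA-256 benchmark, as suggested above:
    openssl speed sha256 2>&1 | tee sha256_results.txt
    # EVP variant for comparison (can take a hardware-accelerated path on some systems):
    openssl speed -evp sha256 2>&1 | tee sha256_evp_results.txt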
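Re: post 6 -- a minimal sketch of the size-plus-SHA256 verification described there, assuming the copy is reachable through a local mount; the paths are placeholders, and stat -c is the GNU form (use stat -f %z on BSD/macOS).

    #!/bin/sh
    # Sketch only: compare file size and SHA-256 digest of a source file and its copy.
    SRC="/pool/bigfile.dat"      # placeholder paths
    DST="/mnt/nas/bigfile.dat"

    if [ "$(stat -c %s "$SRC")" != "$(stat -c %s "$DST")" ]; then
        echo "FAIL: sizes differ"
        exit 1
    fi

    src_sum=$(sha256sum "$SRC" | cut -d' ' -f1)
    dst_sum=$(sha256sum "$DST" | cut -d' ' -f1)

    if [ "$src_sum" = "$dst_sum" ]; then
        echo "OK: size and SHA-256 match"
    else
        echo "FAIL: SHA-256 mismatch"
        exit 1
    fi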
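Re: post 7 -- as far as I know the standard single-stream SHA-256 of a file cannot be computed in parallel, but a chunked scheme can be. A minimal sketch, assuming GNU coreutils; the file name, chunk size, and job count are placeholders, and parallel reads only help if the storage can serve them faster than one sequential stream (a single HDD at ~300 MB/s generally cannot).

    #!/bin/sh
    # Sketch only: hash one huge file as N fixed-size chunks, several chunks at a time.
    # NOTE: the per-chunk digests do NOT combine into the value that "sha256sum FILE"
    # would give; both ends of a transfer must use this same chunked scheme.
    FILE="bigfile.img"     # placeholder name
    CHUNK_MB=65536         # 64 GiB per chunk; tune to taste
    JOBS=4                 # parallel readers; only helps if the storage can keep up

    SIZE=$(stat -c %s "$FILE")     # GNU stat
    NCHUNKS=$(( (SIZE + CHUNK_MB*1024*1024 - 1) / (CHUNK_MB*1024*1024) ))

    # Each worker reads one chunk with dd, hashes it, and prefixes the chunk index.
    seq 0 $((NCHUNKS - 1)) | xargs -P "$JOBS" -I{} sh -c "
        dd if=\"$FILE\" bs=1M skip=\$(( {} * $CHUNK_MB )) count=$CHUNK_MB 2>/dev/null \
            | sha256sum | sed \"s/^/{} /\"
    " | sort -n > "$FILE.chunk-sha256"

    # Optional: one digest over the ordered chunk digests, if a single value is wanted.
    awk '{print $2}' "$FILE.chunk-sha256" | sha256sum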
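Re: post 11 -- a quick shell-arithmetic check of the byte counts quoted there (nothing new, just the math spelled out).

    # First corpus: 9665 files of each of the four sizes.
    echo $(( 9665 * (100 + 1000 + 10000 + 100000) ))                # 1073781500 bytes
    echo $(( 9665 * (100 + 1000 + 10000 + 100000) - 1073741824 ))   # 39676 bytes over 1 GiB

    # Second corpus: equal bytes (1 GiB / 4) per size class.
    echo $(( 2685000*100 + 268500*1000 + 26850*10000 + 2685*100000 ))              # 1074000000 bytes
    echo $(( 2685000*100 + 268500*1000 + 26850*10000 + 2685*100000 - 1073741824 )) # 258176 bytes over
    echo $(( 2685000 + 268500 + 26850 + 2685 ))                                    # 2983035 files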
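Re: post 12 -- a minimal sketch of the kind of iPerf sweep meant there, assuming classic iperf (v2) on both machines; the server address and the buffer lengths are placeholders, and this only approximates the small-file effect since it is still a single TCP stream with no per-file metadata overhead.

    # On the receiving machine:
    #   iperf -s
    # On the sending machine, sweep the read/write buffer length (-l) while keeping
    # the total bytes transmitted (-n) fixed, and watch the reported rate fall off
    # as the per-write payload shrinks:
    for len in 100 1K 10K 100K 1M; do
        echo "== -l $len =="
        iperf -c 192.168.1.100 -l "$len" -n 1000M     # 192.168.1.100 is a placeholder
    done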