About wanderingfool2

  1. Yeah, it was just that Jarsky mentioned using a UPS to protect against dirty power but didn't mention a true sine wave (which is why I wanted to clarify that non-true-sine units can actually produce dirtier power than the grid). Also, it isn't a PC we are talking about, it's a server... depending on what it's running, it could be a good 5-15 minute shutdown process to preserve data (it all depends). A server really should be in a room without general traffic going through it, and we are talking about a workplace environment where rules and signs can be put in place. Not everything is created equal though; there are still a considerable number of line-interactive units with lower tolerance to surges. That's why I said my recommended approach is usually a UPS with true sine output, with the caveat to look at the protection rating (a good quality arrester can handle quite a bit more than a UPS). You are right, coax and phone lines can carry power, and that should also be a point of discussion; but again, it's just a matter of hooking them up to an arrester. Like I originally said, it all boils down to cost and the risk/reward aspect. An arrester is essentially designed to create a short to ground when the voltage becomes too high (like in a lightning strike), so while it does let some energy through, it's a lot less than a surge suppressor will, and if you have a UPS in place after it, the UPS can absorb the rest (from what I understand, many UPS units act as suppressors). Suppressors absorb the hit but do not release the energy into the ground, so they are a lot less effective against lightning strikes, which carry too much energy. Again, it is all about cost vs. risk.
  2. Some UPSes can actually produce dirtier power while on battery than some conventional power grids. (I can't remember the exact source anymore, but there was a good graph, I think from an oscilloscope, comparing a true-sine UPS to an approximated-sine UPS that showed the spikes/frequency... the fake sine was bad.) Putting an arrester before the UPS can be okay; you just need to make sure it is the only thing plugged into it (and that it is one that can handle the load properly). To be honest, as much as UPSes advertise surge suppression, they actually provide quite a bit less suppression than a good quality arrester.
  3. There needs to be the caveat that it should not be cheap surge protectors (too many times I have seen people using cheap ones), and don't overload it (again, so many people overload them). To @mattran: it will all depend on how much protection (and what level of protection) you need, and the risk of strikes occurring. Take surge protectors as an example: aside from plugging directly into the wall, they probably provide the least protection (okay... depending on what type of surge protector you buy... but that is why I don't like the term "surge protector", as most people will think of the cheap $20 unit). If this route is taken, then look at the numbers in terms of what size of spike it can handle (a direct strike to the power grid near your building will get past many surge protectors). The solution I usually go with is a UPS (with a true sine wave output, as I've come across a few pieces of equipment that don't work with simulated waves). The prices vary, and different options come with different levels of protection. The important thing is that it helps keep the power more regular and gives you time to shut down the server (or have it shut down automatically)... it also provides some surge protection (but be warned, many don't actually provide the same level of surge protection if you read the specs). With UPSes, most people recommend plugging directly into the wall, but I have seen a few plugged into a surge arrester and operating fine (you risk switching to battery more often, I think). Anyway, an example being https://www.apc.com/shop/ca/en/products/APC-Black-Rackmount-SurgeArrest-9-Outlet-120V/P-NET9RMBLK and https://www.apc.com/shop/ca/en/products/APC-Smart-UPS-3000VA-LCD-RM-2U-120V-with-Network-Card/P-SMT3000RM2UNC Look at the surge each can handle (the arrester is a lot better). The other option is what @tempestcatto said. Really though, it is weighing the risk of not having protection vs. the cost.
If it is protecting half a million in equipment, then think about spending more money... but if it is protecting something like $10k in equipment, consider doing things like UPSes (just my opinion; everyone's use case will differ and will depend on how the business is set up)
  4. I do think it is about time that YouTube did something like this. Just this change will hopefully cut down on a lot of the false strikes (that isn't to say it's a perfect system), but it does take away a lot of the financial motivation of monetizing tons of channels with baseless claims. I get the concept, and trust me, I do hate the "guilty until you have made enough noise about your innocence" setup we currently have (mostly, I think, due to copyright law and the early days of the billion-dollar lawsuits)... the only thing is that a system where you have to prove a copyright before action is taken creates a swing in the other direction as well (piracy of music becomes a lot easier to profit from). Really, there will never be a perfect system; there will always be grey areas, unfortunately. The way I look at copyright (and it varies across different types of works) is that it should be partially determined by the weight of the work, the content, and the novel concept. 1) Weight - Was the copyrighted work an integral part of the new work (or did it contribute to the work in any way)? [After all, a streamer playing music in the background is quite different from having it be the predominant noise while streaming.] This would give small audio clips, where the sound is more incidental, a lot less value. With that said, if your kid dances to music, that puts a lot more weight on the music, as it contributes a lot more to the video.
Weight also includes whether the use detracts from the original copyrighted work (i.e. how likely it is to cause financial harm). 2) Content - Similar to weight, but this should be how much of a copyrighted clip is actually used and how similar the use is to it. 3) Novel concept - If you tell 10 people to draw trees and they all draw pretty similar things, then that shouldn't be copyrightable... just like the recent million-dollar lawsuits against songwriters: they are being sued over chords that have existed for hundreds of years... to me that isn't novel (but the overarching sound can be, if that makes sense). So in the case of Mickey Mouse, Disney would own any likeness to it (yes, it is trademarked... but to be honest, copyright and trademarks are very similar).
  5. If you do attempt using SslStream again, just post the code that errors and I am sure someone will be able to help. Another alternative may be to try OpenSSL (I think there are C# wrappers for it).
  6. What sort of issues are you having in terms of setting up SSL over TCP? (I am assuming when you say SSL you are actually referring to implementing TLS 1.2/1.3?) If you understand how to do the other things manually, then I am a bit confused about where your sticking point in the SSL portion might be. As others have said, rolling your own is generally a bad idea; there is a lot that can go wrong when implementing your own encryption... realistically though, it may depend on what you intend to use the program for. If it is a program that is going to be openly distributed and the security of the TCP connection is important, then no, do not implement your own. For something like sharing stuff between friends... maybe (although in that case you would also have the option of using a predefined key on top of it). Then again, if you use a non-standard encryption (and maybe throw in your own custom encryption code), you may be safe just out of obscurity (this assumes there will only ever be a few people using it... any encryption you write will be weaker than the current standards, but would it be worth it for an attacker to figure out what you did for a target so small?)... but the answer is still no: try getting SSL working, and if you are stuck, just ask people here to help find the issues in your code (i.e. why it isn't working).
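To give a feel for how little code the library route actually takes, here is a rough sketch of TLS over TCP using Python's standard ssl module (the original question was about C#'s SslStream, but the shape is the same). The host, port, and request are placeholders; the point is that the handshake, certificate validation, and record encryption all come from the library rather than your own code.

```python
import socket
import ssl

def fetch_over_tls(host: str, port: int = 443, request: bytes = b"") -> bytes:
    """Open a TCP connection, wrap it in TLS, send a request, return the reply.

    The library handles the handshake, certificate checks, and encryption --
    exactly the parts you should not hand-roll.
    """
    context = ssl.create_default_context()  # sane defaults: cert + hostname checks on
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(request)
            chunks = []
            while True:
                data = tls.recv(4096)
                if not data:          # peer closed the connection
                    break
                chunks.append(data)
            return b"".join(chunks)
```

The equivalent in C# is wrapping a `NetworkStream` in `SslStream` and calling its authenticate method before reading/writing; either way, the negotiation is the library's job.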
  7. Except having { } around every single-line statement also adds an extra 2 lines (3 lines if you put the opening bracket on its own line), which to me harms readability a lot more than the potential of a misread. For the second point, my point still stands in concept (again, I skimmed the code)... although when I did read the code without skimming, I saw the out-of-scope variables; when I skimmed, I had thought they had been reused. (My argument wasn't for declaring at the beginning, but rather that I thought there wasn't really much other choice in scoping, based on skimming it.)
  8. I am a bit at odds with both comments. It isn't always "good practice" to have braces on single-line statements. It all depends on what conventions you follow and where you choose to exclude the braces. In this case, I think leaving out the braces brings more clarity to the code (at least the part I skimmed through). As for declaring the variables at the beginning... I think he did it the right way, personally; mainly, he wouldn't have had much choice of where to put them.
  9. And this is why I hate the concept of the "cloud" and moving things to the cloud...it gives third parties so much extra power.
  10. Those are two examples to show that a single core could have benefits for everyday users. While you may guess that the audience skews heavily towards enthusiasts, any enthusiast would know to look at more than just the final score. Only day-to-day consumers (i.e. those who don't know what 2 cores vs. 4 cores means) would, I think, use the overall score number (and I do think that putting a heavier focus on single-core is better). Again, let's be honest: enthusiasts, or even semi-techy people, would scroll down a bit and find the numbers that are more practical for them.
  11. It all depends on the workload, really. The way I look at it, I can see the merits of putting more weight on single-core performance, because in my opinion the larger majority (and I'm not talking about tech people here) will likely not require as many threads (and single-core performance is likely to make things feel snappier). An example being people who use Excel... it is typically stuck on one core... or when using a web browser: while it may utilize more threads, it is very unlikely to utilize all of them (and in some real-world cases I found it closer to pinning a single core, with the others doing smaller tasks). So I am just saying I can see merit in weighting single-core performance heavily. (That isn't to say I would prefer a faster 2-core over a 4-core or an 8-core... but with most people here being more tech-oriented, there is a tendency to forget that non-techy people, I think, outnumber us.)
  12. It all depends on what you consider to be "hardware speed", and what your use case is. Most day-to-day tasks (which account for most general consumers, including the ones likely to be looking at comparisons) I would argue don't require much more than 4 cores... actually, "hardware speed" is probably the wrong term in that case, as single-core performance, in my opinion, is the real measurement of hardware speed (vs. parallelism).
  13. Speaking as someone who has had to deal with transitions to newer servers, it likely is not that they can't be bothered to train their IT, but rather the overall expense of moving. As an example: at one place I worked, it was going to cost upwards of $100,000 to do the upgrades, and that was for a small company, though one that did require multiple servers and workstations. That was before factoring in that not all the software would transition correctly to the new systems... and truthfully, MS and their CALs are so overpriced, but they effectively hold a monopoly, so you have to pay. I actually do find it a bit ridiculous that the server software isn't supported longer. Consider that 2012 R2 was released in 2013 (that is 7 years... but personally I wouldn't do an upgrade until at least a year after a release, to ensure stability, so that is 6 years of having it working... and given that software changes/validation can take a year, that is only 5 years). You pay for the extra years of security patches in order to buy time (literally here) to transition to something newer.
  14. "Fast" is going to be very, very subjective here (and will depend on what you really consider to be fast). To quickly answer your question about disk speed: no, it is not the only important factor. I think the most important factor is exactly how your data is stored/organized. file2 is what I would focus on (as it is the large one). Are the items numeric? Is file2 sorted? If not, can it be sorted by item? The reason is that sorted data is a lot easier to combine than unsorted data. (In 1, 4, 5, 8, 10 it is a lot easier to look up values than in 1, 4, 2, 10, 8... in the first example you can do a binary search, so you don't need to scan the entire file.) Truthfully though, if you really wanted to, just load up MySQL on a computer and create two tables. Load them with the data from file1 and file2, run a join, and see whether it finishes in a reasonable time.
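The two-tables-and-a-join suggestion above can be sketched with Python's built-in sqlite3 as a stand-in for MySQL. The (item, value) schema and the sample rows are assumptions about the two files' layout; the index on the big table's join key is the database equivalent of keeping file2 sorted for binary search.

```python
import sqlite3

# In-memory database as a stand-in for the MySQL suggestion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file1 (item INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE file2 (item INTEGER, value TEXT)")

# In practice you would bulk-load these with executemany() from the parsed files.
conn.executemany("INSERT INTO file1 VALUES (?)", [(1,), (4,), (5,)])
conn.executemany("INSERT INTO file2 VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (4, "c"), (5, "d"), (8, "e")])

# The index is what makes each lookup fast -- same idea as a binary
# search over a sorted file instead of a full scan per item.
conn.execute("CREATE INDEX idx_file2_item ON file2 (item)")

rows = conn.execute(
    "SELECT f1.item, f2.value "
    "FROM file1 f1 JOIN file2 f2 ON f1.item = f2.item "
    "ORDER BY f1.item"
).fetchall()
print(rows)  # → [(1, 'a'), (4, 'c'), (5, 'd')]
```

Timing this join against the real files gives a quick answer to whether the database route is "fast enough" before writing anything custom.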
  15. So personally I wouldn't be messing around with jumbo frames and static routes yet if you are getting that sort of speed when directly connected to the devices. How I would approach this problem: Set things back to default (jumbo frames and settings tweaks can greatly boost speeds... but with a 10 Gbps connection you should be getting at least 100 MB/s without changes, so why complicate things by adding more variables at the moment). Connect just the 10 Gb link between your computer and the NAS, with no other connections to the outside world, and try the transfer. By the way, you are testing with something like a 5 GB file, right? (Smaller files tend to slow transfer speeds.) If speeds are still bad, plug both the computer and the NAS into your gigabit LAN: do you get better speeds? If your speeds do improve, try plugging the 10 Gb link into a switch and from the switch into the NAS (I am assuming you didn't buy a Cat 6 crossover cable... wondering if that could be an issue). [If running the link through the switch, which will limit you to gigabit speed, does work, try making a crossover cable or figuring out why MDI/MDI-X is messing up.] If the connection is still bad, plug in your laptop the same way (direct)... better speeds? (Keep transferring the same file for each test, for consistency.) If it is still bad, try connecting your onboard LAN to the 10 Gb port on the NAS. Anyway, let me know
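One way to take file sizes and filesystem/SMB overhead out of the testing above is a raw TCP throughput check. This is a rough standard-library sketch that measures MB/s over loopback; the 127.0.0.1 addresses are placeholders, and pointing the client side at the NAS's address would measure the actual link instead (in practice, a purpose-built tool like iperf3 does this job properly).

```python
import socket
import threading
import time

def measure_throughput(total_mb: int = 64, chunk_kb: int = 64) -> float:
    """Push a fixed amount of data through a TCP socket and return MB/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # any free port; swap in the NAS IP to test the link
    server.listen(1)
    port = server.getsockname()[1]

    chunk = b"\0" * (chunk_kb * 1024)
    total = total_mb * 1024 * 1024

    def sender():
        conn, _ = server.accept()
        sent = 0
        while sent < total:
            conn.sendall(chunk)
            sent += len(chunk)
        conn.close()

    threading.Thread(target=sender, daemon=True).start()

    client = socket.create_connection(("127.0.0.1", port))
    received = 0
    start = time.perf_counter()
    while received < total:
        data = client.recv(1024 * 1024)
        if not data:
            break
        received += len(data)
    elapsed = time.perf_counter() - start
    client.close()
    server.close()
    return received / elapsed / 1024 / 1024

print(f"{measure_throughput():.0f} MB/s")
```

If this shows the link moving data at the expected rate while the file copy does not, the problem is in the file transfer path (protocol, disks, small files), not the cabling or NICs.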