wanderingfool2

Member
  • Content Count

    730
  • Joined

  • Last visited

Reputation Activity

  1. Agree
    wanderingfool2 got a reaction from BotDamian in Proof Of Stake Blockchain (Security and 51% stake attack)   
    Except you have things like Iran which faces blackouts (to the point they had to ban mining).
     
    The other problem is that other systems end up having to also buy energy (sometimes at higher rates) during peak times, and mining during that time just means you have to buy more energy more often (because that "wasted" energy was being used as a buffer).  It also doesn't address what is currently happening in places like California, where hydro power is at risk due to drought (so more energy usage just exacerbates the problem).
     
    Ultimately, Proof of Stake should be more efficient than POW; but POW in the case of bitcoin is just stupid (not all POW crypto is stupid)...it's just that bitcoin has a fixed number of transactions while requiring ever more computing power...thus it's terribly inefficient for its task
  2. Agree
    wanderingfool2 got a reaction from Nvidix in Amazon's mmo New World is bricking 3090 gpus   
    More like an overlooked thing by the hardware that isn't common...but uncommon doesn't mean it's bad.  An example being Intel's bug: divide 4,195,835 by 3,145,727 (on an old Pentium) and you get the wrong result.  Similar things could be happening here, where it's hitting an obscure case that's causing issues.
     
    Ultimately though, software should not be blamed for killing hardware like this...hardware should have adequate safeguards in place to prevent bricking.  If it's being stressed so much that the system is breaking, then I would be willing to guess that in a year or two those cards might have fried themselves anyway...this is just accelerating it.  This is more of a hardware design failure
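The Pentium division mentioned above is easy to check for yourself. A small Java sketch (the "flawed result" figures are the widely reported values for the FDIV bug, not something this code reproduces — a modern FPU gives the correct answer):

```java
public class FdivCheck {
    public static void main(String[] args) {
        // A correct FPU gives ~1.3338204; the flawed Pentium famously
        // returned ~1.3337391 for this exact division.
        double q = 4195835.0 / 3145727.0;
        System.out.println(q);

        // Equivalent well-known formulation: x - (x / y) * y should be ~0,
        // but the flawed chip reportedly returned 256 here.
        System.out.println(4195835.0 - (4195835.0 / 3145727.0) * 3145727.0);
    }
}
```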
  5. Like
    wanderingfool2 reacted to RejZoR in Pegasus spyware used by governments targeted iOS by multiple weaponized zero-days recently   
    Well, so is selling physical guns, and yet they all do it...
  6. Like
    wanderingfool2 reacted to Eigenvektor in CPU vs GPU instruction set difference   
    I do 🙂 I can also attach the whole project if needed, but here's the relevant source code:
    External dependencies are Apache Commons Math3 (for calculating the mean) and JFreeChart for plotting the chart.
     
    I will readily admit that this is a very naive brute force implementation. I'm sure there's any number of optimizations that could be made (first of all, no copying data). Initially I just wanted to see how big the difference between stream and parallelStream would be, but while playing with array sizes and different filters I decided to add the chart to see how it would behave.
     
    Yeah, also those 1.5 seconds only happen if you copy the 100M source list into a 100M result set, basically duplicating all of the data. I'm using a naive stream implementation that creates a new array for the results, so I think a lot of that speed is down to memory.
     
    If you just iterate over the 100M array and produce little to no results, the speed is a lot higher. An implementation that works without copying would likely be a lot faster. It would also be interesting to see the difference between a work-stealing thread pool vs. splitting the list into equal chunks (e.g. each thread only iterates a certain index range). Not entirely certain how parallelStream does it; I would have to look it up.
     
    Note that I limited my CPU to run at a constant 2.2 GHz using the powersave governor (Linux). If I remove that limitation, results actually get skewed in favor of single threaded (higher clocks, I suppose)
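The "equal chunks" alternative mentioned above can be sketched in Java (illustrative names and sizes, not the actual project code): each thread scans one contiguous index range, and the per-thread results are merged with a simple sum.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedFilter {
    // Split [0, data.length) into equal index ranges, one thread per range,
    // so each thread scans a contiguous slice (no work stealing involved).
    static long countEven(int[] data, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Long>> parts = new ArrayList<>();
            int chunk = (data.length + threads - 1) / threads;
            for (int t = 0; t < threads; t++) {
                final int from = Math.min(t * chunk, data.length);
                final int to = Math.min(from + chunk, data.length);
                parts.add(pool.submit(() -> {
                    long c = 0;
                    for (int i = from; i < to; i++) if (data[i] % 2 == 0) c++;
                    return c;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get(); // the "merge" here is just a sum
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(countEven(data, 4)); // 500000
    }
}
```

Note the merge is cheap here only because the result is a scalar; collecting filtered elements into a list would bring back exactly the merge overhead discussed above.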
     
  7. Informative
    wanderingfool2 got a reaction from shadow_ray in CPU vs GPU instruction set difference   
    haha, by chance do you still have your source?  [For the record, I believe your results and mostly agree with your conclusion]
     
    I do agree, the overhead of merging is likely the reason.  Filtering is more of an O(n) operation, while sorting is O(n log n), so given the results need to be done within 3 seconds, filtering is beneficial since it will take less time on larger datasets...but the real benefit comes from the fact that sorting in MT can effectively run the same operations as ST due to the algorithm.
     
    I suspect with careful manipulation you could get a better result with filtering...but then it gets back to whether the time invested is worth it...filtering on a GPU could still potentially be quicker, as it has higher speed RAM and is built for quick comparisons.
     
    While I know it won't match @BotDamian's dataset (especially if run on a Raspberry Pi 4 with a slower processor and RAM), the fact you did 100M in 1.5s at worst does show that he likely won't have to worry as much.
  8. Agree
    wanderingfool2 reacted to Eigenvektor in CPU vs GPU instruction set difference   
    I wrote a small application in Kotlin that generates 100M objects (containing an Int) and then uses stream and parallelStream to filter them.
    What's interesting to see is, as the number of items that remain in the list after the operation completes increases, at some point the single threaded implementation actually starts to be faster than the multi-threaded implementation. My guess would be that merging the results starts to become a significant overhead.
     
    Takeaway: the overhead of merging results can easily eat up all the gains you get from doing things in parallel. So be sure to benchmark with actual data and don't assume that things are faster just because you're running on multiple threads.
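The experiment described above was in Kotlin; a rough Java equivalent using the same stream/parallelStream API looks like this (sizes and the filter predicate are illustrative, not the original code):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamCompare {
    // Filter sequentially vs. in parallel. The parallel version must merge
    // per-thread partial results back into one list — exactly the overhead
    // that can make it slower when most elements survive the filter.
    static List<Integer> filterSeq(List<Integer> data, int threshold) {
        return data.stream().filter(n -> n < threshold).collect(Collectors.toList());
    }

    static List<Integer> filterPar(List<Integer> data, int threshold) {
        return data.parallelStream().filter(n -> n < threshold).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> data = IntStream.range(0, 5_000_000).boxed().collect(Collectors.toList());

        long t0 = System.nanoTime();
        List<Integer> a = filterSeq(data, 2_500_000);
        long seqMs = (System.nanoTime() - t0) / 1_000_000;

        long t1 = System.nanoTime();
        List<Integer> b = filterPar(data, 2_500_000);
        long parMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("seq: " + seqMs + " ms (" + a.size() + " kept), "
                + "par: " + parMs + " ms (" + b.size() + " kept)");
    }
}
```

Varying the threshold changes how many elements survive, which is the knob that exposes the crossover point described above.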
  9. Agree
    wanderingfool2 reacted to Eigenvektor in CPU vs GPU instruction set difference   
    I think you should also take into account that, compared to a CPU-only solution, you'll most likely introduce a fair amount of additional complexity into your codebase. Unless performance in that particular part of your application is absolutely mission critical as mentioned above, it may not be worth it from that perspective alone.
  10. Agree
    wanderingfool2 got a reaction from Eigenvektor in CPU vs GPU instruction set difference   
    It really depends likely on the data.  Honestly though, it's better to have a real use case first and then figuring out if it makes sense to even use a CPU/GPU.
     
    You could optimize sorting to death, but if it only takes 0.1 seconds to sort and only gets called a few times in an application that takes 5 minutes to run it wouldn't make sense.  If sorting is what is the bottleneck sure, then it might make sense...but the best thing would be to run some tests on the hardware you expect it to run on (as all hardware is different)
  12. Like
    wanderingfool2 got a reaction from ZFSinmylungs in The number of hard drives supported in a server?   
    It depends what you mean by limit really.
     
    Yes, power consumption can be a large factor (in the sense that IronWolf drives require 1.8 amps on the 12v rail...so you need to either start up drives separately...slower boots, or have a much beefier power supply/supply the power in a different way).  The idle on IronWolfs is about 5 watts for 12 TB drives, so after startup they don't draw too much power (about 0.12 kWh a day per drive).
     
    A big thing can be the vibration and heat though if you have a lot of drives.  I've seen people using consumer drives in large array NAS setups, and I've seen them fail.  You would need enough spacing between them so you can have airflow.
     
    If you are talking about theory though, there is a space limit.  If you wanted it to appear as one volume (instead of hundreds of drives), the limit is 8 PB...if you are using an older edition of Windows 10 (pre 1709), the limit was 256 TB / volume.  If you ignore linking volumes to a directory, then the max number of volumes would realistically be 26 (208 PB of storage).  With that said, you would be breaking the bank doing that at over 4160 drives (assuming 50 TB drives), drawing roughly 20.8 kW at idle.
     
    A note though: to achieve 8 PB of storage, a cluster size of 2 MB needs to be chosen...while not realistic, in theory you could fill up the space using 4.3 billion files...like 10,000 text files (of 2 kB each) you would expect to consume about 20 MB, but with a cluster size of 2 MB they would consume 20 GB.  (Cluster size defines the smallest unit of space a file can occupy, so every file is rounded up to a whole number of clusters.)  It is why choosing the correct cluster size can be important...if you intend to store a lot of little files, smaller cluster sizes are more space efficient, but they limit how large the volume can be.  The default 4 KB cluster actually caps volumes at 16 TB, so 10k files would be 40 MB.
     
    Real world example: my programming/cache folder totals ~1 million files (rounding for simpler math), with an avg file size of 20 KiB [20 GiB total].  On a volume sized for 128 TB (32 KB clusters), those files would consume at least 32 GB.  On a volume that could grow to 8 PB (2 MB clusters), it would be 2 TB; for 1 PB (256 KB clusters), 256 GB.
     
    So yes there is a limit for the upper bounds.  I am just saying all of this, because there will be someone out there that would create a raid 0 type of array thinking they will just add more space when they need to (and over-provision by selecting the largest cluster size that will allow them to expand the most) without realizing you are sacrificing things by doing so.
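The rounding-up effect described above is easy to compute. A minimal sketch (simplified model: file data is stored in whole clusters; real NTFS can store very small files inline in the MFT):

```java
public class ClusterSlack {
    // Bytes actually allocated on disk for one file: data is stored in whole
    // clusters, so even a tiny file occupies at least one full cluster.
    static long allocated(long fileBytes, long clusterBytes) {
        if (fileBytes == 0) return 0;
        long clusters = (fileBytes + clusterBytes - 1) / clusterBytes;
        return clusters * clusterBytes;
    }

    public static void main(String[] args) {
        long KB = 1024, MB = 1024 * KB;
        // 10,000 files of 2 kB each, as in the example above:
        System.out.println(10_000 * allocated(2 * KB, 4 * KB) / 1_000_000 + " MB at 4 KB clusters");
        System.out.println(10_000 * allocated(2 * KB, 2 * MB) / 1_000_000_000 + " GB at 2 MB clusters");
    }
}
```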
  13. Agree
    wanderingfool2 reacted to Eigenvektor in CPU vs GPU instruction set difference   
    As a high level answer, a CPU is general purpose. Anything you can think of, it can do. That complexity comes at a price. A CPU core is huge compared to a GPU "core" (we now have up to 64, while GPUs have several thousand)
     
    That's because a GPU is far more specialized. A GPU "core" is not comparable to a CPU core in any way. For example a tensor core can do matrix multiplications. That's it. It can't do anything else. However, you have hundreds or even thousands of them. So you can do thousands of these multiplications each clock cycle. Which is great for 3D graphics where you have to do a ton of these operations each frame.
     
    Likewise GPUs are good at stuff like AI or even physics, because you have to do a lot of mathematical operations (like matrix multiplications) that a GPU is good at.
     
    This is also sometimes referred to as SIMD – Single Instruction, Multiple Data. The idea is that you have very few operations you can do, but you can do hundreds or thousands of them in parallel, because (for 3D graphics) you have to do the same operations over and over again (e.g. to determine the color of each pixel).
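To make the matrix-multiplication point concrete, here is the single small operation in question, written out in plain Java (a GPU's value is executing thousands of these at once, not that the operation itself is hard):

```java
public class Mat4 {
    // One 4x4 matrix × 4-vector multiply: 16 multiply-adds. This is the kind
    // of small, fixed operation a GPU (or tensor core) applies across
    // thousands of data elements per clock — same instructions, multiple data.
    static float[] mulMV(float[][] m, float[] v) {
        float[] out = new float[4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r] += m[r][c] * v[c];
        return out;
    }

    public static void main(String[] args) {
        float[][] scale2 = {
            {2, 0, 0, 0},
            {0, 2, 0, 0},
            {0, 0, 2, 0},
            {0, 0, 0, 1},
        };
        float[] p = {1, 2, 3, 1}; // a point in homogeneous coordinates
        float[] q = mulMV(scale2, p);
        System.out.println(q[0] + ", " + q[1] + ", " + q[2] + ", " + q[3]); // 2.0, 4.0, 6.0, 1.0
    }
}
```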
  14. Like
    wanderingfool2 got a reaction from Lurick in Google Sued by 36 states over Play Store fees   
    Not sure how I feel about this.  I do agree that some of the aspects that Google does should not be allowed, but at the same time there is quite a difference between lets say Apple vs Android in this department.
     
    The filing itself says that in 2017, 90% of Android apps were downloaded from the Play Store...while a majority, it at least shows that there are alternatives.  A major reason I use the Play Store over the Samsung store is that Samsung's store is annoying to use (in my opinion).  I have also side-loaded a few apps (when I needed older versions).
     
    The lawsuit makes it seem as though the warnings and cumbersome nature of side-loading are a bad thing (for me, the average user will go through the prompts and realize what they are doing might not be the smartest thing, which I agree with).  I actually find it funny how they say that millions of PC users safely download programs on their computers every day...given the number of calls I get from users, they are not doing it in a safe manner.
     
    Actually, the numbers they showed seem to imply it's 8x more likely to get a PHA from outside the Play Store, while also arguing that it's only power users and users who know what they are doing who download outside the Play Store...so imagine if everyday people did.
     
    With that  said, there are merits to some of the arguments, and I do believe that Google shouldn't be allowed to make all the decisions they currently are...but at the same time they are asking for more than what Apple is up against.
  15. Like
    wanderingfool2 got a reaction from leadeater in iOS Wi-Fi vulnerability leaves devices without Wi-Fi functionality   
    Yea, clarification would help...hoping someone here has a spare iPhone and can check.
     
    I think the confusion is around the fact that a few weeks ago there was a similar thing where you had to join a network, but then there was a tweet regarding hosting a public wifi with a different string.  Specifically this one
     
    But in the thread there seems to be discussion that it's a bit finicky (not all iPhones are susceptible).
  16. Agree
    wanderingfool2 reacted to leadeater in iOS Wi-Fi vulnerability leaves devices without Wi-Fi functionality   
    Hmm would be good to get some clarification around that, hope it's actually trying to connect that causes it and not simply being in range.
  17. Agree
    wanderingfool2 got a reaction from grg994 in iOS Wi-Fi vulnerability leaves devices without Wi-Fi functionality   
    Well, based on the string that triggers it, I wouldn't be surprised if somewhere along the line the SSID name is put through a formatter...like if it's trying to store it locally as ID1:MyWifiName...I could see the code for something like that being
     
    snprintf(buffOutput, sizeof(buffOutput), "ID%d:%s", id, SSID);
     
    but then someone coding it really wrong, like
    stringStreamVar << "ID%d" << SSID; // building a "get ID" format string with the SSID embedded in it
    snprintf(buffOutput, sizeof(buffOutput), stringStreamVar.str().c_str(), id); // the SSID is now part of the format string, so any % specifiers inside it get interpreted
     
    I really wouldn't be surprised if something like this is happening, which would be why a factory reset might be in order if it corrupts the wifi data file.
    While it is a large mistake, and one that shouldn't happen...I'm not sure it would be a full novice mistake...more likely a mistake of not remembering to verify where inputs are coming from.
     
    edit* Reminds me of https://xkcd.com/327/
  18. Agree
    wanderingfool2 got a reaction from wkdpaul in License to Run. $4000 IOT Treadmill now charging a subscripion to just run.   
    Yes they had to do something, and their something is utterly stupid...it doesn't even fully address the safety issue.  Making a safety feature a subscription service is worth complaining about on its own.  Also note that their initial response to all this, instead of recalling, was to pretty much say it was fine and being blown out of proportion...they only really recalled because it seems they would have been forced to otherwise.
     
    It's like the NZXT recall, and subsequent solution of the nylon bolt...yes using a nylon bolt fixed part of the issue but the fundamental flaw still existed.  It's not right to sit by and accept half baked solutions to problems regarding safety.
     
    You need to consider as well that, for a decent number of people, getting another treadmill means researching different brands of treadmills, then dealing with returning the old one.  It's not something someone can do by themselves, so it means scheduling Peloton to come and pick up the item (if they allow such a thing).  A note: it's 455 lbs.  Then you would have to buy your new treadmill and likely hire people to install it.  So overall, you could be out a decent amount of money getting another treadmill.
     
    Look at the 737 Max as an example, the safety feature was a paid add-on.
     
    Another thing to note, the safety issue is that they don't have a physical barrier at the back (so this is only a half solution).  Their solution should have been a retrofitted guard to protect users from being sucked beneath the treadmill
  19. Like
    wanderingfool2 got a reaction from king swag in YouTube making all unlisted videos uploaded before 2017 private   
    An unlisted video on an unmaintained channel...yes, that must be getting so many views that they are worried about bandwidth /s.  In all seriousness though, the bandwidth cost of unmaintained unlisted videos will likely be minuscule compared to the cost of even storing them...I bet a lot of the videos only have a few views
  20. Informative
    wanderingfool2 got a reaction from RageTester in Confused about suitable media consumption hardware   
    I honestly don't know.  Everything is your mileage may vary with this.  If you live in an area where it's typically bright, then aside from commercial equipment it's likely not going to ever work.  The projector I use, for reference, is a LS510U 5000 lumens and outdoors it doesn't perform that well (in my opinion).  It's still usable, don't get me wrong, but yes you would need to make sure the wall is fully covered by a shadow.  Any sunlight, even reflection of light, will likely wash out the image.   Effectively you lose contrast in brighter environments until it reaches a point that you can no longer see it.
     
    With that said, my projector with the lights on I can currently see the image no problem (but it's just so much better with the lights off).
     
    Actually, a note as well: it's a short throw projector, so having a flat surface can be important.  If it's ripply then you can get weird distortions (not everyone cares about it).  It's also 20 pounds and measures nearly 1.5 feet by 1.5 feet by 3/4 feet...movable, yes, but still a bit awkward to carry around sometimes
     
    For blackout blinds, you can always wait and see.  Maybe your room will be dark enough...they aren't necessary (they just help on a bright day).  Before I purchased my blackout blinds, I literally just hung up a quilt over the window and it worked really well.
  21. Like
    wanderingfool2 got a reaction from RageTester in Confused about suitable media consumption hardware   
    Oh, forgot to mention...blackout blinds make a huge difference (I bought cheap ones on amazon that were big enough to cover my windows, and just put in hooks around the windows so I could quickly hang them up when I wanted to use the projector in theatre mode).
     
    While not for everyone, I bought a cheap refurb computer (it came to about $150 and came with Windows)...it wasn't great, but swapping in a SSD into it, and I was able to easily play 4k files, emulation of older systems, and just used a wireless mouse and keyboard.  That way I could just install the apps I wanted, and had the flexibility of a computer.  (MS teams/zoom calls look pretty epic on a 10 foot screen btw)
  22. Informative
    wanderingfool2 got a reaction from RageTester in Confused about suitable media consumption hardware   
    Well it would depend...honestly I think the best way would be if you got the projector and aren't happy with how it's playing the media files then just buy an android tv.
     
    As a question, is this the first projector you are purchasing?  A few notes if it is your first time:
    1) Projectors let out a decent amount of noise when running (this one produces 37 dB)
    2) This particular projector sits 15 inches away from the screen (when doing a 100 inch screen) [the projector is also 15 inches deep...so the back of the projector will be 30 inches from the wall]...keep that in mind, as you might need to keep it further from the wall
    3) Measure your room.  My 120" screen just barely fits in the intended room
    4) Lighting is a killer.  4000 lumens will still let you use it in ok light conditions, but the difference between lights on and off is night and day.  (Direct sunlight will be a killer for image quality)
    5) You will need a sound setup
    6) In this case it has a focus lever...but that means you need to line up the projector to be perpendicular to the screen (and set the height of the projector so it projects onto the area you want)
  23. Agree
    wanderingfool2 got a reaction from Jet_ski in The WAN Show incorrect about too many things Tesla   
    Important to note - I'm not arguing radar is unreliable, I'm arguing it's less reliable than a good vision system.  In absence of a vision system, radar is a good choice.
     
    I'm not saying we'll never solve it...but trying to abandon the technology could be a good move as a whole.  Sure, on an unlimited budget team, you could afford to have a separate team to work on a radar and vision system, but then you also are sacrificing CPU cycles in building a model that can detect situations where it needs to rely on radar over vision.  As a note as well, the first paper is actually more about vision learning.
     
    If the vision system is unable to tell that you are moving towards an accident, then the issue becomes how you know whether the vision system is failing to detect it or radar is having another false positive.  That's where I am saying the solution is hard; it's merging two conflicting streams of sensor data (i.e. solving that problem isn't trivial; it is likely as hard as, if not harder than, solving the vision system).
     
    We do know the shortcomings of radar, but radar inherently gives you a much less detailed picture of objects.  I mean, look at this youtube clip (regarding the next generation of radar that was at one time rumored to be used in upcoming Teslas)
     
    At the 2:38 mark, you can see just how messy the inbound data can be.  This is a sensor that is supposed to be more cutting edge at the moment.
     
    My response in general to yours was about how it's not as simple in regards to building a robust system that can resolve the issues existing between conflicting radar and vision data.  I'd much rather have more of the tensor cores spent on better vision accuracy than trying to work out which sensor is correct.
  24. Like
    wanderingfool2 got a reaction from Spotty in Amazon in the UK, Wasting 124,000 items a week (tech/consumer waste)   
    No.  Just no.  It doesn't matter if you do QC on it, something such as masks should never be resold/donated if they have been returned by someone.  People can easily reseal things to make it look as though it wasn't touched.
     
    Amazon isn't unique in this.  There are other warehouse/distribution partners out there with the exact same policies.  My cousin, for example, works for one of them, and mentioned that for any product where the packaging was damaged, the vendor would have the choice to either give the product to the employees or destroy it.  Destroy was the preferred choice of most vendors (the justification being they didn't want to encourage "accidental" employee damage).
     
    The instant you start donating things as well (especially other companies' products), you start assuming liability.  I mean, even in the video, those power bricks they showed look like the classic power bricks that are rated at under 15A and get overloaded, causing house fires.
     
    Sure, in an ideal world this wouldn't be happening, but it's wrong to single out Amazon...it's likely prevalent in many industries
  25. Agree
    wanderingfool2 got a reaction from LAwLz in Amazon in the UK, Wasting 124,000 items a week (tech/consumer waste)   
    I think the part the article glosses over is that, of the "124,000" items destroyed, it might not be up to Amazon to decide (since it's not their product).  e.g. I send 10,000 items to the Amazon warehouse but only sell 1,000 before giving the order to destroy them.  Amazon can't turn around and sell those 9,000 items; they literally have to destroy them (sure, some they might be able to donate, but that is up to the seller to decide, not Amazon).
     
    The other thing is things like masks: it doesn't matter if they were returned in what appears to be new condition...they aren't allowed to sell them (imagine if someone resealed them after putting poison in them...because there are some sick people out there who would)
     
    The other consideration is that the spreadsheet they used for "124,000" a week seems to be counting by days, and has more than 7 columns...so I don't know if the 124k-a-week figure should be relied upon.  It's also 124k including the amount sent to recycling.