ichihaifu

Member
  • Posts: 19
  • Joined

ichihaifu's Achievements

  1. We often see over-the-top items that normal (and even enthusiast) buyers will realistically never consider. Instead, could we see more items like HDMI/DP switches, splitters, laptop/USB/Thunderbolt docks, speakers, USB storage, etc. reviewed and compared? There are so many completely garbage items in these categories, and it is impossible to tell what you are getting before buying and finding out. The first two categories in particular are nigh impossible to find trusted reviews for, and they are items commonly used in WFH offices.
  2. It's actually not "all good". Something really weird has been going on for some months now (some people have traced it as far back as the Windows 1903 release) with DWM and Nvidia graphics cards. I've come across two larger threads about it so far with a little googling: https://answers.microsoft.com/en-us/windows/forum/windows_10-performance/desktop-window-manager-dwmexe-using-high-usage-gpu/a14dae9b-8faf-4920-a237-75ebac8073f5 https://steamcommunity.com/discussions/forum/11/1643170269568692270/ ...and a ton more when you do a simple 'desktop window manager high gpu' search. People describe somewhat different symptoms, but it basically narrows down to DWM getting stuck doing something it shouldn't and causing insanely high GPU usage when there should be next to no activity.

     I'm unfortunately also sitting in the boat with this problem. Before playing any DirectX game I have no issues, but as soon as I fire something up (my test cases have been Monster Hunter World and Rimworld, so both performance extremes are covered), my desktop becomes unresponsive thanks to DWM. Programs still function, albeit they might as well be in a stranglehold given their performance. When I exit the game, I'm left with a flickering, unresponsive desktop that only goes away after I log out and back in (or kill DWM, but that also breaks almost every other open program, so relogging is about as fast, and definitely simpler). I'm nearing the point of completely reinstalling Windows, but some posters tried rolling back to previous Windows and driver versions only to have the problem come back after a while; needless to say, that makes me hesitate.
  3. This video made me realize something fairly obvious but critical about Unraid: you will be limited to single-drive performance when accessing data that has been offloaded from cache, given the way it builds its storage pools. I suppose that is fair when you don't care about performance, or if your use scenario is one where you almost never touch "old data". EDIT: I don't remember an SSD cache being installed on this system, so no wonder the performance was choppy (and given very little screen time). In this use case it would've always been choppy unless there was a striped RAID SSD cache in front of the spinners large enough to accommodate the entire active data set.
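Since Unraid places each file on a single data drive rather than striping across the array, the read ceiling for uncached data is roughly one drive's throughput. A minimal back-of-the-envelope sketch of that point, using assumed ballpark figures rather than benchmarks of any real setup:

```python
# Illustrative throughput estimate for an Unraid-style pool vs. a striped
# SSD cache. All figures are assumed ballpark numbers, not measurements.

HDD_MBPS = 180        # sequential throughput of one spinning drive
SSD_MBPS = 500        # one SATA SSD
CACHE_DRIVES = 2      # SSDs in a striped cache pool

# Unraid stores a given file on a single data drive, so reads of
# non-cached ("old") data are capped at one drive's speed,
# regardless of how many drives are in the array:
uncached_read = HDD_MBPS

# A striped cache pool, by contrast, scales with drive count:
cached_read = SSD_MBPS * CACHE_DRIVES

print(f"uncached read ceiling: {uncached_read} MB/s")
print(f"striped SSD cache:     {cached_read} MB/s")
```

The gap only matters while your working set spills past the cache, which is exactly the "old data" case described above.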
  4. The original RPi could stream 1080p YouTube videos, and I used an RPi 2B as a home theater for some time. There's no reason an RPi 3B wouldn't be able to.
  5. Since it's an Android device, did you try out the Steam Link app? Since the Steam Link hardware was pulled from the store, this could be a very real alternative with added features.
  6. Stability is only a small part of it. In fact, if you only looked at stability, there is no real reason you couldn't just blindly pick up any other Linux distribution and roll with it. By far the biggest contributors are the support channels and deployment/configuration tools that come with the package, plus the availability of certification programs and a free platform to demo and develop on. RHEL (Red Hat Enterprise Linux) is so simple to roll out and manage at scale, and combined with the very good training programs they offer, hiring a capable workforce to develop and support the platform is relatively cheap, as opposed to hiring very experienced Linux developers to work on other platforms. In the shadow of the above, stability is secondary and usually gets considered afterward. Smaller shops can roll with CentOS until they grow large enough to need RHAT support, at which point they can easily move up to RHEL licensing models and become entitled to it.

     The main concern with IBM comes from their lack of innovation and toxic profit-first business culture. If there is no profit to be had, it will get shaved. This makes the freely distributed tools exetras mentioned prime targets for such cuts if the business cannot find a way to make a profit out of them, which they absolutely will try, because this transaction is massive enough to go down in the history books. There will not be any immediate impact, but in the long term this is at least as concerning as Article 13 in Europe and should be monitored with care. (Maybe not entirely on that scale, because the general consumer will never see the impact, but in the enterprise space it is very considerable news.) Here is a news article that will hopefully make the circumstances a bit easier to understand: https://gizmodo.com/what-will-become-of-linux-giant-red-hat-now-that-it-sol-1830074632/amp
  7. I'm not happy at all about this acquisition. Having worked with and at IBM, this can be very bad for RHEL-based products, like the very widely adopted CentOS and Fedora. (IBM is an extremely profit-driven enterprise, which leaves no room for private non-commercial products.) Not to mention Ansible: it has been a very promising and actually quite widely used system automation and configuration tool in Linux enterprises. It would be a shame if it gets the blue hat treatment, i.e. a bill slapped on every version of it that is not AWX. There is also a negative impact for existing RHAT customers through the new and renewed contracts, support, and licensing models that IBM brings with it. Commercial support, too. Also RHCSA and the other certifications provided by Red Hat; basically, they have official education channels for their product.
  8. I can somewhat understand a 1-2 day battery when your use case involves a morning/evening recharge. However, when you don't carry the charger with you on a business trip that lasts several days, or on a backpacking trip for example, the devices become just useless wrist accessories. It's actually somewhat odd to me that proper battery life still, to this date, is not one of the primary selling points for wearable tech, and that the optimizations toward it are only now starting to reach what a large, quiet portion of 'waiting buyers' would consider useful (personal opinion).
  9. I've been looking for a smartwatch, but every single one of them seems abysmal when it comes to battery life, with even the "best" ones boasting 4 days of it, which I would consider pretty worthless. Then I remembered that most phones can actually squeeze quite a bit more juice out of the battery if you set up the active components correctly. Using my work iPhone SE as an example: by default it barely got 1 day of battery life, but after stripping away most of the active components, synchronizations, etc. (while still keeping core features like mail and wifi functioning), I managed to get up to 7-9 days depending on how much mail I read on it. Do smartwatches allow similar adjustments? EDIT: We can ignore the Pebble and the Amazfit Bip; I'm aware of their existence, but apparently they are fairly inaccurate in a lot of the activity-related features, so I'd be better off with a regular fitness tracker if I wanted one.
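The effect of trimming background activity is mostly arithmetic: runtime is roughly battery capacity divided by average draw. A minimal sketch with hypothetical numbers (not the specs of any real device) that happens to reproduce a roughly 1-day vs. 7-9-day difference:

```python
# Rough battery-life estimate: days = capacity / average draw / 24.
# All numbers are made up to illustrate the effect of trimming
# background sync; they are not specs of any particular device.

CAPACITY_MWH = 6000  # assumed small-phone battery (~1600 mAh @ 3.8 V)

def battery_days(avg_draw_mw: float) -> float:
    """Days of runtime at a constant average power draw (milliwatts)."""
    return CAPACITY_MWH / avg_draw_mw / 24

default_draw = 240   # lots of background sync, push, radios always busy
trimmed_draw = 30    # radios mostly idle, sync on demand only

print(f"default: {battery_days(default_draw):.1f} days")
print(f"trimmed: {battery_days(trimmed_draw):.1f} days")
```

The same logic is why cutting a watch's always-on features should stretch its runtime disproportionately, if the OS exposes those toggles at all.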
  10. I do know (and if you simplify it, they are quite similar). As for hardware, not a huge amount: they already have all the necessary equipment to do a basic SAN; they'd just need to rearrange it and redo the systems running on the hardware to accommodate that. Simply put, "yes". But as I initially said, at this point it would be a waste of time because they are already imitating a fail-over. Edit: A very simplified image of what I had in mind:
  11. I recall that the petabyte was only an archive server, where frankly this (load balancing, as you put it) isn't even needed. The high-performance storage looks to me like a janky hack that only somewhat achieves fail-over. It could all be integrated into a single SAN system and simply made available as a separate high-performance pool to the working hosts.
  12. I came here wondering the same. Most likely an oversight at the design stage. Redundancy. Load balancing. Easier maintenance. As far as I could see, they already kind of have something like this in place in case their primary share fails, but it requires them to change network maps, and I can see that being a major pain in the ass when it happens.

      If they do the redundancy part correctly, they could set up 2 storage boxes, split the high-performance storage between them in separate storage pools, and serve them to 2 separate CPU/GPU virtualization servers hosting the VMs doing video work (split the resources in both cases, so in the worst case there is only degradation). This means you can live-migrate the servers doing work from one host to another while taking literally half of your storage down if needed, without anyone needing to do anything extra on their workstations. You could even go full gung-ho and fully equip both halves of the redundant setup with all current resources to avoid any possible degradation during maintenance. But as I mentioned, they already have something like this in place, so it's probably something they don't want to dedicate time to (it's basically redoing the entire infrastructure design, which would probably seem like a waste of time at this stage).
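The "worst case is only degradation" property of the split-pool layout can be sketched as a toy model; the box names and per-box throughput below are hypothetical, chosen only to show the accounting:

```python
# Toy model of the split-pool idea: two storage boxes, each serving half
# of the high-performance pool to its own virtualization host. Names and
# throughput figures are made up for illustration only.

POOL_MBPS_PER_BOX = 2000  # assumed per-box throughput

boxes_online = {"storage-a": True, "storage-b": True}

def available_throughput(boxes: dict) -> int:
    """Aggregate throughput of whatever boxes are still up."""
    return sum(POOL_MBPS_PER_BOX for up in boxes.values() if up)

print(available_throughput(boxes_online))  # both boxes up: full speed
boxes_online["storage-b"] = False          # take one down for maintenance
print(available_throughput(boxes_online))  # degraded, but nothing offline
```

With a single shared pool, that second call would drop to zero instead; the split turns a total outage into a capacity hit.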
  13. What about OnePlus' 'Dash Charge'? It should be pretty much the same, but the power delivery is different, as I understand it.
  14. Is the assembly doable without special tools (I mostly have just a couple of screwdrivers at hand)? I have the time and patience to tinker a little, but I don't want to buy additional tools.