
Razor512

Member
  • Posts: 428
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    Razor512 got a reaction from wanderingfool2 in I spent two days in my attic for this...   
    To delve more into total cost of ownership, most cloud services become far more expensive than building an NVR. With cloud-reliant cameras, the subscription fees, especially if you want 24/7 recording, can quickly become very expensive. Furthermore, since these are utilitarian items, they are used for years. Think about it: if you are installing security cameras, how long do you think you will need the surveillance (meaning that after that period, you would remove the cameras and never need to monitor your home again)? If it is more than about 2 years, the cloud solution will end up costing more than an NVR solution.
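
    A rough way to see that break-even point is to compare total cost over time. Here is a minimal sketch in Python; the prices are placeholder assumptions for illustration, not quotes from any vendor:

    # Rough total-cost-of-ownership comparison (all prices are hypothetical examples).
    def total_cost(upfront, monthly_fee, months):
        return upfront + monthly_fee * months

    nvr_upfront = 600.0    # assumed one-time cost of cameras + NVR + local storage
    cloud_upfront = 400.0  # assumed cloud camera hardware cost
    cloud_monthly = 12.0   # assumed 24/7 recording subscription per month

    for months in (12, 24, 36, 60):
        nvr = total_cost(nvr_upfront, 0.0, months)
        cloud = total_cost(cloud_upfront, cloud_monthly, months)
        print(f"{months:3d} months: NVR ${nvr:,.0f} vs cloud ${cloud:,.0f}")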
     
    Not only that, but you have more points of failure: anything that goes wrong with your WAN connection, or with the servers the company runs, causes a loss of core functionality for you.
     
    Furthermore, the cloud solutions are always lower quality. For example, look at any of the consumer cloud cameras that advertise 4K: they all keep their peak bit rate under 2Mbps per camera, so they never seem to capture any fine detail. Because they target a large market where most people are stuck with hellishly bad ISPs, e.g., Comcast, they need to cater to people who have 5-10Mbps upload speeds and may want multiple cameras. They are also not interested in building the infrastructure needed for a cloud service that pulls in a 5-10Mbps stream per camera, as that would cut into their profit.
     
    Finally, you are stuck with the issue of cloud reliance and the compromising position it puts you in as a customer.
     
    For example, suppose you buy the $400 Nest 4K camera and you are somehow fine with spending $12 per month for its service. Then 6 months later, Nest execs decide that it is really not fashionable to only have 2 yachts (you wouldn't wear the same 2 shirts all week, so why should you have to use the same yacht each day?), and your service fee is now going to be $29.95 per month. What do you do? You can't access the camera locally.
     
    What happens if the company that makes your IoT hardware pulls a Wink: https://www.theverge.com/2020/5/6/21249950/smart-home-platform-wink-monthly-subscription

    Or pulls a Revolv: https://www.pcworld.com/article/3051760/why-nests-revolv-hubs-wont-be-the-last-iot-devices-knocked-offline.html
    Or a Nextbit: https://www.theverge.com/2018/1/9/16867380/nextbit-smart-storage-cloud-service-shut-down-robin-phone
    Or an Eye-Fi: https://petapixel.com/2016/06/30/eye-fi-brick-older-wi-fi-cards-photographer-arent-happy/
    (a few random examples, though there are tons of companies that have screwed their customers)
     
    Overall, by having a local access option and your own NVR, you have more peace of mind in the security of your investment. You have a lower total cost of ownership: after the initial upfront cost, there are no ongoing fees on hardware that you can use for many years. There is no risk of a company holding your hardware ransom in order to make you engage in the sunk cost fallacy and dump more money into subscriptions.
     
    There is no risk of your hardware becoming an expensive paperweight because the company that made it decided that the device is "end of life", which is another term for "sales for this device have leveled off, and thus it is no longer profitable to continue pouring money into the cloud services for it".
     
    You will have better overall quality in the case of cameras since there is not as much pressure to compromise on video quality in favor of lower upload bandwidth use.
     
    When your hardware relies on a single company for service, then they have a monopoly, and will behave as such. This is especially the case as people are more likely to give in to a bad subscription deal if the alternative is having their expensive hardware become almost completely useless.
     
  2. Agree
    Razor512 got a reaction from Origami Cactus in I spent two days in my attic for this...   
    Depending on the codec, 4K can use quite a small amount of data if there is virtually no movement. Depending on the NVR, you can often get recordings of around 1-1.5Mbps when there is barely any motion in the scene, with things jumping up to around 20Mbps during motion. It all depends on how the VBR is implemented.
     
    In either case, it is best to have enough storage for a few weeks of 24/7 recording from all cameras. Even with high-end systems, motion detection is never perfect, and motion-based recording, where it records until motion stops, can have many false negatives where the recording stops at an inopportune time.
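
    To put numbers on that, here is a quick back-of-the-envelope sketch in Python using the bitrates above; the motion duty cycle, camera count, and retention period are assumptions you would adjust for your own setup:

    # Back-of-the-envelope storage estimate for 24/7 recording.
    # Bitrates follow the figures above; the motion duty cycle is an assumption.
    idle_mbps, motion_mbps = 1.5, 20.0
    motion_fraction = 0.05        # assume ~5% of each day contains motion
    cameras, days = 4, 21         # e.g. four cameras, three weeks of retention

    avg_mbps = idle_mbps * (1 - motion_fraction) + motion_mbps * motion_fraction
    bytes_per_camera_day = avg_mbps / 8 * 1_000_000 * 86_400
    total_tb = bytes_per_camera_day * cameras * days / 1e12
    print(f"~{avg_mbps:.2f} Mbps average per camera, ~{total_tb:.1f} TB for {days} days")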
     
    On the other hand, nearly all modern NVR software will show motion highlights on a 24/7 recording timeline. When you need to save a clip of something, you specify the time frame that you want to export, or the recordings are already available as 30 minute or 1 hour chunks, since they do not write one continuous 24 hour file. This is the safest approach: when motion detection works perfectly, you have perfectly highlighted clips that you can export, but when there are false positives or false negatives, you do not experience any of their negative outcomes. If it detects motion too late, you just go further back in the timeline to see the true start of the activity; if it stops detecting motion before the true end of the activity, you just watch further into the timeline. There is no downside.
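
    To illustrate why the 24/7 approach has no downside, here is a rough sketch of the export logic; the chunk length and file naming are made up for illustration, not taken from any particular NVR:

    # Sketch: with 24/7 recording stored as fixed-length chunks, exporting an event
    # is just selecting every chunk that overlaps the requested time window,
    # regardless of when the motion detector fired. Names and lengths are hypothetical.
    CHUNK_SECONDS = 1800  # 30-minute files

    def chunks_for_window(start_s, end_s):
        """Return the chunk file names that overlap [start_s, end_s)."""
        first = int(start_s // CHUNK_SECONDS)
        last = int((end_s - 1) // CHUNK_SECONDS)
        return [f"cam1_chunk_{i:05d}.mp4" for i in range(first, last + 1)]

    # Detector fired late? Just widen the window; the footage is already there.
    print(chunks_for_window(start_s=3_560, end_s=4_200))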
  3. Agree
    Razor512 got a reaction from TechyBen in Red's Overpriced "Mini Mag" Cards - The Real Story   
    Some companies offer recovery services while targeting business customers, e.g., https://iosafe.com/
  4. Like
    Razor512 got a reaction from TechyBen in Red's Overpriced "Mini Mag" Cards - The Real Story   
    The docking connector used is likely for the purpose of self-cleaning contacts, where the contact surfaces apply a lot more force and are made of a thicker material than gold-plated PCB traces on an m.2 card. The hiding of the connector type is likely secondary to having a connector that is less sensitive to a little dust or other debris getting on it. Furthermore, if the device ever needs servicing, all they need to do is swap the adapter. I don't have a problem with them using those adapters, since the connector type has known utilitarian benefits over a standard m.2 connector when it comes to reliability and not needing to be cleaned as often.
     
    For the data recovery services, that does not justify the pricing, since there are many external drives sold with "free" data recovery that don't have anywhere near such an astronomical markup (most are sold at a 150-200% markup, their enclosures are often water and fire resistant, and they often come with a 5 year data recovery guarantee). The reason they can offer that is that most people will never use it; like homeowners insurance, far more people pay into it than ever use it, so the costs plus profits can be spread out over many people. The only way their data recovery could justify such a large markup is if virtually every customer were experiencing failures that warrant the data recovery service being used.
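
    The arithmetic behind that kind of bundled guarantee is simple; a minimal sketch with made-up failure and claim rates:

    # Why bundled "free" recovery doesn't need a huge markup: only a small fraction
    # of buyers ever claim it. All of these numbers are made-up examples.
    annual_failure_rate = 0.02   # assumed fraction of drives failing per year
    claim_rate = 0.5             # assumed fraction of failures that file a claim
    recovery_cost = 700.0        # assumed average cost of one recovery job
    years_covered = 5

    expected_cost = annual_failure_rate * claim_rate * recovery_cost * years_covered
    print(f"expected recovery cost priced into each drive: ${expected_cost:.2f}")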
     
    But beyond that, recovery is no substitute for redundancy. Even the best and most expensive recovery services are regularly unable to recover 100% of the data, and in some rare cases, nothing can be recovered.
  5. Like
    Razor512 got a reaction from TechyBen in Red's Overpriced "Mini Mag" Cards - The Real Story   
    Wanted to also link this here http://www.reduser.net/forum/showthread.php?150422-All-Out-of-1TB-MINI-MAGs&p=1702571&viewfull=1#post1702571
     
    " The good news is even though the prior-gen drive was 512GB, we were able to preserve nearly the same usable space for you guys on the 480GB. "
     
    https://www.corporationwiki.com/p/2m11n6/brent-carter
     
    It seems that at some point they just started sticking 480GB SSDs in the 512GB enclosures, having the camera firmware report specific SSD models as 512GB regardless of their actual capacity. They would be labeled as a RED Mini-Mag 512GB, though the available storage space reading would reflect that of a 480GB drive.
     
    Thus customers purchasing the 512GB model during that period paid extra for a 480GB drive compared to just buying the 480GB model.
  6. Like
    Razor512 got a reaction from TechyBen in Red's Overpriced "Mini Mag" Cards - The Real Story   
    More of that is delved into in this video.
     
    Wanted to also add: for the 480GB drive in the video, there is no telling the original capacity (assuming a defect-free die) since no datasheets are available.
     
    https://www.ftc.gov/tips-advice/business-center/guidance/complying-made-usa-standard
     
    From the forum posts, it also appeared that at a certain time, the 512GB labeled drives were more expensive than the 480GB drives, even though they started to place 480GB drives into the 512GB enclosures.
  7. Agree
    Razor512 got a reaction from TechyBen in Red's Overpriced "Mini Mag" Cards - The Real Story   
    When I mentioned the 1000 vs 1024, I was not trying to explain what RED was doing; instead I was pointing out an area of the market where formatted capacity can differ and still be accurate.
     
    In the case of RED, in the original video, the previous Mini-Mags that contained true 512GB drives actually had more space; the staff there stated about a 5% difference.
    A 480GB SSD that may contain 512GB of raw NAND does not mean that all 512GB works. One thing that is very common in NAND production is an attempt to sell every functional package on the wafer, and that often means dies where some NAND cells are defective get sold at a lower capacity. Furthermore, budget SSDs like the kind shown in those videos are unlikely to use something like 32GB of over-provisioning; even enterprise SSDs don't go that far at that capacity level. Anyway, the storage comments just reminded me of the UK parliament session where loot boxes suddenly became "surprise mechanics".
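
    To put numbers on the 1000-vs-1024 point and the 480-vs-512 gap, a quick sketch:

    # The 1000-vs-1024 point: drive makers label capacity in decimal GB, while
    # operating systems often report binary GiB, so the same drive "shrinks".
    for labeled_gb in (480, 512):
        as_gib = labeled_gb * 1000**3 / 1024**3
        print(f"{labeled_gb} GB (decimal) reports as roughly {as_gib:.1f} GiB")

    # The separate issue here: the shortfall when a 480GB drive is sold under a
    # 512GB label, which no unit conversion explains away.
    print(f"missing user-addressable space: {(512 - 480) / 512:.1%}")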
     
     
    Edit: for the scam comment, it is based on whether a higher price is justified by misleading information. For example, many people in the US are willing to pay more for something made in the USA; if you lie about where a product is made, you can save with cheap labor from China while getting the profit bump from people willing to accept a higher price for an item made in the US.
     
    Same if you try to market a 480GB drive as 512GB.
  8. Agree
    Razor512 got a reaction from TechyBen in Red's Overpriced "Mini Mag" Cards - The Real Story   
    Seems like Linus is a little too biased towards RED on this, as he applies his hindsight as a reason why everyone should think the same way.
     
    For example, they market the Mini-Mag and talk about it publicly as if it is some highly advanced custom storage solution designed specifically to meet the needs of their cameras with uncompromising reliability, which, as with all custom low-volume technology that must meet high endurance standards, carries a steep price.
     
    They use that aura of false information to trick people into accepting a massive markup. Think of how a bottle of oil extracted from an animal, worth next to nothing, can suddenly be worth a lot of money because someone markets it in a way that gets people to think it will improve their health and prevent diseases.
     
    The Mini-Mag is an overpriced scam; there is no other way to put it. Please also understand that a scam doesn't mean that something doesn't work at all; it can also mean misleading the customer for the purpose of getting extra money from them.
     
    Think of selling someone a lamp that comes preassembled, but, as the store, still charging the customer an assembly fee.
     
    Next, they violate the regulations surrounding terms like "Made in the USA". Comparing US law to UK law is like comparing apples and oranges.
    What RED did exactly follows the example the US government used to describe what a violation of the regulation would look like.
    This all further supports the case that they are essentially engaging in fraud in how they represent the product in order to justify a higher price.
     
    The first reply from the owner of the company stated that the SSDs have special firmware. Then, when it was proven that they did not alter the firmware in any way, they backtracked and stated that the "firmware" is really in the camera, in how it writes to the SSD, and that it is supposed to write the data to the NAND in a better way (which cannot happen with the standard firmware in the SSD). Furthermore, that reasoning is completely flawed: if the firmware is all in the camera, then the markup on the camera is what covers the firmware cost, and the SSD should not carry such a markup.
     
    Also, you claimed they did not demonstrate it recording, just as you also did not demonstrate the issue you claim exists when using a different SSD. Both claims remain unsupported, and verification would require someone to purchase the same model of SSD, swap it into their Mini-Mag, and do a test recording.
     
    PS: for the over-provisioning/spare NAND, all SSDs have that. If you truly believed that statement, you would report the 512GB Samsung 970 Pro as misleading, since it has more than 512GB of NAND due to over-provisioning and spare NAND. No SSD is sold based on its raw NAND; it is sold based on user-addressable NAND. Makers may use different interpretations of the space, e.g., 1000MB per GB instead of 1024MB per GB, but the label is based on how many bytes the user can actually use. And a 480GB SSD will not have 32GB+ of spare NAND.
     
    Raw NAND cannot be used as the basis, especially since most low-capacity drives will have NAND dies that would represent a much higher capacity drive, but due to defects only a smaller amount of the space is usable. For your statement to be valid, it would also have to be valid and legal for a company like AMD to have sold their dual-core Phenom CPUs (AMD Phenom X2) as quad-core CPUs, since they technically had 4 cores; in many cases you could actually use all 4 cores if you unlocked them in the BIOS and down-clocked the CPU significantly for cases where some cores were not stable at stock speeds.
  9. Informative
    Razor512 got a reaction from Lathlaer in Upgrading our WORST Wifi Setup - NETGEAR Nighthawk Pro Gaming Router DUMA OS Showcase   
    If set up right, it can offer good performance. I use it as a wired access point, but under wireless, at a distance of around 10-12 ft line of sight with a wired client connected to the X6S, you can get around 600-650Mbps on the 2-stream radio backhaul, or around 900-940Mbps if using the 3-stream radio as the backhaul. While it will certainly drop at longer distances, the benefit is pretty much the same as using another wireless router that offers a wireless bridge mode.
     
    Basically, you can maintain a higher PHY rate at distance due to better antenna gain in these devices, as well as both endpoints being able to use close to a 1 watt transmit power, while most client devices will be at a 50-100mW transmit power.
  10. Informative
    Razor512 got a reaction from Fnige in I waited 2.5 years for this folding monitor…   
    I wish someone would just create a decent high-res projector focused on endurance and lifespan rather than color quality.
    The main issue is that no one seems to make a projector designed for continuous use (bulb life, even on the LED ones, is rather low).

    One thing I experimented with a while back was setting a projector up as a 3rd display, setting the background to black, then positioning the projector so that it just displays various system stats on the wall surrounding the monitors.
  11. Informative
    Razor512 got a reaction from GNU/Linus in I waited 2.5 years for this folding monitor…   
    I wish someone would just create a decent high-res projector focused on endurance and lifespan rather than color quality.
    The main issue is that no one seems to make a projector designed for continuous use (bulb life, even on the LED ones, is rather low).

    One thing I experimented with a while back was setting a projector up as a 3rd display, setting the background to black, then positioning the projector so that it just displays various system stats on the wall surrounding the monitors.
  12. Agree
    Razor512 got a reaction from GNU/Linus in I waited 2.5 years for this folding monitor…   
    Try running a display calibrator on it. The X-Rite ones can handle front and rear projection.
     
    Use dispcal and measure the accuracy and gamut: https://displaycal.net/
     
    Be sure to leave the white balance set to native, otherwise it will kill the brightness.
     
    Beyond that, it is insanely overpriced given the display size it is designed for. I just wonder how it would compare to other projectors in terms of color reproduction if it were properly calibrated.
  13. Informative
    Razor512 got a reaction from Origami Cactus in I waited 2.5 years for this folding monitor…   
    I wish someone would just create a decent high-res projector focused on endurance and lifespan rather than color quality.
    The main issue is that no one seems to make a projector designed for continuous use (bulb life, even on the LED ones, is rather low).

    One thing I experimented with a while back was setting a projector up as a 3rd display, setting the background to black, then positioning the projector so that it just displays various system stats on the wall surrounding the monitors.
  14. Informative
    Razor512 got a reaction from Tohnz in Just how FAST is WiFi 6?   
    The coverage issue is largely due to government regulation that ultimately makes issues worse. For example, the FCC limits WiFi radios to 1 watt, and any amount whatsoever above that will result in huge fines. Due to this, many AP makers will limit the transmit power to a good margin below the 1 watt limit.
     
    When dealing with mobile phones, it does not make sense to build a 10 watt AP and expect a phone that is extremely far away to connect reliably, since it would not be good to have the phone also use a high transmit power. Most smartphones will do much less than 100mW.
     
    The thing is that since a stationary access point does not have the same power and design constraints, it can be made to use a higher transmit power, while also using components that allow for a better signal path and better receiver sensitivity. This is why a smartphone with an 80mW transmit power can actually get better range when connected to a router such as the Netgear R7800, with a transmit power of over 900mW, than to a router with the same-gain antennas but a 500mW transmit power.
     
    Mobile devices often have negative-gain antennas due to space constraints and form-over-function requirements, so having an AP that listens better and shouts louder will mean a longer effective range.
     
    If you have a router running 3rd-party firmware that truly allows you to adjust the transmit power, you will find that an endpoint set to a transmit power of something like 20mW will often still connect just fine to a strong AP at the other side of a house, though at that transmit power a smartphone may fail to connect at all.
    Imagine if the FCC lifted the limits and router makers pushed out firmware updates to increase the transmit power (since, according to their datasheets, the RF amplifiers used in most upper-end routers can handle multiple watts). With a higher transmit power, you may find that you no longer need 3 APs; you may be able to get away with just 2 while also getting rid of the dead zones, or, if a home is mostly wood with a very low noise floor, you may be able to get by with just 1 strong AP.
     
    It would also reduce cases of someone having multiple APs in a home each using different channels (especially with wireless mesh networks), where the user will effectively be using 60MHz worth of spectrum on the 2.4GHz band, and all non-DFS spectrum on the 5GHz band.
     
    With the above in mind, there are still some benefits to having multiple APs, due to aspects of the signal where there are diminishing returns, such as maintaining a high QAM rate at range: higher transmit power improves the range at which you can maintain 256-QAM, but it does not scale linearly in the real world, especially when the client devices are not also increasing their transmit power. If you need max speed to another local device, e.g., if you run backup software on your phone (I use FolderSync for Android) and want all of your newly created user data copied over to your NAS, then there is a benefit to having an AP in the room you charge in, as your phone may then back up at 400-500Mbps rather than 100Mbps to a distant AP.
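
    For reference, here are those transmit-power figures expressed in dBm, the unit link budgets are usually compared in; the labels are just the examples mentioned above:

    # Convert the transmit powers mentioned above to dBm: dBm = 10 * log10(mW).
    # A 3 dB increase is a doubling of power, which is why going from 500 mW to
    # 1 W helps, but not dramatically.
    import math

    def mw_to_dbm(mw):
        return 10 * math.log10(mw)

    for label, mw in [("typical phone", 80), ("conservative router", 500),
                      ("Netgear R7800", 900), ("FCC limit", 1000)]:
        print(f"{label:>20}: {mw:4d} mW = {mw_to_dbm(mw):4.1f} dBm")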
     
    Did a quick benchmark: with a 2-stream WiFi radio using 802.11ac, this is around the best you will get under ideal real-world conditions (basically 5 ft and line of sight to the AP).

     
    You will get fluctuations from any interference as well as if the device decides to do anything in the background, but 802.11ac can still provide good speeds.
  15. Like
    Razor512 got a reaction from kirashi in Just how FAST is WiFi 6?   
    Since, given enough streams, 802.11ax is fast enough to reach over 1Gbps for a single client, a faster switch is needed, especially if you have a NAS and want to at least take advantage of that speed.
     
    Keep in mind that under good conditions with that device and 802.11ac on the 5GHz band and 80MHz channel width, it can hit about 500Mbps upload and download.
     
    I feel a perfect balance of cost and performance would be for someone to make a switch that has multiple 2.5GbE ports along with a single 10GbE uplink, which will allow multiple APs around a home to maintain good speeds to the rest of the local devices on the network.
     
    Ideally it would also be from a company other than Cisco, or at least from a product line that is not part of their Meraki line, where they use cloud-reliant crippling to price-gouge customers: get them to invest in hardware, then stick them with a subscription that increases over time, with the sunk cost fallacy making it easier to accept a higher monthly fee than to ditch the hardware.
  16. Like
    Razor512 got a reaction from Lurick in Just how FAST is WiFi 6?   
    The coverage issue is largely due to government regulation that ultimately makes issues worse. For example, the FCC limits WiFi radios to 1 watt, and any amount whatsoever above that will result in huge fines. Due to this, many AP makers will limit the transmit power to a good margin below the 1 watt limit.
     
    When dealing with mobile phones, it does not make sense to build a 10 watt AP and expect a phone that is extremely far away to connect reliably, since it would not be good to have the phone also use a high transmit power. Most smartphones will do much less than 100mW.
     
    The thing is that since a stationary access point does not have the same power and design constraints, it can be made to use a higher transmit power, while also using components that allow for a better signal path and better receiver sensitivity. This is why a smartphone with an 80mW transmit power can actually get better range when connected to a router such as the Netgear R7800, with a transmit power of over 900mW, than to a router with the same-gain antennas but a 500mW transmit power.
     
    Mobile devices often have negative-gain antennas due to space constraints and form-over-function requirements, so having an AP that listens better and shouts louder will mean a longer effective range.
     
    If you have a router running 3rd-party firmware that truly allows you to adjust the transmit power, you will find that an endpoint set to a transmit power of something like 20mW will often still connect just fine to a strong AP at the other side of a house, though at that transmit power a smartphone may fail to connect at all.
    Imagine if the FCC lifted the limits and router makers pushed out firmware updates to increase the transmit power (since, according to their datasheets, the RF amplifiers used in most upper-end routers can handle multiple watts). With a higher transmit power, you may find that you no longer need 3 APs; you may be able to get away with just 2 while also getting rid of the dead zones, or, if a home is mostly wood with a very low noise floor, you may be able to get by with just 1 strong AP.
     
    It would also reduce cases of someone having multiple APs in a home each using different channels (especially with wireless mesh networks), where the user will effectively be using 60MHz worth of spectrum on the 2.4GHz band, and all non-DFS spectrum on the 5GHz band.
     
    With the above in mind, there are still some benefits to having multiple APs, due to aspects of the signal where there are diminishing returns, such as maintaining a high QAM rate at range: higher transmit power improves the range at which you can maintain 256-QAM, but it does not scale linearly in the real world, especially when the client devices are not also increasing their transmit power. If you need max speed to another local device, e.g., if you run backup software on your phone (I use FolderSync for Android) and want all of your newly created user data copied over to your NAS, then there is a benefit to having an AP in the room you charge in, as your phone may then back up at 400-500Mbps rather than 100Mbps to a distant AP.
     
    Did a quick benchmark: with a 2-stream WiFi radio using 802.11ac, this is around the best you will get under ideal real-world conditions (basically 5 ft and line of sight to the AP).

     
    You will get fluctuations from any interference as well as if the device decides to do anything in the background, but 802.11ac can still provide good speeds.
  17. Informative
    Razor512 got a reaction from Lurick in Just how FAST is WiFi 6?   
    From what I have seen in user complaints, they refer to the random price hikes within the useful life of the hardware that force you to either endure the assault on your bank account or ditch hardware that is otherwise still good and adequate for your needs.
     

  18. Informative
    Razor512 got a reaction from Results45 in Just how FAST is WiFi 6?   
    From what I have seen in user complaints, they refer to the random price hikes within the useful life of the hardware that force you to either endure the assault on your bank account or ditch hardware that is otherwise still good and adequate for your needs.
     

  19. Like
    Razor512 got a reaction from Lurick in Just how FAST is WiFi 6?   
    If LTT is able to get in contact with anyone at Intel, please see if you can get a review sample of the Intel AX201 and check whether it will be a non-CNVi version.
  20. Agree
    Razor512 got a reaction from Arid in Corsair One   
    As a comment on the mention of the warranty: it is important to understand that the warranty on the prebuilt system is objectively worse.
     
    When you build a PC, you get multiple individual warranties, all of which are longer than the warranty provided by corsair.
     
    They provide you with a warranty that is 2 years long, but requires you to effectively mail the entire system in when something fails.
     
    On the other hand, with a system you build yourself, you are often getting:
    A 3 year warranty on the motherboard, and CPU.
    A 5-10 year warranty on the SSD (depending on model).
    A 3-5 year warranty on the hard drive.
    A lifetime warranty on the RAM.
    A 5-6 year warranty on the liquid cooler.
    Quality power supplies often have a 10 year warranty.
     
    In the past, some parts from a prebuilt system could get warranty service if you used the serial number of the item (they would consider the warranty start to be the date of manufacture instead of the date of purchase if there was no proof of purchase).
    Today, you cannot find any parts like that; instead, they all use special serial numbers that indicate the part came as part of a prebuilt system. These parts are often significantly cheaper, as they are sold to these companies without having to factor in support costs or RMA services.
     
    Overall, when you go prebuilt, you are getting a system that is often less serviceable, especially if there are clauses that can cause you to lose your warranty by opening the system; thus such systems are less likely to get regular maintenance such as cleaning out the dust.
  21. Like
    Razor512 got a reaction from LTTmurphy in The Cheapest Tablet on the Market ($37)   
    One thing to note is that the desktop version of Reddit actually loads faster and requires less data than the mobile version of the website, while offering more functionality.
  22. Agree
    Razor512 got a reaction from DaiGurenMK42 in The gaming PC days are NUMBERED! (Sponsored)   
    Their terms of use seem to be against tasks like that.
  23. Informative
    Razor512 got a reaction from JCHelios in The laptop you CAN'T buy - LG Ultra PC   
    Some companies do use low throttle points in order to reduce the noise levels and potentially improve battery life by getting the device to run at a lower wattage while advertising a higher end CPU.
     
    Dell does this on some of their 2-in-1s. For example, on the Venue series they set the thermal throttle point at 65C, though with a hex edit to some of the CMOS data you can increase it to 95C, which the device will not get close to because of the TDP limit.
  24. Agree
    Razor512 got a reaction from TechyBen in iMac Pro Review – a PC Guy’s Perspective   
    When pricing a build today, keep in mind that most of the price gouging we see is happening at the retail level and not from the OEMs.
     
    For a workstation, it is a bad system from both a performance and value perspective.
    If the system is used for professional tasks which make you money, then the most important factors are performance for the money, total cost of ownership, and future costs from upgrading. On a PC based workstation, roughly half of the hardware will be reused when it is time for a major upgrade.
     
     
    Any computer that is going to be used for actual work should never throttle at full load when new. If it throttles while new, then the cooling is inadequate, and even if they increase the fan speed, the cooling system has no thermal headroom; with even a slight amount of dust buildup on the heatsink you will directly lose performance, and you will essentially have to clean the heatsink every month or two to maintain it.
     
    Beyond that, a prebuilt system is the wrong way to go when it comes to a workstation, as you are more likely to have downtime when something goes wrong; this applies to both PC and Mac. For most prebuilt systems, if you do not have a service center close by, you will often have to mail the entire system in for a repair, even if the issue is with 1 component that is not a complete show stopper.
     
    You also end up with shorter warranties. If you look at the warranties on the individual components of a DIY system, you will notice that most of them are in the 3-5 year range, with a few, like the power supply and SSD, in the 5-10 year range, and things like RAM having a lifetime warranty. With a prebuilt system, you lose all of that for probably a 1-2 year warranty that may require you to mail the entire system in for repair (where you will have to pay for shipping to them).
     
    System builders can often get their components for far less money than a store like Amazon or Newegg can get their inventory, as those components do not have the cost of a warranty and other aftermarket services built into them.
     
     
    Beyond that, for a video editing system, 1TB of internal storage is not enough, and applications like Adobe Premiere Pro, even with a very fast NVMe SSD, still get a performance boost from having the scratch disk and source media on separate drives due to the way it accesses both (simultaneous reads and writes). With the iMac, you will have to buy an overpriced Thunderbolt drive enclosure in order to install a few extra SSDs, at which point you will see that there are no Thunderbolt NVMe enclosures, so you will have to make do with RAID 0 SATA or mSATA SSDs, where you will not get the IOPS of a decent NVMe SSD.
     
    Overall, every attempt to expand beyond the self-contained unit will require you to spend a lot extra and make compromises.
    With a system you build yourself, you will have the ability to install multiple NVMe and PCI express SSDs for a raw 8K workflow (without having to wait for and buy expensive enclosures that will clutter your workspace), and you will be able to have proper cooling so that you don't throttle under a heavy workload.
  25. Funny
    Razor512 got a reaction from Jtalk4456 in They're Building a REAL Nuclear Fusion Reactor! - Holy S#!T   
    If they could build a Core i7-8700K out of components that can handle more heat, then couldn't they use that and overclock it in order to get the temperatures of the hydrogen atoms high enough?