Kisai

Member · 3,936 posts
Everything posted by Kisai

  1. Probably because of how ARM lost control of its subsidiary in China. Y'know, this one. You know the story: you want to do business in China, so China makes you open a JV company, and just when you think everything is hunky-dory, the Chinese company changes the locks on the front door so its JV partners can't get in, and the Chinese partner loots the place of IP. Metaphorically speaking, as this has happened to every business that set up in China and subsequently no longer has a Chinese presence. It also happens every time you outsource your core business (
  2. Reportedly people have been frying their 3080's and 3090's mining Ethereum, and nvidia changed the driver back in 461.09 to apply this throttle already. So yeah, nvidia clearly knows how to throttle Ethereum. https://blogs.nvidia.com/blog/2021/02/18/geforce-cmp/ https://www.theregister.com/2021/02/18/nvidia_gpu_mining_ethereum_crippled/
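     nvidia hasn't published how the 461.09 throttle detects mining, so the following is only a sketch of the kind of heuristic a driver could use; every name and threshold here is hypothetical, not nvidia's actual implementation. Ethash is memory-bound: near-saturated VRAM bandwidth, modest core load, and random reads scattered across a multi-GB DAG.

         # Hypothetical sketch of a driver-side hash-rate limiter heuristic.
         # Nothing here reflects nvidia's real code; it only illustrates the
         # idea of throttling a workload that matches an Ethash-like profile.

         def looks_like_ethash(mem_bw_pct: float, core_util_pct: float,
                               random_access_ratio: float) -> bool:
             # Ethash profile: VRAM bandwidth pegged, core mostly idle,
             # reads scattered randomly across a multi-GB DAG.
             return (mem_bw_pct > 90.0 and core_util_pct < 50.0
                     and random_access_ratio > 0.8)

         def memory_clock_multiplier(sample: tuple) -> float:
             # Halve the effective hash rate while the workload keeps
             # matching the mining profile (the reported limiter behaviour).
             return 0.5 if looks_like_ethash(*sample) else 1.0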
  3. Nope. https://www.eejournal.com/article/arm-saturation-price-hikes-and-possible-spinoff Like, they can't retroactively charge more. Once you've licensed an IP block, you can't take it back. Now there are two alternatives: a) IPO with no voting stock (all preferred shares); common stock (voting shares) is issued only to employees and must be sold back to the company if they leave. This is what companies like Berkshire Hathaway have, where the voting stock and the non-voting stock have different voting differentials, and thus prevents
  4. I'd expect all 30xx cards and future 40xx, etc, to likely have limiters of some kind to make GPU's inefficient at mining. Presently only 4GB+ cards are efficient at it, which rules out the vast majority of cards, the ones only suited to 1080p, from being suitable. Like again, any countermeasures deployed now will have consequences later. Let's assume that "memory loading" (which is what the current countermeasures are) is the only trick in the book: what happens when texture sizes get to 4GB? What about ML loads that are 8, 10 and 12GB right now? Let's just assume nvidia isn't stupid
  5. Probably because Softbank isn't a vertically integrated business. It's more like Berkshire Hathaway: things they acquire just continue to operate as usual. Softbank itself also isn't a typical keiretsu either. Like, in context, all other keiretsu in Japan own a bank, insurance, trading, manufacturing (eg car brands you might recognize) and a shipping company, and are all intertwined through the bank. Nvidia however is a vertically integrated business, which puts them in the same position as AMD and Intel, but also the hated telecom networks in North America. Nvidia has more in common
  6. The safest answer is "yes... but". VPNs encrypt data between two points; they do not protect those on either end. So if you trust both endpoints, then the VPN is fine. Otherwise, your "VPN" doesn't really protect you from anything. Most non-enterprise VPNs are used exclusively for piracy, and to believe otherwise is to be incredibly naive. The operators can be compelled to turn over data if you are paying for it. Y'know, unless you've bought prepaid credit/debit cards months in advance from a different country just to do that. Because again, they can also compel the payment opera
  7. It's likely hardware that doesn't support it will either: a) not work (eg the game might have a large texture set for DirectStorage only); b) work, but have extensive load times (decompression done on the CPU and pushed over PCIe 3 x16 bandwidth); c) work, but use smaller textures (eg the game might get limited to 1080p quality); or d) transcode the textures in software to a lossy format the card already supports. B and D will result in longer loading times than if the game had been designed for a 1080p experience, whereas C might be lossy 2K textures using BC6H/BC7
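     To put numbers on option (d): BC6H/BC7 store each 4x4 texel block in 16 bytes (1 byte per texel), versus 4 bytes per texel for uncompressed RGBA8, so transcoding cuts texture memory to a quarter. A quick back-of-envelope in Python:

         # Back-of-envelope: BC7/BC6H pack a 4x4 texel block into 16 bytes
         # (1 byte per texel), vs 4 bytes per texel for uncompressed RGBA8.

         def texture_mib(width: int, height: int, bytes_per_texel: float) -> float:
             return width * height * bytes_per_texel / 2**20

         for size in (2048, 4096):
             raw = texture_mib(size, size, 4)   # RGBA8
             bc7 = texture_mib(size, size, 1)   # BC7
             print(f"{size}x{size}: RGBA8 {raw:.0f} MiB, BC7 {bc7:.0f} MiB")

         # 2048x2048: RGBA8 16 MiB, BC7 4 MiB
         # 4096x4096: RGBA8 64 MiB, BC7 16 MiB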
  8. 8GB is barely enough to do ML in the first place. 1080p games may get away with 4GB on cheap cards, but the real issue is that there is not enough video memory on ANY card to do 4K or 8K, because textures and video buffers grow with the square of the resolution, while VRAM only doubles. Like, assuming you only had one 16K texture, that is 1GB of video memory. A 4K frame buffer that isn't HDR is 32MB per buffer; HDR requires 40 bits per pixel (10 bits per channel) and needs 40MB per buffer, versus 8MB (32-bit) and 10MB (40-bit) at 1080p. The vast majority of video memory isn't
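     For reference, those figures are straight bytes-per-pixel arithmetic; a quick sanity check in Python:

         # Sanity-checking the buffer sizes above: pixels times bytes per
         # pixel. 32-bit SDR = 4 bytes/px, 40-bit HDR = 5 bytes/px.

         def buffer_mb(width: int, height: int, bits_per_pixel: int) -> float:
             return width * height * (bits_per_pixel / 8) / 1e6

         print(buffer_mb(16384, 16384, 32) / 1000)  # one 16K RGBA8 texture: ~1 GB
         print(buffer_mb(3840, 2160, 32))           # 4K SDR:   ~33 MB per buffer
         print(buffer_mb(3840, 2160, 40))           # 4K HDR:   ~41 MB per buffer
         print(buffer_mb(1920, 1080, 32))           # 1080p SDR: ~8 MB
         print(buffer_mb(1920, 1080, 40))           # 1080p HDR: ~10 MB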
  9. It's very likely the same 3090 card but with smaller VRAM chips. https://www.techradar.com/news/your-nvidia-geforce-rtx-3090-might-secretly-be-an-old-rtx-3080-ti So if you put all the pieces together, one of two things is true: a) the 3080Ti was rebranded as the 3090 due to some part availability issue (eg VRAM), or b) the 3090 was rebranded as the 3080Ti due to some part availability issue. So the only difference between them, other than the scratched-out GPU die code, is the VRAM amount. So who knows, maybe they were binning 3080Ti's from 3090's and ch
  10. Seems to illustrate just how fast and hard something gets dislike-brigaded, but also note how seemingly a lot of it was bot traffic, because two of the videos completely stopped getting dislikes at some point, which tells me they either turned off ratings or made the video private at that point. Anyway, I thought it might interest y'all what the dislike switch looks like: This was the default on this channel. At one point in time, this used to be something that had to be manually unselected.
  11. People keep saying "this will never work" while ignoring the nvlink part of the solution. The point is, again, nvidia has incentives to protect its data center solutions by making it so GEFORCE cards can't be used in high-density deployments over their more expensive solutions. Removing nvlink from the GEFORCE card should have been a sign that no additional GPU's will be supported to run together. You can't initialize nvlink on a card that doesn't have it, and thus it could be configured in the GPU die that it doesn't power on when it's not in the CPU-connected slot at x16. There are
  12. As I said: 1. Do not initialize the card's CUDA if it only sees a x1 link; it would need to be electrically connected at x16 with all lanes connected to the CPU. 2. If it's not connected at x16, look for the nvlink connector presence, and if it can't talk to another card in x16 mode, shut down and stay in a boot-loop (unable to be initialized). A VM doesn't solve this. Gaming systems will always have one x16 slot connected to the CPU, while cheaper systems will either not have the x16 slot, or have the lanes dedicated to the M.2 slots. If nvidia is producing har
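     As a minimal sketch of those two rules (all names here are made up; nothing corresponds to real nvidia driver or firmware code):

         # Hypothetical gating logic for the two rules above; illustrative
         # only, not anything nvidia actually implements.

         def allow_cuda_init(pcie_lanes_active: int,
                             lanes_go_to_cpu: bool,
                             has_nvlink_connector: bool,
                             nvlink_peer_present: bool) -> bool:
             # Rule 1: require a full x16 electrical link direct to the CPU.
             if pcie_lanes_active == 16 and lanes_go_to_cpu:
                 return True
             # Rule 2: on a narrower link, only proceed if there is a live
             # NVLink peer; otherwise refuse to initialize (the boot-loop
             # described above).
             return has_nvlink_connector and nvlink_peer_present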
  13. Probably not. 13" laptops have a lifespan of about 2 years, and U-series chips are phenomenally underpowered. Assuming you can get one with a working battery for cheap (under $1,000), it might hold you over, but if you want something that at least has the possibility of gaming or doing any productive work, get a 15" laptop. 12/13/14" laptops are entirely for travel, not for work. Most people who get away with them for work are doing entirely email/office-suite stuff, not data entry, not programming, not engineering. I'd probably suggest the Latitude 7490
  14. Maybe if nvidia wasn't making mining GPU's. It doesn't need to be an ideological argument; it can simply be "a business trying to maximize its profit by restricting its own products", which they do already with Quadros and data center GPU's. I don't think they should bother trying to limit the functionality on the GeForce cards, but they are being pushed to do it anyway, probably by the business units responsible for the data center. Or did you forget about this: https://www.kitguru.net/components/graphic-cards/joao-silva/blower-style-rtx-3090-graphics-cards-are-being-discontinued/
  15. The problem is that automated driving is trying to solve the wrong problem. Grade-separated high-speed rail between cities, and between municipalities within metropolises, is REQUIRED first, because that removes a lot of unnecessary commuter traffic. Automated driving requires that same level of automation AND then adds the possibility of street-level conflicts with non-automated traffic like crappy ICE cars, crappy light rail vehicles and jaywalking pedestrians. Given how the Ford Pinto was handled, you absolutely don't want the trolley problem to be decided by the computer. There needs to be a ce
  16. Did you ignore what I said on 1 and 2? For case 1, the CARD should look for the PRSNT pins and not initialize CUDA unless electrically connected at x16. Gaming GPU's are always installed in the x16 slot, even if they operate at x8. For case 2, the card does not initialize additional nvidia cards unless they are connected via nvlink. This is something polled from the card's side, eg: 1. The PCIe x16 slot gets motherboard power; the card immediately polls for the nvlink, and if it's not attached and no monitor (no EDID) is attached, it does not initialize. This can be tested f
  17. nVidia could quite literally release incremental updates so that each "stepping" of the same card has additional hardware-locked countermeasures. Like, if I were "nvidia the gamer-friendly, miner-hating", I'd take a lot more steps (that would have collateral damage): 1) Card connected at x1: nerf CUDA (DirectCompute, Vulkan, OpenCL, etc), lock in nVidia ML libraries only (eg TensorFlow); 2) Card connected to no monitor, or a monitor with no EDID (eg dongles): nerf CUDA if nvlink is not connected to another nVidia GPU; 3) Card undervolted/underclocked: nerf CUDA entirely; 4) Multiple
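     Purely for illustration, that wish list expressed as condition-to-response checks (all names hypothetical; none of this is anything nvidia ships):

         # The wish list above as condition -> response checks; purely
         # illustrative, every name here is hypothetical.

         from dataclasses import dataclass

         @dataclass
         class CardState:
             pcie_link_width: int
             has_edid_monitor: bool
             nvlink_peer: bool
             undervolted: bool

         def countermeasures(c: CardState) -> list[str]:
             hits = []
             if c.pcie_link_width == 1:
                 hits.append("nerf general compute; allow vendor ML libraries only")
             if not c.has_edid_monitor and not c.nvlink_peer:
                 hits.append("nerf CUDA (headless card, no NVLink peer)")
             if c.undervolted:
                 hits.append("nerf CUDA entirely")
             return hits

         # A x1-linked, headless, undervolted card trips all three rules:
         print(countermeasures(CardState(1, False, False, True)))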
  18. Apple will never change this policy because it wants its devices (just like Nintendo) to be family-friendly. Sony and Xbox, despite having some very-close-to-R-18 games on them, also do not permit that content. The irony is that Apple is as pushy as 1990s Nintendo when it comes to the games on its platform. Google's ad policy is ALSO even more pushy than Apple's and Nintendo's. I can't count how often I'd get reports from Google about "sexual content" on a page, only to go to the page and see nothing worse than sideboob/butt-crack type "nudity" in a cartoon context.
  19. Considering how few of them are showing up in the Steam Survey, I'd say the vast majority of GTX 10xx, RTX 20xx and RTX 30xx GPU's that were headed to retail wound up in Ethereum mining rigs; the cards at the top of the survey are all unprofitable to mine on. I do want to point out that the only GPU showing up in OEM systems (eg Dell, HP) is the RTX 3070, which explains its position on the list. So pretty much no 3090's or 3060's found their way into gamer PC's. They're hovering around the 4th-gen Intel iGPU's for distribut
  20. The camera resolution often doesn't matter unless you are doing lectures. If you're doing a lecture and have props or other objects you need to show, then yes, a YUY2 1080p or 4K webcam (USB 3) is better than the typical 720p MJPEG (USB 2) camera in laptops. However, if you are actually in an environment where you were, say, doing a TEDx Talk or presenting in a university lecture hall, you would want someone to actually operate a camera on a tripod or steadicam. If you are just having 3-24 people on the same call, it's often pointless to have a >720p camera because the stream will only s
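     The bandwidth math is why the USB generation matters here: YUY2 packs 2 bytes per pixel, and uncompressed 1080p30 simply doesn't fit in USB 2's nominal 480 Mbit/s (which is why USB 2 laptop cameras ship MJPEG instead). A quick check:

         # Uncompressed YUY2 stream bandwidth: 2 bytes per pixel, so 1080p30
         # blows past USB 2's nominal 480 Mbit/s and needs USB 3.

         def stream_mbps(width: int, height: int, fps: int,
                         bytes_per_px: float) -> float:
             return width * height * bytes_per_px * fps * 8 / 1e6

         print(stream_mbps(1280, 720, 30, 2))   # ~442 Mbit/s: marginal on USB 2
         print(stream_mbps(1920, 1080, 30, 2))  # ~995 Mbit/s: impossible on USB 2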
  21. Even if you buy a prebuilt (eg not a Dell or HP, but something like CyberPower), you aren't getting away from scalped GPU and CPU prices. Dell and HP locked in prices months or years in advance, so they have parts at MSRP, but unfortunately they also don't have the configurations people want, so you would have to settle for something where you'd throw expensive parts away to upgrade. Like, if you can wait 2 years, just buy something that will hold you over (eg laptops tend to be a better deal), as all the PCIe 5/DDR5 equipment should be out by then as well. Otherwise, unless you haven't
  22. Both the CPU and GPU are unobtainium, and the prices quoted for both are likely coming from Amazon scalpers. That said, it feels like you may have just picked the most expensive part for a few of these, so I'd suggest re-evaluating whether you need those parts (eg WiFi on the motherboard, a 1200W PSU, ten 120mm fans). In general, any "new build" should start with the CPU choice, then picking the MB that goes with it (both ASUS and ASRock are considered high-end brands, but not all boards are made equal, with the $500 boards usually having barely any advantage over the $300 boards, just
  23. Pretty much. Like, any P2P messaging system is preferable to a centralized one; however, even then (see Jabber), without some kind of centralized, federated identity storage there's no way to prevent spoofing, which is the present problem with SMS and phone calls. What will end the existing POTS system won't be people all switching to wireless, but a unified service like Jabber replacing phone numbers with whitelisted (eg friends-list) contacts, so that unverified and unexpected contacts will not be able to make contact and spoof their identity in the first place.
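     A minimal sketch of that whitelist-first model (all identifiers hypothetical): a message is delivered only when the identity layer has verified the sender and the sender is already on the recipient's friends list.

         # Whitelist-first delivery: unknown or unverified senders can't
         # even ring you, which removes the spoofing incentive outright.

         def deliver(message: dict, contacts: set) -> bool:
             sender = message.get("verified_sender_id")  # set by the identity layer
             if sender is None or sender not in contacts:
                 return False  # unverified or not whitelisted: drop silently
             return True       # whitelisted: deliver as normal

         contacts = {"alice@example.net", "bob@example.net"}
         print(deliver({"verified_sender_id": "alice@example.net"}, contacts))   # True
         print(deliver({"verified_sender_id": "spoofer@evil.example"}, contacts))  # False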
  24. Their intentions are to take 100% of the profit and cut Apple entirely out of their customer-acquisition commission. This is literally nothing new. Remember such charming attitudes as: https://www.forbes.com/sites/markrogowsky/2015/12/10/walmart-drops-an-atomic-bomb-on-its-applepay-competitor/?sh=5e90eef126a4 Where is CurrentC now? https://arstechnica.com/information-technology/2016/06/currentc-retailers-defiant-answer-to-apple-pay-will-deactivate-its-user-accounts/ All CurrentC did was delay adoption of NFC payments by two years. Oh and Walmart sti