Everything posted by Zulkkis

  1. The cards are in virtually the same ballpark of performance, so if it weren't for the drivers you probably wouldn't even know which one was running in your system.
  2. Caseking.de, EK directly, Alphacool directly, Cablemods (pricey...) directly, eBay
  3. Fractal Define C, Corsair 400C Compact, Lian Li PC-J60 / PC-A05FN / PC-A05FNB / PC-A55 / PC-A51 / PC-A56 / PC-TU300
  4. Have you actually ever tried to rotate the case back into the normal orientation? From what I've heard, natural convection isn't really a factor inside a PC case because the fans are much stronger than that. I have a feeling that the case's air cooling performance has next to nothing to do with the motherboard orientation and nearly everything to do with using extra-thick 180 mm fans to pull the air in. It would be interesting if there were a case with the standard orientation and the same intake setup, so you wouldn't have to deal with the cumbersome I/O access. Could even be a case project at some point.
  5. We are getting more efficient with energy all the time, but the problem is that we keep finding new uses for electricity, so total consumption actually goes up. One essential part around the world is the construction quality of homes (lack of multi-layer windows and other things that should be standard), but even there the gains aren't going to change the trend. Distributed generation generally means smaller plants, but wind and solar aren't going to make infrastructure costs go down; rather, they push them up. If the wind is not blowing, you have unusable capacity. That capacity then has to be compensated for somehow, or you have to start taking drastic measures to ensure that at least part of the grid gets electricity that meets the standards, because going significantly over or under the regulated frequency is going to break things. Can you imagine, e.g., a restaurant just shutting down at a certain time, or a movie theater? How exactly would you realistically do this?
Data shows that human well-being goes up with electricity usage, and more and more parts of the world are joining that fray. Equity concerns are huge; even if a system was created where the bill is proportional to income, it would be gamed. Yes, but battery technology improvements are minimal and we don't even have promising theoretical models that could lead to a breakthrough. As it stands, renewable energy, hydro aside, does not bode well for grid usage. Averaging out the power doesn't really work; if supply does not meet demand, production needs to ramp up (and both the grid and the production capacity need to be scaled for that) or demand needs to go down, which essentially means cutting part of the grid out altogether. That's why some factories have contracts where the power company will actually pay the factory to shut down production if need be. A smart grid enables the use of distributed electricity production, but it does not get rid of the issues related to its intermittent nature. Only good, long-term storage could, because the power is more or less seasonal.
Renewables have a place, for example in areas that are completely off the grid, where getting wired to the grid would cost more than installing a system that meets the rather small power demands. I was working in a business that sold solar panels among other things, and one place where they made sense was summer cottages. Getting the wire there would have cost too much, but summers are bright and heating is not needed. So with a couple of small LED lights, a recharge point for mobile devices, and a DC fridge along with batteries and a couple of panels, you could keep the system running. It wasn't cheap, but in many cases it was cheaper than the grid. The whole idea starts falling apart when people treat the place as a permanent residence, because winter especially would cause a lot of issues.
If it were up to me, nuclear would be built up much faster to address the demand for electricity. The waste issue is blown out of proportion, the waste itself can be reused in later reactors, and the risk of accidents is low, while the decreased air quality we get from burning gas or coal causes more deaths every month than nuclear has during its entire operating history.
It's a realistic alternative that actually produces energy to match demand, since it doesn't have the intermittency problem, and I'll take the waste we get from it over all the emissions we get from coal, oil, solar (because you need gas) and wind (because you need gas) any day of the week. Or hydro everywhere, but not everyone can be Norway.
  6. Variable cost as in what? Pretty sure that the transmission equipment (whether you choose to use different wires or build substations) is still a fixed cost, as it goes into the infrastructure. Yes, power demand affects how the grid is built, but the grid is basically built to withstand peak demand; at all other times the grid could be much smaller and still manage. In other words, the actual electricity consumed overall doesn't matter as far as infrastructure goes, because the grid is scaled up to meet peak demand. The cost of the electricity itself as consumed depends on what it costs to generate the energy, be it coal, nuclear, solar (and gas) or wind (and gas). Unfortunately, renewables aren't great in everyday grid use because they deliver energy in whatever way they wish and cannot be scaled up or down to meet demand. When building the grid, solar and wind can essentially be counted as close to zero, because there are times when they don't produce anywhere near enough energy to meet demand. If efficient batteries, or efficient ways to store energy other than hydro and as fuel, were possible, I'd be very glad to see how they would fare, but the storage would have to be quite massive considering how seasonal the renewable power sources are.
  7. I was initially confused, but now I think I get it. There is a very realistic reason for it if I understood correctly, but I'm confused as to why the billing can be that dumb to begin with. We have a fixed cost + usage-based system, which actually reflects the costs. There are two parts to the operation of the grid: the grid infrastructure, and the power generation. The infrastructure is a fixed cost; the wires don't really wear out faster if you're using electricity all the time. The power generation part of your bill should be billed by the actual use of electricity.
Case A: Let's say the cost of the grid is $15 / month per person. Then we add a $1 / UsageWatt charge for the electricity (an easy unit for the math, not for realism). A home without panels uses 10 UsageWatts per month, so the final bill is $15 + $10 = $25 / month. A home with solar panels uses 4 UsageWatts per month, so the final bill is $15 + $4 = $19 / month. This reflects the actual costs accumulated per user fairly well.
Case B: Under another billing system, we only bill for UsageWatts. In this system, the price of electricity is $2.5 / UsageWatt with no fixed costs. A home without solar panels still uses 10 UsageWatts per month, and the final bill is still 10 * $2.5 = $25 / month. A home with solar panels is still using 4 UsageWatts per month, and the final bill is 4 * $2.5 = $10 / month. In case A, the solar panel home saves $6 / month, and in case B, the savings are $15 / month, a 150% increase in savings. In reality, in case B the solar panel home is paying less than it should if the billing were appropriate.
Case C: Exactly what is going on here; the power company only bills consumption, but uses different UsageWatt rates for solar and non-solar homes. For normal homes the cost is $2.5 / UsageWatt, but for solar panel homes the cost is increased to $5 / UsageWatt. Now the non-solar home is still billed $25 / month... but the solar panel home is billed $20 / month. This reflects the actual costs much better (just like case A), although it still isn't perfect. (There's a small worked sketch of these three cases after this list.)
Case A is really how electricity should be billed, if we were to bill by actual costs. The only problem with billing by actual cost is that the price of the electricity itself can be a relatively small part of the total bill, and infrastructure costs can make up the majority of it depending on the case, so it doesn't create an effective price incentive to save energy, since you can only save so little. Electricity costs vary a lot depending on geographical location.
For power and water that is correct, because power needs to be generated and water needs to be pumped; both are finite. For communications, it doesn't actually make any sense. If the usage rate is just the bandwidth you are paying for, then it is fine. Data caps, however, have no place in any kind of telecommunications network. The only data cap there should be is [time * the speed you paid for = maximum amount of data through the network].
Unless you are aware of some ground-breaking battery technology that nobody else is aware of, using batteries is nowhere near as efficient as using the grid. This is the primary problem with renewables. Batteries can help you somewhat, but they are still mainly used for redundancy, and for a good reason; economically such an investment doesn't make sense. Just for some perspective, the average American household uses 30 kWh of power each day.
Tesla's PowerWall 2.0 costs $5500 and stores up to 14 kWh of electricity. Not to mention that batteries are less efficient than the grid, and that they don't get better as they age - rather the opposite. Can't really blame you though, since there are plenty of people who have fundamental misconceptions about what electricity is. Take it from the blogger: just for the record, the PowerWall 2.0 is not designed to take you off the grid. It is only intended to get more efficiency out of your solar panels, because even though the efficiency of batteries still sucks compared to the grid, having solar panels generate electricity during the day when nobody's at home to use it is even less efficient.
  8. A big thing about ultrawide which often goes unnoticed: if you're upgrading from a 1080p 16:9 to a 1080p 21:9 monitor, or from 1440p 16:9 to 1440p 21:9, then yes, ultrawide is going to be "wider" and you gain more space in the horizontal direction. But that's not all there is to it. The comparison gets a little disingenuous when it comes to ultrawide 21:9 1440p / 1600p versus a proper 4K display. Yes, a 34" 21:9 is going to be "larger" in the horizontal direction if you compare it to a 27" 4K display, but the PPI is completely different. If you compared the two at equal PPI, the 4K display would be a ~40" display, and at that point it is larger in both the vertical and the horizontal direction. Hell, even the resolution says it: a 3840 x 2160 display is larger in all directions than a 3440 x 1440 display (see the PPI sketch after this list). That's why I'm sort of disappointed with how e.g. HardwareCanucks put it in their "comparison video"... it was essentially a comparison between two different monitors, not an accurate explanation of the difference between a 4K 16:9 and a 1440p 21:9 monitor. You can also run 16:9 letterboxed as a 21:9 display. The borders are usually annoying, but at such display sizes, when you are using a single monitor, it doesn't have the same impact as taking an already-small display and putting black borders on it. You'll forget the borders are even there when running a custom resolution, and the viewable area after letterboxing a 40" 4K to 21:9 is still larger than a 34" 3440 x 1440 ultrawide. Really, the key advantage of ultrawides right now is that the display selection is richer. If you don't need adaptive sync and higher refresh rates, you're more than likely better off picking a larger 4K display over a 21:9 display; it'll be equally expensive and you get more versatility out of the 4K display, as you can run it at a custom resolution anyway. And for desktop use, even if the display is too large, you can still tile it appropriately to make the most of the space you have. For actual, competitive gaming you're still just going to want a 16:9 24" display with ULMB, and any sync will not matter, because you're always going to run the games at maximum framerates. A 24" display is ideal because the larger the display gets, the less you can actually see at a glance, and most competitive titles limit the viewable surface to a standardized maximum; 4K etc. displays only get a sharper image, not more of it. So for entertainment purposes or as a "gaming" display ultrawide is alright; for productivity a similar-PPI 4K display is going to beat it. For movies it depends, though I would take the 4K, because letterboxing to 21:9 is much better than occasionally having black bars to watch 16:9 content on a 21:9 display. Your display should always cater to your needs. Entertainment gaming competently, but not competitively? Ultrawide. Productivity? 40" 4K. Competitive gaming? 24" 120 Hz+ ULMB display. Professional color? Save up for an HP DreamColor display or something.
  9. I would assume that they're going to replace it with a model with a better freesync range.
  10. They do it because they can. I remember reading a post from someone who was at EA, and he was told that the average career length of a developer is 5 years. There are more than enough people graduating who just want to work in the games industry no matter what it takes. It doesn't take long until people who had high hopes see the industry for what it is and take their leave. Especially after graduating, if you don't have too much going on in your life, you may even want to work longer days; maybe it is your passion and you really want to get better at it. But if you are constantly "crunching" and not seeing it in your salary - if you're not compensated for overtime, or if you are not given a part of the revenue/profit after a reasonable return on capital is achieved - then you are being taken advantage of, and are nothing more than a sucker. If you want to work more but your employer is not paying you more than your base salary, then do the hours in your contract and spend the free time on your own projects to further improve your skills. Developers do need a union. With the increased bargaining power they could forge contracts where their part of the work is recognized, and industry issues like long crunch periods would be addressed. Sadly, in the U.S., where this is the biggest problem, unions are seen as another evil and don't really mesh with the culture of rather extreme individualism. Unions do sometimes end up protecting the old guard already in the system, and sometimes demand far more than what is healthy for the industry, but in general they are blamed for things that are not exactly their fault and they do a lot of good things, too. People think that Henry Ford raised salaries to make sure that people could buy his cars, when in fact he did it so that he wouldn't have to deal with unions. Unions do help indirectly as well.
  11. Blade 14. For checking e-mails, course work, surfing on the net, and other light use, I already have a cheapo tablet that kind of manages it all already. But I don't have anything mobile with GTX 1060M on it, and I would also have one less excuse to play games somewhere else than just over the internet...
  12. Everyone who thinks that Tim is on the path of tinfoil-hattery should at least understand the point he is trying to make. He is saying that the current direction gives Microsoft all the rope they need to hang developers if they choose to do so. Microsoft's market position is so strong that we could metaphorically say Microsoft has put a chair under us, on which we all stand. Now, with these mandatory new update policies, UWP etc., Microsoft puts a rope around our neck. Microsoft is naturally in a very strong negotiating position, and if they don't get what they want, they could just kick the chair from under us. Whether Microsoft will ever kick that chair is not really the question here. Similar things have happened before, though - Microsoft fear-mongered about OpenGL support when they launched Vista, saying that OpenGL commands would be translated into DirectX equivalents (causing a slowdown), and pushed the Xbox, creating a situation where people would rather program for DirectX instead of OpenGL. Microsoft also pulled something similar with ActiveX during the browser wars. But that is not Sweeney's key argument. Sweeney's argument is that the chair (the monopolistic market position) is already pretty powerful; considering that, we should definitely not be allowing Microsoft to tie a rope around our necks. I think it is a perfectly reasonable proposition. Whether the actual kick would ever happen is another thing, just like possessing nuclear weapons doesn't mean you're actually going to use them.
  13. Double precision without ECC memory is pretty damn useless. All it was really good for was 3D rendering and the like on GPUs, plus the fact that it released a good while earlier than the actual gaming-level cards like the GTX 780 and 780 Ti. Even then it was still somewhat useful for 3D rendering because nVidia simply stopped releasing the double-VRAM versions of their cards like they used to - yes, the GTX 780 had a 6 GB version, but those were heftily priced and they only came out at the end of the chip's lifecycle. It was marketed for prosumers... but the only useful thing about it was the increased VRAM. Same thing with the Titan X, except that one was marketed entirely at gamers and there was no prosumer angle to it. Kepler was stronger than Maxwell at compute, too.
  14. It's actually pretty simple to see, and I can completely understand FM's stance, though it probably isn't a good enough benchmark when it comes to async. We see bigger differences in both Hitman and DOOM, and that's for a reason, but FM's stance is more that they want to see how the average game is going to use DX12, and it seems they're looking more closely at titles like Tomb Raider because they believe it is closer to the kind of generic situation you'll find out there while DX12 titles are only starting to roll out. So FM is trying to make their benchmark a sort of generic predictor of future game performance rather than just another glorified tech demo that uses experimental features which never actually get used in any real games. Further on, on the Anandtech forums there was someone from FM commenting that they also have to keep the average consumer in mind when they're doing their benchmarks, because if the consumer's hardware cannot run it (as in, at all) they'll get complaints about that. For PC developers, not focusing heavily on async makes a lot of sense. Not everyone has a full idTech team doing their best to optimize their game to the max, so developers end up min-maxing in the best ways available to them. Especially on PC it makes sense to skip async on titles that are just about to release, because frankly, only AMD hardware really does it, recent nVidia hardware only sort of does it, and older hardware in practical terms doesn't do it at all. So when nVidia and their 1-2 generation old cards probably make up >50% of the whole market, it doesn't make sense for PC devs to treat async as a vital feature. That said, the elephant in the room is the consoles. DOOM wouldn't be focusing so heavily on async if the game were only released on PC. Because the potatoboxes have AMD hardware with async compute capabilities, they had it on the consoles and brought it to the PC. If console titles adopt async compute in a heavy way, then those benefits should also come to the PC. But again, FM's view was stated in the PCPerspective video - they think it should be roughly accurate for the majority of titles over the next 1-3 years. As in, more Tomb Raider.
  15. I wouldn't buy a Fury X at this point. It's a competent card, but the 4 GB of VRAM holds it back at times. Newegg had it at $399 a while ago, but I guess that batch sold out. I wouldn't pay for a G-Sync display either. A few dollars is the approximate value of the G-Sync module - nVidia is making bank selling those when they could just support the DisplayPort standard. Or hell, support both - but we know what consumers would decide on... I would honestly wait for the HBM 2.0 cards - big Pascal and Vega. You already have an R9 290, and with a FreeSync display it should do a good job for a while. But of course, how much you value your money depends on how much disposable income you actually have.
  16. My current R9 290 needs either increased volts or decreased memory clocks to run properly - a replacement might be needed any time now. During the summer I definitely don't need more heat sources in my room, so switching to a Polaris chip would help in that regard.
  17. Dunno... people took the benchmarks far too positively and they weren't official. There wasn't official communication beyond saying that it is a card for VR (hitting that GTX 970 / R9 290 territory) and that it would be more power efficient. Judging from the official communication alone, only the power efficiency is a letdown in light of what nVidia is capable of. Far too many other promises fell flat with the Fury X imho, but that's just AMD's story at the moment - they're relatively behind, and their ownership of the console market and the performance gains from the new APIs have yet to catch wind under their wings. That said, historically, when nVidia was in the same position, the GTX 480 was supposed to be the ATI killer. It came 6 months late, and even then it was still a bad buy except for people who would watercool their cards without question and care only about performance regardless of price. Pretty sure that whatever the gigantic 14 nm cards turn out to be, they will, with HBM2 and much more silicon, outperform the GTX 1080 pretty easily, especially since they will be aimed at 4K and high-performance VR. Otherwise it is just a waste of all that bandwidth.
  18. No point in upgrading... it's a sidegrade. If you had nothing, the RX 480 would be a good buy; if you already have a 390, keep it. Also, multi-GPU is in sort of a limbo right now. Many titles just flat-out don't support it; even UE4 uses deferred rendering that doesn't support it. VR can be a good bet since you're essentially driving two screens. DX12 should enable arbitrary multi-GPU combinations, but only Ashes is currently doing that afaik. So right now, unless you're keen on getting higher performance in VR, not really.
  19. TressFX was made open source pretty quickly and nVidia could optimize their solution for it. To be honest, a hardware vendor meddling with middleware brings a lot of issues, especially if they pay developers to use it. There are too many incentives to make it perform badly on the competitor's cards. The only time a hardware vendor should be making middleware is, imho, when the hardware brings genuinely new capabilities that couldn't be done before (hence no software for it), and even then developers and others alike should be able to optimize for the solution. It also creates incentives to make the "stock" solution deliberately sub-par when you can toggle vendor-specific eye candy up and have a selling point that way.
The GTX 970 was indeed a 3.5 GB card, or effectively so. Yes, there is 4 GB on the card, but the last 0.5 GB is pretty much the equivalent of the 512 MB of DDR2 on some old GT 320. Some people tested it with Blender and it slowed down significantly after 3.5 GB, to the point where you'd rather render with a CPU. The most damning factor for me, though, is that nVidia only came clean about it after users had found out.
nVidia doesn't gimp their old cards, but neither do they care about them all that much after a new series launches. They were never built to last, whereas AMD has been using the same architecture since the HD 7000 days and many current improvements can be utilized further down the line. Just looking at the relative performance of Kepler cards signals this. nVidia's advantage has relied heavily on drivers and optimizations in the DX11 environment, so once an older series actually drops out of focus, you're going to see performance degradation in newer titles. I'm not saying that nVidia can't make great hardware - they certainly can, but they want to keep costs as low as possible. It makes sense for nVidia to do business like this: less hardware means less heat, which is good for OEMs and laptops, while less hardware also brings cost per unit down, and they sell the majority of the cards. If nVidia really wanted the GTX 1080 to perform even better, they could have done that - the card doesn't push the power limits - but they don't have to. And besides, the stock would be even more anemic than it already is.
The PCI-E thing is probably not a large issue, but if it stresses anyone, they should wait. As for the history of these things, if we had an epidemic of broken PCI-E slots and motherboards, you would have heard of it by now. As for the RX 480 itself, it sort of let me down - performance was slightly below expectations, but more importantly, the perf/watt isn't that great compared to the competition. The GTX 1070 is in a different performance league altogether yet has identical power consumption. Fury does indeed beat the 980, and Fury X rivals the 980 Ti in some niche situations, but even I have stated on numerous occasions that the 980 Ti is the better card of the two. Regardless... I wouldn't pick the Fury unless you can find a used one for a good price or something.
  20. "Learn Python The Hard Way", I've heard a lot of positive stuff about that one.
  21. G600 for productivity-related uses, it can't be beat.
  22. if(yo==A[0][0]==A[0][1]==A[0][1] - that last one should be A[0][2], I believe. The thing is, a comparison produces true or false, which gets converted to an integer: false becomes 0 and true becomes 1. So when you do yo == A[0][0] and that returns true, it becomes 1. Then you are NOT comparing yo (or A[0][0], for that matter) to A[0][1]; instead, you are comparing 1 == A[0][1]. What you should do instead is use the && operator, as in if (yo==A[0][0] && yo==A[0][1] && yo==A[0][2]), and then the results start making more sense (there's a small compilable sketch after this list). What they said about the code is true though: putting everything inside int main() doesn't get you anywhere in the long run, and more descriptive variable names aren't a problem with autocomplete - unless going for the minimum possible character count is the end goal.
  23. My card has a garbage ASIC quality (~60%) and it responded to voltage pretty well. From what I've heard, if you can keep the temps in check, overvolting is relatively harmless since the limits are pretty tight anyhow, so unless you flash a custom BIOS you can't really start degrading your GPU the way you can with a CPU. I managed +200 mV / 1225 / 1600, and while it ran Firestrike perfectly, it crashed after about an hour of gameplay that wasn't even that intense. Took it back to stock for now (1030/1400). From what I've heard, beyond ~1080-1100 MHz the gains don't really show up that much, and overclocking further doesn't yield much additional real-world performance.
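
Below are the sketches referenced above. First, a minimal worked version of the three billing cases from post 7, using the same made-up numbers from that post (the $15 fixed fee, 10 vs. 4 UsageWatts, and the per-UsageWatt rates); the variable names are just for illustration.

    #include <cstdio>

    // Illustrative billing sketch for the three cases in post 7.
    // All figures are the made-up numbers from the post, not real tariffs.
    int main() {
        const double fixedFee   = 15.0;  // grid infrastructure, $/month
        const double usageHome  = 10.0;  // non-solar home, UsageWatts/month
        const double usageSolar = 4.0;   // solar home, UsageWatts/month

        // Case A: fixed fee + $1 per UsageWatt
        double aHome  = fixedFee + 1.0 * usageHome;   // $25
        double aSolar = fixedFee + 1.0 * usageSolar;  // $19

        // Case B: no fixed fee, $2.5 per UsageWatt for everyone
        double bHome  = 2.5 * usageHome;   // $25
        double bSolar = 2.5 * usageSolar;  // $10

        // Case C: no fixed fee, but a higher rate for solar homes
        double cHome  = 2.5 * usageHome;   // $25
        double cSolar = 5.0 * usageSolar;  // $20

        std::printf("Case A: %.2f vs %.2f (solar saves %.2f)\n", aHome, aSolar, aHome - aSolar);
        std::printf("Case B: %.2f vs %.2f (solar saves %.2f)\n", bHome, bSolar, bHome - bSolar);
        std::printf("Case C: %.2f vs %.2f (solar saves %.2f)\n", cHome, cSolar, cHome - cSolar);
        return 0;
    }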
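
Next, a small sketch of the PPI argument from post 8: at equal pixel density, a 3840 x 2160 panel is larger in every direction than a 3440 x 1440 one. The ppi() helper is a hypothetical convenience written for this example, not something from any library.

    #include <cmath>
    #include <cstdio>

    // Pixels per inch for a panel of the given resolution and diagonal size.
    double ppi(int w, int h, double diagonalInches) {
        return std::sqrt(double(w) * w + double(h) * h) / diagonalInches;
    }

    int main() {
        double ultrawide = ppi(3440, 1440, 34.0);  // ~110 PPI for a 34" ultrawide

        // Diagonal a 4K (3840 x 2160) panel would need to match that density:
        double matching4k = std::sqrt(3840.0 * 3840 + 2160.0 * 2160) / ultrawide;  // ~40"

        std::printf("34\" 3440x1440 panel: %.1f PPI\n", ultrawide);
        std::printf("4K panel at the same PPI: about %.1f inches diagonal\n", matching4k);
        return 0;
    }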
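
Finally, a compilable version of the comparison fix from post 22. The array contents and the variable name yo are placeholders; the point is that chaining == compares the 0-or-1 result of the first comparison against the next element, so each element has to be compared explicitly and the results combined with &&.

    #include <cstdio>

    int main() {
        int A[1][3] = { {7, 7, 7} };  // placeholder data
        int yo = 7;

        // Wrong: (yo == A[0][0]) evaluates to 1, and that 1 is then compared to A[0][1].
        // With these values the whole expression is false even though every element matches.
        if (yo == A[0][0] == A[0][1] == A[0][2])
            std::printf("chained comparison matched (only by accident, if ever)\n");

        // Right: compare yo against each element and combine the results with &&.
        if (yo == A[0][0] && yo == A[0][1] && yo == A[0][2])
            std::printf("&& version: yo matches every element of the row\n");
        return 0;
    }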