Zulkkis

Member
  • Posts: 312
  • Joined
  • Last visited

  1. The cards are in virtually the same ballpark performance-wise, so if it weren't for the drivers you probably wouldn't even know which one was running in your system.
  2. Caseking.de, EK directly, Alphacool directly, Cablemods (pricy...) direct, eBay
  3. Fractal Define C, Corsair 400C Compact, Lian Li PC-J60 / PC-A05FN / PC-A05FNB / PC-A55 / PC-A51 / PC-A56 / PC-TU300
  4. Have you actually ever tried to rotate the case back into the normal orientation? From what I've heard, natural convection isn't really a thing in a PC case because the fans are much stronger than that. I have a feeling that the case's air cooling performance has next to nothing to do with the motherboard orientation and next to everything to do with using extra-thick 180 mm fans to pull the air in. It could be interesting if there was a case with the standard orientation but the same intake setup, so you wouldn't have all the issues of cumbersome I/O access. Could even be a case project at some point.
  5. We are getting much more efficient with energy all the time, but the problem is that we keep finding new uses for electricity, so total consumption actually goes up. One essential part around the world is the construction quality of homes (lack of multi-layer windows and other things that should be standard), but even there the gains aren't going to change the trend.

Distributed generation generally means smaller plants, but wind and solar aren't going to make the infrastructure costs go down; rather, they shoot up. If the wind is not blowing, you have unusable capacity. That capacity then has to be compensated somehow, or you have to start taking drastic measures to ensure that at least a part of the grid gets electricity that matches the standards, because going significantly over or under the regulated frequency is going to break things. Can you imagine e.g. a restaurant just shutting down at a certain time, or a movie theater? How exactly would you realistically do this? Data shows that human well-being goes up with electricity usage, and more and more parts of the world are joining that fray. Equity concerns are huge, too: even if a system were created where the bill is proportional to income, it would be gamed.

Yes, but battery technology improvements are minimal and we don't even have promising theoretical models that could lead to a breakthrough. As it stands, renewable energy, hydro excepted, does not bode well for grid usage. Averaging out the power doesn't really work; if supply does not meet demand, production needs to ramp up (which both the grid and the generation capacity then need to be scaled for) or demand needs to go down, which essentially means cutting a part of the grid out altogether. That's why some factories have contracts where the power company will actually pay the factory to shut down production if need be. A smart grid enables the use of distributed generation, but it does not get rid of the issues related to its intermittent nature. Only good, long-term storage could, because the power is more or less seasonal.

Renewables have a place, for example in areas that are completely off the grid, where getting the grid wired would cost more than installing a system that meets the rather small power demands. I was working in a business that sold solar panels among other things, and one place where it made sense was summer cottages. Getting the wire there would have cost too much, but summers are bright and heating is not needed. So, with a couple of small LED lights, a recharge point for mobile devices, and a DC fridge along with batteries and a couple of panels, you could keep the system running. It wasn't cheap, but in many cases it was cheaper than the grid. The whole idea starts falling apart when people start treating the place as a permanent residence, because winter especially would cause a lot of issues.

If it were up to me, nuclear would be built up much faster to address the demand for electricity. The waste issue is blown out of proportion, the waste itself can be reused in later reactors, and the risk of accidents is low, while the decreased air quality we get from burning gas or coal causes more deaths every month than nuclear has during its entire operating history.
It's a realistic alternative that actually produces energy to match demand, as it is not intermittent, and I'll take the waste we get from it over all the emissions we get from coal, oil, solar (because you need gas) and wind (because you need gas) any day of the week. Or hydro everywhere, but not everyone can be Norway.
  6. Variable cost as in what? Pretty sure that the transmission equipment (whether you choose to use different wires or build substations) is still a fixed cost, as it goes into the infrastructure. Yes, power demand affects how the grid is built, but the grid is basically built to withstand peak demand, and at all other times it could be much smaller and still manage. In other words, the actual electricity consumed overall doesn't matter for infrastructure, because the grid is scaled to meet peak demand. The cost of the electricity itself depends on what it costs to generate, be it coal, nuclear, solar (and gas) or wind (and gas). Unfortunately, renewables aren't great in everyday grid use because they deliver energy whenever they please and cannot be scaled up or down to meet demand. When sizing the grid, solar and wind can essentially be counted at close to zero, because there are times when they produce nowhere near enough energy to meet demand. If efficient batteries, or efficient ways to store energy other than hydro and as fuel, were possible, I'd be very glad to see how they would fare, but the storage would have to be quite massive considering how seasonal renewable power sources are.
  7. I was initially confused, but now I think I get it. There is a very realistic reason for it if I understood correctly, but I'm confused as to why the billing can be that dumb to begin with. We have a fixed cost + usage-based system, which actually reflects the costs. There are two parts to operating the grid: the grid infrastructure, and the power generation. The infrastructure is a fixed cost; the wires don't really wear out faster if you're using electricity all the time. The power generation portion of your bill should be billed by actual use of electricity.

Case A: Let's say the cost of the grid is $15 / month per person. Then we add a $1 / UsageWatt charge for the electricity (an easy unit for the math, not for realism). A home without panels uses 10 UsageWatts per month, so the final bill is $15 + $10 = $25 / month. A home with solar panels uses 4 UsageWatts per month, so the final bill is $15 + $4 = $19 / month. This reflects the actual costs accumulated per user fairly well.

Case B: Under another billing system, we bill only UsageWatts. In this system the price of electricity is $2.5 / UsageWatt with no fixed cost. A home without solar panels still uses 10 UsageWatts per month, and the final bill is still 10 * $2.5 = $25 / month. A home with solar panels still uses 4 UsageWatts per month, and the final bill is 4 * $2.5 = $10 / month. In case A the solar panel home saves $6 / month, and in case B the savings are $15 / month, a 150% increase in savings. In reality, in case B the solar panel home is paying less than it should if the billing reflected actual costs.

Case C: Exactly what is going on here; the power company only bills consumption, but uses different UsageWatt rates for solar and non-solar homes. For normal homes the cost is $2.5 / UsageWatt, but for solar panel homes the cost is increased to $5 / UsageWatt. Now the non-solar home is still billed $25 / month, but the solar home is billed $20 / month. This reflects the actual costs much better (just like case A), although it still isn't perfect. (The arithmetic for all three cases is sketched in the code below.)

Case A is really how electricity should be billed, if we were to bill by actual costs. The only problem with billing by actual cost is that the price of the electricity itself can be a relatively small part of the total bill, and infrastructure costs can make up the majority depending on the case, so it does not create an effective price incentive to save energy, since you can only save so little. Electricity costs vary a lot depending on geographical location.

For power and water that is correct, because power needs to be generated and water needs to be pumped; both are finite. For communications it doesn't actually make any sense. If the usage rate is just the bandwidth you are paying for, then it is fine. Data caps, however, have no place in any kind of telecommunications network. The only data cap there should be is [time * the speed you paid for = maximum amount of data through the network].

Unless you are aware of some ground-breaking battery technology that nobody else is, using batteries is nowhere near as efficient as using the grid. This is the primary problem with renewables. Batteries can help you somewhat, but they are still mainly used for redundancy, and for a good reason: economically such an investment doesn't make sense. Just for some perspective, the average American household uses 30 kWh of power each day.
Tesla's PowerWall 2.0 costs $5500 and stores up to 14 kWh of electricity. Not to mention that batteries are less efficient than the grid, and that they don't get better as they age, but rather the opposite. Can't really blame you though, since there are plenty of people who have fundamental misconceptions about what electricity is. Take this from the blogger: just for the record, the PowerWall 2.0 is not designed to take you off the grid. It is only intended to get more efficiency out of your solar panels, because even though the efficiency of batteries still sucks compared to the grid, having solar panels generating electricity during the day when nobody's at home to use said electricity is even less efficient.
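A minimal Python sketch of the billing arithmetic from post 7 above, using the post's own made-up numbers ($15 fixed fee, $1 and $2.5 per "UsageWatt", a $5 solar rate, 10 vs. 4 UsageWatts of monthly consumption); the function names are illustrative only, not any real tariff model.

```python
# Sketch of the three billing cases from the post above. All rates and
# consumption figures are the post's hypothetical numbers, not real tariffs.

def bill_case_a(usage, fixed=15.0, rate=1.0):
    """Case A: fixed infrastructure fee plus a usage-based energy charge."""
    return fixed + rate * usage

def bill_case_b(usage, rate=2.5):
    """Case B: usage-only billing, infrastructure folded into the rate."""
    return rate * usage

def bill_case_c(usage, has_solar, rate_normal=2.5, rate_solar=5.0):
    """Case C: usage-only billing with a higher rate for solar homes."""
    return (rate_solar if has_solar else rate_normal) * usage

for label, usage, solar in [("no solar", 10, False), ("solar", 4, True)]:
    print(f"{label:8s}  A: ${bill_case_a(usage):.2f}"
          f"  B: ${bill_case_b(usage):.2f}"
          f"  C: ${bill_case_c(usage, solar):.2f}")
# no solar  A: $25.00  B: $25.00  C: $25.00
# solar     A: $19.00  B: $10.00  C: $20.00
```

The $6 vs. $15 monthly savings, and the 150% figure, drop straight out of the A and B columns.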
  8. A big thing about ultrawide which often goes unnoticed: if you're going to upgrade from a 1080p 16:9 to a 1080p 21:9 monitor, or from 1440p 16:9 to 1440p 21:9, then yes, the ultrawide is going to be "wider" and you gain more space in the horizontal direction. But that's not all there is to it. The comparison gets a little disingenuous when it pits an ultrawide 21:9 1440p / 1600p against a proper 4K display. Yes, a 34" 21:9 is going to be "larger" in the horizontal direction if you compare it to a 27" 4K display, but the PPI is completely different. If you compared the two at equal PPI, the 4K display would be a ~40" display, and at that point it is larger in both the vertical and the horizontal direction. Hell, even the resolution says it: a 3840 x 2160 display is larger in all directions than a 3440 x 1440 display (the PPI math is sketched below). That's why I'm sort of disappointed with how e.g. HardwareCanucks put it in their "comparison video"... it was essentially a comparison between two different monitors, not an accurate explanation of the difference between a 4K 16:9 and a 1440p 21:9 monitor.

You can also run 16:9 letterboxed as a 21:9 display. The borders are usually annoying, but at such display sizes, when you are using a single monitor, it doesn't have the same impact as taking an already-small display and putting black borders on it. You'll forget the borders are even there when running a custom resolution. The viewable area of a 40" 4K letterboxed to 21:9 is still larger than a native 34" 3440 x 1440 ultrawide. Really the key advantage of ultrawides is that the display selection right now is richer. If you don't need adaptive sync and higher refresh rates, you're more than likely to favor a larger 4K display over a 21:9 display: it'll be equally expensive and you get more versatility out of the 4K display, since you can run it at a custom resolution anyway. And for desktop use, even if the display were too large, you can still tile it appropriately to make the most of the space you have.

For actual, competitive gaming you're still just going to want a 16:9 24" display with ULMB, and any sync will not matter, because you're always going to run the games at maximum framerates. A 24" display is ideal because the larger the display gets, the less you can actually see at a glance, and most competitive titles limit the viewable surface to a standardized maximum; 4K etc. displays only give a sharper image, not more of it. So for entertainment purposes or as a "gaming" display an ultrawide is alright; for productivity a similar-PPI 4K display is going to beat it. For movies it depends, though I would take the 4K, because letterboxing to 21:9 is much better than occasionally having black bars to watch 16:9 content on a 21:9 display.

Your display should always cater to your needs. Gaming for entertainment, but not competitively? Ultrawide. Productivity? 40" 4K. Competitive gaming? 24" 120 Hz or higher ULMB display. Professional color work? Save up for an HP DreamColor display or something.
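As a rough check on the PPI and letterboxing figures in post 8 above, here is a small Python sketch; the sizes and resolutions are the ones mentioned in the post, the helper names are made up, and bezels and rounding are ignored.

```python
# Rough check of the PPI and letterbox comparison above. Sizes/resolutions
# are the ones from the post; helpers are illustrative only.
from math import hypot

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch from a resolution and a diagonal size in inches."""
    return hypot(width_px, height_px) / diagonal_in

def physical_size(diagonal_in, aspect):
    """(width, height) in inches for a given diagonal and aspect ratio."""
    height = diagonal_in / hypot(aspect, 1)
    return aspect * height, height

print(round(ppi(3840, 2160, 27), 1))  # ~163 PPI: 27" 4K
print(round(ppi(3840, 2160, 40), 1))  # ~110 PPI: 40" 4K
print(round(ppi(3440, 1440, 34), 1))  # ~110 PPI: 34" 3440 x 1440 ultrawide

# Viewable area of a 40" 16:9 panel letterboxed to 21:9 vs. a native 34" ultrawide.
w40, _ = physical_size(40, 16 / 9)
letterboxed_area = w40 * (w40 * 9 / 21)      # full width, 21:9 slice of the height
w34, h34 = physical_size(34, 3440 / 1440)
ultrawide_area = w34 * h34
print(round(letterboxed_area), round(ultrawide_area))  # ~521 vs ~412 sq in
```

Equal PPI at roughly 110 is what makes the 40" 4K the like-for-like counterpart of the 34" ultrawide, and even letterboxed to 21:9 it keeps about a quarter more viewable area.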
  9. I would assume that they're going to replace it with a model with a better FreeSync range.
  10. They do it because they can. I remember reading a post from someone who was at EA and was told that the average career length of a developer is 5 years. There are more than enough people graduating who just want to work in the games industry no matter what it takes. It doesn't take long until people who had high hopes see the industry for what it is and take their leave.

Especially right after graduating, if you don't have too much going on in your life, you may even want to work longer days. Maybe it is your passion and you really want to get better at it. But if you are constantly "crunching" and not seeing it in your salary - if you're not compensated for overtime, or not given a share of the revenue/profit after a reasonable return on capital is achieved - then you are being taken advantage of, and are nothing more than a sucker. If you want to work more but your employer won't pay more than your base salary, then do the hours in your contract and use your free time to further improve your skills on your own projects.

Developers do need a union. With the increased bargaining power they could forge contracts where their part of the work is recognized, and industry issues like long crunch periods would be addressed. Sadly, in the U.S., where this is the biggest problem, unions are seen as another evil and don't really mesh with the culture of rather extreme individualism. Unions do sometimes end up protecting the old guard already in the system, and sometimes demand far more than what is healthy for the industry, but in general they are blamed for things that are not exactly their fault, and they do a lot of good, too. People think that Henry Ford raised salaries to make sure people could buy his cars, when in fact he did it so that he wouldn't have to deal with unions. Unions help indirectly as well.
  11. Blade 14. For checking e-mails, course work, surfing the net, and other light use, I already have a cheap tablet that kind of manages it all. But I don't have anything mobile with a GTX 1060M in it, and I would also have one less excuse to play games somewhere other than just over the internet...
  12. Everyone who thinks that Tim is on the path to tinfoil-hattery should at least understand the point he is trying to make. He is saying that the current direction gives Microsoft all the rope they need to hang developers if they choose to do so. Microsoft's market position is so strong that, metaphorically, Microsoft already provides the chair we stand on. Now, with these mandatory new update policies, UWP and so on, Microsoft puts a rope around our necks. Microsoft is naturally in a very strong negotiating position, and if they don't get what they want, they could just kick the chair out from under us.

Whether Microsoft will ever kick that chair or not is not really the question here. Similar things have happened before, though - Microsoft fear-mongered about OpenGL support when they launched Vista, claiming that OpenGL commands would be translated into DirectX equivalents (causing a slowdown), and pushed the Xbox, creating a situation where people would rather program for DirectX than OpenGL. Microsoft also pulled something similar with ActiveX during the browser wars. But that is not Sweeney's key argument. Sweeney's argument is that the chair (the monopolistic market position) is already pretty powerful; considering that, we should definitely not be allowing Microsoft to tie a rope around our necks as well. I think it is a perfectly reasonable proposition. Whether the actual kick would ever happen is another thing, just like possessing nuclear weapons doesn't mean you're actually going to use them.
  13. Double precision without ECC memory is pretty damn useless. All it was good for was 3D rendering and the like on GPUs, plus the fact that it released a good while earlier than the actual gaming-level cards like the GTX 780 and 780 Ti. Even then it was still somewhat useful for 3D rendering, because nVidia simply stopped releasing the double-VRAM versions of their cards like they used to - yes, the GTX 780 had a 6GB version, but those were pretty heftily priced and only came out at the end of the chip's lifecycle. It was marketed to prosumers... but the only useful thing was the increased VRAM. Same thing with the Titan X, except it was now marketed entirely to gamers and there was no prosumer angle to it. Kepler was stronger than Maxwell at compute, too.