
Arreat

Member
  • Posts

    35

Profile Information

  • Gender
    Not Telling
  1. People like to form opinions from clickbait articles. I usually don't comment, but I just can't help myself here. No one anywhere is mentioning that the game is built on the old code, and that old code was infused with Nvidia tech back in the day. They probably did a hasty job of converting it to the new stuff, and in that haste they didn't change the function that swaps the HBAO implementation for normal SSAO on some AMD cards (it should happen on all of them). Why they didn't expose it as an option, I don't know, but it can be done automatically (a vendor check along the lines of the sketch below). The second point is even more important: Unreal Engine 3, in the last days of its development, was leaning very heavily on PhysX as its physics engine, so much so that the new UE4 uses PhysX entirely for its physics. In fact, most of you on AMD hardware have already run games perfectly fine with PhysX in the back end. Just really wanted to get that off my chest, and I probably will not answer more. Have fun discussing.
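     A minimal sketch of that kind of vendor-based fallback. The AOTechnique enum and SelectAOTechnique function are hypothetical, invented for illustration; only the PCI vendor IDs are real:

     ```cpp
     #include <cstdint>

     // Hypothetical ambient-occlusion fallback keyed on the GPU's PCI vendor ID.
     enum class AOTechnique { HBAO, SSAO };

     constexpr uint32_t kVendorNvidia = 0x10DE; // Nvidia's PCI vendor ID
     constexpr uint32_t kVendorAMD    = 0x1002; // AMD's PCI vendor ID

     // Keep HBAO on the hardware it was tuned for; fall back to plain SSAO
     // everywhere else, with no user-facing option required.
     AOTechnique SelectAOTechnique(uint32_t gpuVendorId) {
         return (gpuVendorId == kVendorNvidia) ? AOTechnique::HBAO
                                               : AOTechnique::SSAO;
     }
     ```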
  2. It is absolutely what happened; most big developers that license those engines rewrite or add their own stuff, and even then, it is a feat worth mentioning. OK now, that is very, very disputable and your opinion. Fact is, the engine is very old and usually runs very well on a large variety of machines; it is very scalable. For it to run this badly, the devs messed up really badly somewhere.
  3. What baffles me is that this game runs on the Unreal 3 engine. The real question here is not why they didn't optimise, but HOW they managed to make that engine run badly even on mid-range hardware. I'm sorry, but unless they somehow mistakenly released the Xbox One/PS4 build for PC, I cannot fathom how the game is not running well. Remember, the old Batmans ran the same engine, with PhysX and every other Nvidia bell and whistle, and still got 100+ frames on hardware that was beefy then and is mid-range now.
  4. Yeah, nothing really new here; if we look at some old benchmarks of the 290X, it is almost the same thing. Also, don't be fooled by the modest 50MHz core overclock; the 1GHz memory overclock does show, just like it did between the GTX 680 and the 770 (the bandwidth math sketched below shows why). It beating the 980 is not really that surprising, as we know the 980s and 970s have a crazy low reference clock; non-reference designs raise it by a monster 250MHz in some cases. I want to see the overclocking results first, but what I think is that the 290X is still the card to recommend. With an aftermarket cooler you are almost guaranteed to reach the same level of performance for less money.
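     To put rough numbers on the memory-overclock point: bandwidth is effective memory clock times bus width divided by eight. The stock figures below are the real specs of those cards; the +1GHz effective overclock on the 290X is an assumption for illustration:

     ```cpp
     #include <cstdio>

     // Memory bandwidth in GB/s from effective clock (MHz) and bus width (bits).
     double BandwidthGBs(double effectiveClockMHz, int busWidthBits) {
         return effectiveClockMHz * busWidthBits / 8.0 / 1000.0;
     }

     int main() {
         // R9 290X: 512-bit bus, 5000 MHz effective GDDR5 at stock.
         std::printf("290X stock:  %.0f GB/s\n", BandwidthGBs(5000, 512)); // 320
         std::printf("290X +1GHz:  %.0f GB/s\n", BandwidthGBs(6000, 512)); // 384
         // GTX 680 vs 770: same 256-bit bus, only the memory clock changed.
         std::printf("GTX 680:     %.0f GB/s\n", BandwidthGBs(6008, 256)); // ~192
         std::printf("GTX 770:     %.0f GB/s\n", BandwidthGBs(7010, 256)); // ~224
         return 0;
     }
     ```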
  5. I didn't say that 4xMSAA is needed at 4K, just that the TechPowerUp example uses it. And we don't know the setup of the AMD Far Cry test. If they used FXAA, it is absolutely possible to have a disparity of 50 avg vs 35 avg, even though under identical conditions the cards would actually be extremely close. What I'm saying is: before everyone goes crazy over these results, let's see some reviews and benchmarking. If these results are true, then at that price it would almost be too good to be true.
  6. This looks very nice, but let's focus on the small star next to the Ultra* settings; there is something behind it. We don't know what Ultra means in this case, or what level of AA was used. The TechPowerUp benchmark used for reference runs, as they always say, 4xMSAA if the game has the option, yes, even at 4K. So that might not be the perfect comparison.
  7. So much paranoia over DX12 compatibility. I guess I can't blame people; it's a big update with lots of nice features, and we have never before had such a big update without needing new GPUs. Yeah, sure, some cards won't have some features, but those are incredibly minor features. Everything that is big about DX12 will be enjoyed by GCN-architecture cards, and by Kepler and above, for SURE (engines can even query each optional feature at runtime; see the sketch below). The absence of some minor stuff will exist in theory only; no one will actually notice it. The performance improvements, the combined memory, and mixed cards in SLI/Crossfire will still be there. Let's not forget one thing: devs were making DX9-compatible games until just a year ago. Do you honestly think they will make games only for the "highest" level of DX12? That's crazy. They are all picking DX12 precisely because it has such broad backwards compatibility. In the end, companies will use everything they can to advertise; they will grab this small, insignificant detail and blow it way out of proportion. That's how you sell new cards, after all.
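     On the "some cards won't have some features" point: D3D12 exposes the optional bits through a runtime query, so an engine picks code paths per card instead of requiring the highest feature level. A minimal sketch, assuming an already-created ID3D12Device:

     ```cpp
     #include <d3d12.h>

     // Ask the device which optional D3D12 capabilities it supports; tiers
     // differ between GCN, Kepler, and Maxwell, and the engine branches on them.
     void CheckDx12Features(ID3D12Device* device) {
         D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
         if (SUCCEEDED(device->CheckFeatureSupport(
                 D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options)))) {
             // Examples of the "minor" optional features discussed above.
             D3D12_RESOURCE_BINDING_TIER binding = options.ResourceBindingTier;
             D3D12_TILED_RESOURCES_TIER  tiled   = options.TiledResourcesTier;
             (void)binding; (void)tiled; // select render paths accordingly
         }
     }
     ```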
  8. That could be the way they do it; it makes sense. The question then is how much memory will actually be gained in multi-GPU configurations, since if the tiles are smaller, even more textures and resources will have to be duplicated on both GPUs. I was going very briefly over the DX12 documentation on resources; it often mentions "heaps" and "pages", which serve as containers for multiple resources. Maybe GPUs will be able to address parts of such a heap, reducing the amount of data that needs to be stored in VRAM? (A rough sketch of heaps and placed resources follows below.) I'm just speculating; I have only a surface-level understanding of graphics APIs.
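     For the curious, the "heaps" in the documentation are real API objects: you create one block of GPU memory and place several resources into it at offsets. A minimal sketch of the actual D3D12 calls, assuming an existing ID3D12Device; the sizes are arbitrary:

     ```cpp
     #include <d3d12.h>

     // Create a 64 MB heap and place a 16 MB buffer into it at offset 0.
     // More resources could be placed at other offsets in the same heap.
     HRESULT CreateBufferInHeap(ID3D12Device* device,
                                ID3D12Heap** heapOut,
                                ID3D12Resource** bufferOut) {
         D3D12_HEAP_DESC heapDesc = {};
         heapDesc.SizeInBytes = 64 * 1024 * 1024;
         heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT; // GPU-local memory
         heapDesc.Alignment = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
         heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;

         HRESULT hr = device->CreateHeap(&heapDesc, IID_PPV_ARGS(heapOut));
         if (FAILED(hr)) return hr;

         D3D12_RESOURCE_DESC bufDesc = {};
         bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
         bufDesc.Width = 16 * 1024 * 1024;
         bufDesc.Height = 1;
         bufDesc.DepthOrArraySize = 1;
         bufDesc.MipLevels = 1;
         bufDesc.SampleDesc.Count = 1;
         bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR; // required for buffers

         return device->CreatePlacedResource(*heapOut, /*HeapOffset=*/0, &bufDesc,
                                             D3D12_RESOURCE_STATE_COMMON, nullptr,
                                             IID_PPV_ARGS(bufferOut));
     }
     ```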
  9. All of it seems very promising. I wonder how this split rendering will work without tearing (considering it should also work without vsync); probably the GPUs will at least have to be synced to each other in the driver. Also, the "double frame buffer" claim seems misleading: this will absolutely increase the usable memory, but not by two times. You will always have textures that appear on both "sides" and must be loaded on both GPUs; how much memory you gain will actually depend on the game, the scene, and the size of the assets (a back-of-the-envelope sketch is below). Still interesting, though; some extra memory is better than what we have now. About dev support, I'm not worried in the least. So many big engines and big studios are working on implementing DX12, and talking about near-future games that will use it, that we will probably see the fastest DX adoption ever.
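     One way to put numbers on the duplication point: assets visible on both halves of the frame must live on both cards, while the rest can be split between them. With per-GPU memory M and shared data S, each card holds S plus half the unique data, so the total distinct data that fits is 2M - S. A sketch with made-up figures:

     ```cpp
     #include <cstdio>

     // Effective asset pool for two cards under split-frame rendering:
     // shared data is duplicated, unique data is divided between the GPUs.
     double EffectivePoolGB(double perGpuGB, double sharedGB) {
         return 2.0 * perGpuGB - sharedGB;
     }

     int main() {
         // Two hypothetical 4 GB cards; the shared-data amounts are made up.
         std::printf("everything shared: %.1f GB\n", EffectivePoolGB(4.0, 4.0)); // today's SLI
         std::printf("3 GB shared:       %.1f GB\n", EffectivePoolGB(4.0, 3.0));
         std::printf("1 GB shared:       %.1f GB\n", EffectivePoolGB(4.0, 1.0));
         return 0;
     }
     ```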
  10. The ad actually says that THIS will be the card that brings 4K to 99% of gamers, and that if you are tired of 1080p, this will be it. And 4GB could prove to be absolutely enough; you haven't seen HBM in action yet. Hell, even 4GB GDDR5 cards are doing fine today... just look at the 295X2 vs the Titan X: the former beats it in every 4K benchmark, yet it has only 4GB per GPU.
  11. Well, a month ago I bought an EVGA 970 FTW; unfortunately it had a big coil-whine issue. It was unbearable even with the fps locked very low. I exchanged it for a G1, and I couldn't be happier. Not only is there no coil whine, but this card is a beast overclocker and runs at least 5-6 degrees cooler. I am running it right now at a 1515MHz clock, not even touching the voltage. There is only one problem with the G1, and that is the idle fan speed. By default the fans run too fast, I think, but I fixed it pretty easily with the tutorial on this forum (and Linus's video) that shows how to get fanless mode on any 970. (I didn't set the fans to turn off, but I lowered the minimum speed at idle.)
  12. That's just it: I don't think anyone is holding back textures or high-res shadow maps (those really can be whatever size you want them to be). I am guessing what was cut was better lighting, very high-poly models, very dense particle effects, stuff like that. Why don't they release them? Well, they probably created them just for that one video, which means they are unfinished. And yes, Watch Dogs had some options "hidden", and that was stupid. But those effects, the ones I personally tried, just did not make the game look like the reveal video; they were slightly better, but not nearly as good as that "vertical slice". I can't imagine that's the case here. There is only one thing I can't excuse: view distance (though we shall see when they release it). That should just be left to the player to adjust; it would not consume dev time, and it is inexcusable if left as in the console version. As for the rest of the cut content, unfortunately very little of it would take little to no time to restore.

      About the rest, well, I said it already: I would rather have games run great on today's hardware. It's not like I want triple-A titles to run maxed on a mid-range card. I like to revisit games, yes, but not really for that reason. As I said, resolution and high amounts of AA would still give you the effect you desire. Say in two years you buy the greatest GTX 1180, delivering twice the performance of a Titan X; then you should probably get a 4K or 5K or whatever screen is fashionable, slap supersampling AA plus temporal AA on top, and run it at 120 fps. You see, for the enthusiast there is no shortage of "make this game look amazing and justify the large amounts of money I spend on my hobby". For everyone else, though, games should really stick to running great on what we have now.
  13. First off, the VXGI example is not completely accurate, since that would pretty much require making two games at the same time, just for an option. But let's say it is possible; should they do it? Honestly, I wouldn't, and let me tell you why. I love graphics, I love creating the best-looking thing; I am one of those people who stares at every single pixel in a good-looking game and wonders how they did it. But if the thing doesn't run well, what's the point? You can make photorealistic graphics, but if they can't run on anything... in my opinion, you shouldn't make a game like that. The best target is the single best GPU on the market right now, at the standard resolution of the moment, at 60 fps, maybe, if you must, with very mild AA like FXAA. The scalability should be there: lowering options should make the game run great on lower hardware, and increasing resolution and AA should scale well across multi-GPU combos.

      And about pushing technology: that never stopped; hardware keeps moving forward, so why do you think this game will not push it? What about 4K? Yes, the game will run well at 1080p, but you probably won't be able to max it with a single GPU at 4K. So there is your progress; that's what we are going for, and afterwards we will have a new target. The question is, why does a game need to run horribly on high-end components at a standard resolution in order to push things forward?

      Actually, you obviously do not disagree with me; maybe I didn't express my thoughts right. The game SHOULD be able to run maxed on high-end hardware, unless you consider a 980 a mid-range rig, in which case I envy you a bit. And yes, the game absolutely should be scalable; that's the point. If your game runs maxed at 1080p 60 fps on, say, a 980 (or a 780 Ti), it should have enough settings to run on lower hardware with variable fps, resolution, or settings, per the user's preference, AND to run at high to maxed settings at higher resolutions and higher AA levels with more than one high-end GPU. What I was trying to say is that if they target a super high-end rig, say 2x980 for 1080p 60 fps, everyone will cry that the game is not optimized, since it will then run at what, 40-ish fps on a single 980, not to mention you would need two or three Titan Xs to run it on high at 4K. The higher the hardware you target at the top, the worse the experience for people with lower-end hardware, and that's most people. Now, to make it clear: if they really downgraded severely from the trailer, that's bullsh*t and they shouldn't get a free pass; that is still lying to your fan base. The points I am defending are, first, that the game, from what I've seen, looks great, and second, that games should not be targeted solely at the small minority of people who buy two or three of the most powerful GPUs each year.
  14. Damn, reading the posts here, someone would think the game now looks like a 16-bit platformer from '91. The hard truth is that most of the people complaining will probably not be able to max the game as it is. If they had gone the other way and overbuilt the PC version, we would have had a new AC: Unity, where everything below 680-level hardware is cut out and even what is considered high end cannot max the game at 60 fps. While I work on my own projects, I always consider this: the game should be able to run at 1080p 60 fps on the most powerful card available at about mid-production. Say The Witcher 3 started production three years ago; then about a year ago the Titan or the 780 Ti was the fastest card available, so if the game can't run on it at 60 fps, a downgrade is necessary.

      And before you go up in arms with "but Crysis, Witcher 2, blah blah... a few generations in the future we will be able to max them": that's bullsh*t. I want my games to run great when they come out, and I don't know why people decided it is OK for games like the Crysis and Witcher series to require hardware that does not exist yet. Oh, and SLI and Crossfire should NEVER be considered a viable way to max a game at a standard resolution. Multiple high-end GPUs, from a dev's perspective, should always be for ultra-high resolutions: resolutions that are more expensive to run, so people buy more expensive hardware. In the end, I am not happy they downgraded, but it really was not that much; the game still looks freaking amazing, and I am super excited to sink what little free time I have into it.