
SLI Scaling solved?

Bear in mind that all the cards need the same information, so VRAM still cannot "pool together" as you say. When video cards render frames, each card uses the same textures as the others, so two 4 GB cards still behave like a single 4 GB card, not 8 GB.

That might be right around the corner: AMD released a statement last week saying they had finally found a way, through DX12, to make a combined VRAM pool.


But here each card would hold a different texture set for its different part of the frame. This is where the CPU kicks in to do the trick.

Quoting would be much more convenient if you have the ability to do so. Simply responding to a thread usually isn't enough; most people don't follow topics.

 

What about games where the CPU happens to be stressed already? What then? Are your graphics cards now bottlenecked?

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch

Link to comment
Share on other sites

Link to post
Share on other sites

That might be right around the corner: AMD released a statement last week saying they had finally found a way, through DX12, to make a combined VRAM pool.

I would like a citation if you can find it.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch

Link to comment
Share on other sites

Link to post
Share on other sites

That may be true. I would be interested to see if it could work, and I would be really happy if it did. I just have this sinking feeling that it would force everyone to buy a card one tier higher to get the full performance of their target tier: if you're looking to get full performance out of a triple-970 config, you would have to buy triple 980s to reach the triple-970 config's theoretical performance. I'm sorry if it sounds like I'm being pessimistic and shooting down your idea; I'm just pointing out what equalizing the load of the whole config to the worst-performing card could bring.

That could be true if you consider that you have a particularly bad 980. But if you had two Titan Zs, then the results would be truly epic.


Quoting would be much more convenient if you have the ability to do so. Simply responding to a thread usually isn't enough; most people don't follow topics.

 

What about games where the CPU happens to be stressed already? What then? Are your graphics cards now bottlenecked?

In cases like that, the G3258, i5s, and all non-hyper-threaded CPUs would no longer be good for gaming; people would have to go out and purchase high-end quad-core or better CPUs with hyper-threading. But at the same time, it could finally bring AMD's CPUs back into competition, since they are so much cheaper than Intel's.


Quoting would be much more convenient if you have the ability to do so. Simply responding to a thread usually isn't enough; most people don't follow topics.

 

What about games where the CPU happens to be stressed already? What then? Are your graphics cards now bottlenecked?

 

There is no game that does this, just ramblings all over the internet. By the way, if you have a couple of thousand dollars' worth of cards in your system, why wouldn't you bother to put in a good CPU?


I would like a citation if you can find it.

I'll look and see if I can find it. I just remember someone posting it in the tech reviews and news section last week; I never found it on my own.


There is no game that does this, just ramblings all over the internet. By the way, if you have a couple of thousand dollars' worth of cards in your system, why wouldn't you bother to put in a good CPU?

Because people target the i5-4690K as the optimal CPU for gaming performance, so they just get that and invest the rest into their GPUs. What Godly is pointing out is that it would cause the i7-4790K to take the i5-4690K's place as the optimal CPU, since hyper-threading and overclocking would be a must. But at the same time, it would finally give the 5820K through 5960X a place in PC gaming rather than just being status symbols.


I would like a citation if you can find it.

Here you go: https://twitter.com/Thracks/status/561708827662245888, from AMD rep Robert Hallock.


It has been quite a while since Nvidia first brought out SLI technology, but they still haven't sorted out the issues that loom within the tech itself. I think I may have a solution to that, but it is still a trial-and-error process.

 

I think Nvidia should offload all of the existing SLI scheduling work onto the CPU. This way the GPUs don't have to bother with organising or separating their loads. In games the CPU often sits idle, and this would be a better way to utilise that piece of expensive silicon.

 

To do this, the driver could separate the screen into equal symmetrical sections (according to the number of cards) so that the GPUs together would produce the whole frame. This also means that the total available frame buffer could be pooled together. Then the driver would stitch the sections back together and display the result on the screen.
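To make that splitting step concrete, here is a minimal sketch under the post's own assumptions (equal vertical strips, one per card). The Rect struct and SplitFrame function are purely illustrative names, not anything a real driver exposes:

```cpp
#include <vector>

struct Rect { int x, y, width, height; };

// Divide a frame into one vertical strip per GPU. The last strip
// absorbs any remainder so the strips exactly tile the frame.
std::vector<Rect> SplitFrame(int frameWidth, int frameHeight, int gpuCount)
{
    std::vector<Rect> strips;
    const int stripWidth = frameWidth / gpuCount;
    for (int i = 0; i < gpuCount; ++i) {
        const int x = i * stripWidth;
        const int w = (i == gpuCount - 1) ? frameWidth - x : stripWidth;
        // Each GPU would render only its strip; the driver would then
        // copy the strips into one back buffer and present it.
        strips.push_back({x, 0, w, frameHeight});
    }
    return strips;
}
```

One caveat: equal-sized strips rarely mean equal work, because scene complexity varies across the screen, which is why split-frame schemes have historically moved the split line around dynamically to balance load.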

 

Here is an image to demonstrate what I mean.

 

Share what you think about this. Nvidia should really think about this issue.

Dude... do you really think you are the one who came up with this? That is the system behind combining the VRAM in SLI and Crossfire. (According to the internet; too lazy to find the source.)


Dude... do you really think you are the one who came up with this? That is the system behind combining the VRAM in SLI and Crossfire. (According to the internet; too lazy to find the source.)

VRAM is not combined in SLI or Crossfire.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch

Link to comment
Share on other sites

Link to post
Share on other sites

Dude... do you really think you are the one who came up with this? That is the system behind combining the VRAM in SLI and Crossfire. (According to the internet; too lazy to find the source.)

 

Only that VRAM isn't combined in SLI.


Only that VRAM isn't combined in SLI.

It is potentially possible with lower-level APIs like Mantle and DX12. Read through the link in my previous post.
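For what it's worth, DX12's explicit multi-adapter mode is the mechanism usually cited for this: each physical GPU is exposed to the application as its own device, so the application rather than the driver decides which resources live on which card. A minimal enumeration sketch, assuming the standard DXGI/D3D12 headers and omitting error handling:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        ComPtr<ID3D12Device> device;
        // Each adapter becomes its own independent device. The app, not
        // the driver, then decides which resources live on which GPU,
        // so memory no longer has to be mirrored the way AFR mirrors it.
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            devices.push_back(device);
        }
    }
    return devices;
}
```

Nothing here pools memory by itself; it just shows why, under DX12, per-GPU resource placement becomes the application's choice.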


It is potentially possible with lower-level APIs like Mantle and DX12. Read through the link in my previous post.

 

It is possible through that, but the whole thing would be improved by this implementation.


I was hoping some expert would show up and give his thoughts on the topic. Maybe this could be the next big thing?


It is possible through that, but the whole thing would be improved by this implementation.

I'll be honest: with what AMD is about to bring to the table with the 300 series, I'll bet Nvidia already has this figured out and just hasn't released it, so it can be an answer to the 300 series.


I'm just waiting for @LukaP to show up and make sense of all this.

 

lol.

 

I do wish this would work; I'm about to get an SLI and triple-monitor setup going, so I want SLI to be running as smoothly as possible.


I'm just waiting for @LukaP to show up and make sense of all this.

 

lol.

 

I do wish this would work; I'm about to get an SLI and triple-monitor setup going, so I want SLI to be running as smoothly as possible.

TL;DR me and I will possibly say something in this post; can't be bothered to read the thread.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]

Link to comment
Share on other sites

Link to post
Share on other sites

I'll be honest: with what AMD is about to bring to the table with the 300 series, I'll bet Nvidia already has this figured out and just hasn't released it, so it can be an answer to the 300 series.

 

That could be a possibility, but releasing a driver that does what I just thought of would just be fuckin' awesome. Given that you could then make use of up to 4 cards now, maybe eight in the future, or 16! This thing opens up a whole range of possibilities. I hope they make this happen sometime soon.


That could be a possibility, but releasing a driver that does what I just thought of would just be fuckin' awesome. Given that you could then make use of up to 4 cards now, maybe eight in the future, or 16! This thing opens up a whole range of possibilities. I hope they make this happen sometime soon.

I doubt it'll be implemented with a driver update; more than likely with the next release of cards, like the 1000 series. If they did it through drivers, it would make upgrading irrelevant for most people for a long time.


Imagine 400% scaling with 4 cards on a 144 Hz 4K G-Sync-enabled display. :rolleyes:


I doubt it'll be implemented with a driver update; more than likely with the next release of cards, like the 1000 series.

 

This can be done through driver tweaks. But anyway, I think Maxwell v2 (the 980 Ti) should feature enhanced SLI compatibility.


Giving each card a part of the frame to render has already been tried. It's called SFR (Split-Frame Rendering). AFR was chosen over SFR because it gave higher frame rates overall.

http://en.m.wikipedia.org/wiki/Scalable_Link_Interface
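To make the AFR/SFR distinction concrete, here's a toy sketch; the function name is illustrative only:

```cpp
// AFR (Alternate Frame Rendering): each GPU owns whole alternating
// frames. No per-frame work is split between cards, but every GPU must
// keep its own full copy of the textures, which is why VRAM does not
// add up in SLI or Crossfire today.
int GpuForFrame(int frameIndex, int gpuCount)
{
    // GPU 0 renders frames 0, 2, 4...; GPU 1 renders frames 1, 3, 5...
    return frameIndex % gpuCount;
}
```

Under SFR, every card works on the same frame, but geometry processing is largely repeated on each card (a triangle can end up crossing any strip boundary), which is a big part of why AFR usually benchmarked faster.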

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch

Link to comment
Share on other sites

Link to post
Share on other sites
