
Why what happens in consoles, actually does matter in PC gaming

Mira Yurizaki


I'm going to be edgy and start off with the exact opposite headline from my last blog! Okay seriously...


If there's one thing I like about console gaming development, it's that it challenges developers to do creative things. It brings out the best in those developers and I salute them for that. Heck, even in the early days of PC gaming this mentality was around, courtesy of the likes of John Carmack and Ken Silverman. But lately, because of the relatively long life cycle of consoles (4-5 years per generation), and because this is the first time a console generation performed worse than contemporary PCs at the time of release, PC gamers have been snide about console gaming development.


The biggest question I do want to address is this: does console gaming development hold back PC gaming development? The answer is a big fat NO.


Here's where I appreciate consoles for aiding the development of gaming in general.

  • Let's start with one of my favorite consoles on this subject: the Nintendo 64. From a development point of view, it laid a lot of the foundations for modern 3D visuals.
    • Let's go over the hardware first, the Reality Co-Processor (RCP).
      • The RCP is arguably one of the first consumer-grade GPUs with programmable hardware transform and lighting. Programmable because the RCP was fed display lists as commands (which is how modern GPUs work), and once Nintendo took the chains off the microcode documentation, developers were able to push the system even further.
      • The RCP had support for several features that were advanced for the time, such as trilinear mipmapping and edge-based AA (which we now know in flavors such as FXAA and MLAA).
      • In some sense of hindsight hilarity, the RCP also handled audio, because it had a digital signal processor. AMD's TrueAudio doesn't look so unique now, does it?
    • And now for the software tricks that were used:
      • Level of detail: yup, this trick to save on rendering polygons when something is far away got its start at least as far back as here (a minimal sketch of how distance-based LOD selection works is included after this list).
      • Use of clipping, where scenes that aren't immediately visible to the player are not rendered.
      • Banjo-Kazooie had a feature that shuffled textures in and out for detail. The problem was that this caused memory fragmentation. The way Rare solved it? Adding a real-time memory defragger.
      • Factor 5 developed texture streaming from the cartridge for Indiana Jones and the Infernal Machine. We can blame them for id trying to do MegaTextures (although in all seriousness, that's a different beast).
      • The N64 had a lot of framebuffer effects for things like motion blur, shadow mapping, optic camouflage effects, and one that still amazes me (and I never realized it): render to texture. Remember playing Mario Kart 64 on Luigi Raceway, where before the tunnel there was a TV screen that showed your viewpoint? That's render to texture. This was one of those features Valve was very proud of during their E3 2003 Half-Life 2 demonstration. More effects can be seen at: http://gliden64.blogspot.com/2013/10/frame-buffer-emulation-intro.html
  • The Dreamcast had a feature that is only now being widely utilized for efficient rendering: tile-based rendering. Tile-based rendering divides the screen into tiles (32 by 32 pixels in the Dreamcast's case), renders just one tile at a time, and saves the result into a small buffer. This helped save on memory bandwidth because the entire framebuffer didn't need to be touched at once (a toy sketch of the idea is included after this list). A side effect was that it offered a feature AMD proudly demoed on their DX11 hardware: order-independent transparency.
  • The PlayStation 2 cemented how to create large open worlds when your storage bandwidth is, for all intents and purposes, abysmal. GTA: San Andreas, for example, has a map about as big as The Elder Scrolls IV: Oblivion's.
  • The original Xbox introduced a rendering technique that would be used throughout the rest of the decade and then some: deferred rendering. Of all games to use it? The launch title Shrek. The creator's article on the subject can be found at https://sites.google.com/site/richgel99/home
    • Oh, not to mention that the Xbox's GPU was essentially next generation. While it may have been based on the GeForce 3 GPU, its specifications put it more in line with a GeForce 4. The GeForce 3 topped out at 240MHz with a 4:1:8:4 (pixel, vertex, texture, ROP) configuration. The XGPU ran at 233MHz with a 4:2:8:4 configuration. It would have been something more like a GeForce 4 Ti 4000 if NVIDIA had capitalized on it.
  • The PlayStation 3 and Xbox 360 forced game developers to really think about multithreaded game code, especially the PlayStation 3. It may not have translated so well into the PC gaming world, since dual-core processors were just coming out at the time and support on Windows XP was shaky at best, from what I recall.
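Since I brought up level of detail above, here's a minimal sketch (in C++) of what distance-based LOD selection boils down to: pick a coarser mesh the farther the object is from the camera. This is only an illustration under my own assumptions; the Mesh and LodLevel types are made up for the example and aren't from the N64's libraries or any particular engine.

#include <vector>

// Hypothetical mesh handle; a real engine would point at vertex/index buffers here.
struct Mesh { /* geometry data */ };

// One entry per detail level: the mesh to draw and the distance at which it becomes acceptable.
struct LodLevel {
    const Mesh* mesh;
    float minDistance; // use this level once the object is at least this far away
};

// Pick the coarsest mesh whose distance threshold has been passed.
// Assumes 'levels' is non-empty and sorted by ascending minDistance (0.0 first).
const Mesh* SelectLod(const std::vector<LodLevel>& levels, float distanceToCamera) {
    const Mesh* chosen = levels.front().mesh;
    for (const LodLevel& level : levels) {
        if (distanceToCamera >= level.minDistance) {
            chosen = level.mesh; // farther away, fewer polygons
        }
    }
    return chosen;
}

The whole trick is that the player can't tell the difference at a distance, so the renderer never pays for polygons nobody can see clearly.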
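And since tile-based rendering is the one feature in that list that's only now becoming mainstream again, here's a toy sketch of the idea, assuming a plain software rasterizer. The 32x32 tile size matches the Dreamcast, but everything else (the Triangle type, the binning, the function names) is made up for illustration and is not how the PowerVR chip actually works internally.

#include <array>
#include <cstdint>
#include <vector>

constexpr int kTileSize = 32;  // Dreamcast-style 32x32 pixel tiles
constexpr int kScreenW = 640;
constexpr int kScreenH = 480;

// Placeholder for whatever geometry got binned into a tile.
struct Triangle { /* screen-space vertices, color, etc. */ };

// Hypothetical rasterizer: shades one triangle into a small tile buffer only.
void RasterizeIntoTile(const Triangle& tri,
                       std::array<uint32_t, kTileSize * kTileSize>& tile,
                       int tileX, int tileY) {
    // ...clip the triangle against this tile's bounds and shade just those pixels...
}

// Render the frame one tile at a time. The working set is a single 32x32 buffer
// (small enough for fast on-chip memory), and the finished tile is written out
// to the framebuffer in main memory exactly once.
void RenderTiled(const std::vector<std::vector<Triangle>>& trianglesPerTile,
                 std::vector<uint32_t>& framebuffer) {
    const int tilesX = kScreenW / kTileSize;
    const int tilesY = kScreenH / kTileSize;

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            std::array<uint32_t, kTileSize * kTileSize> tile{};

            for (const Triangle& tri : trianglesPerTile[ty * tilesX + tx]) {
                RasterizeIntoTile(tri, tile, tx, ty);
            }

            // One bulk copy per finished tile instead of scattered main-memory writes per pixel.
            for (int y = 0; y < kTileSize; ++y) {
                for (int x = 0; x < kTileSize; ++x) {
                    framebuffer[(ty * kTileSize + y) * kScreenW + (tx * kTileSize + x)] =
                        tile[y * kTileSize + x];
                }
            }
        }
    }
}

Because a tile's pixels (and their depth and transparency information) all live in that small buffer until the tile is finished, handling transparent fragments per pixel becomes cheap, which is where order-independent transparency falls out of the design.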

One other thing I want to make note of: for a lot of developers, trying to tap into the console market may have also forced them to optimize their engines. One example I can think of is what Crytek did with CryEngine 2 and beyond. Did anyone think Crysis could run as reasonably well as it did on the Xbox 360 or PlayStation 3? Plus, it's easier to scale up: you can always add more polygons, more textures, more things here and there.


But even if consoles weren't around, PC game developers wouldn't target the uber high-end systems anyway. Take Doom, for instance. Released in 1993, guess what its requirement was? It could (technically) run on a 386, a processor released eight years prior. Though it would run much better on a 486, which was released in 1989. Quake also ran on one of the lower-end Pentiums back in the day, which isn't quite as impressive in comparison to Doom. Duke Nukem 3D was playable on a higher-end 486, though probably not without some kind of 2D accelerator card. Anyway, the point is that PC game developers are always going to target lower-end systems for the sake of maximizing their market base.


But lastly, going back to my first point about why I love console development: I see developing for PCs as an invitation to be lazy and use brute-force methods, because Moore's Law will pick up the slack. I see way more interesting things happening in console development than I do in PC gaming development. Sure, game engines are still built primarily on the titans of PCs, but when it comes down to the more interesting things, it's probably a console that did them first.
