
nicebyte

Member
  • Posts: 12
  • Joined
  • Last visited

Reputation Activity

  1. Like
    nicebyte got a reaction from Kamjam66xx in Multi-Threading C++ & OpenGL   
    > I guess I'll save any serious attempts at multi-threading for Vulkan then, is that the conclusion I should be leaving with?
     
    Yeah, I would recommend that. Focus on the fundamentals and try not to get too bogged down in the API details yet.
  2. Like
    nicebyte got a reaction from Kamjam66xx in Multi-Threading C++ & OpenGL   
    To drive the point home, the OpenGL backend of my homegrown gfx lib is ~2000 lines of code.
    The Vulkan one is about the same size now, but it's nowhere near being feature complete, AND I've "outsourced" GPU memory management to AMD's VMA library (which could easily add another 1K lines if I did it myself).
    I definitely don't want to discourage anyone from learning Vulkan, but those who are considering it need to understand that graphics APIs are not about graphics; they are about abstracting the GPU. Learning DX12 or Vk will take a nontrivial amount of time during which you will not be dealing with actual "graphics", i.e. making pretty images. Instead, you'll be figuring out how to be efficient at feeding data into a massively parallel computer attached to your regular computer. This can be interesting in and of itself, but make sure you understand what you're getting into!
  3. Informative
    nicebyte got a reaction from Kamjam66xx in Multi-Threading C++ & OpenGL   
    That just sounds like you're calling OpenGL on a thread with no active OpenGL context. 
     
    However, in general, it is barely possible to get an appreciable speedup out of an OpenGL renderer by using multithreading. Don't expect that you can, for example, issue the shadow map rendering commands on one thread and the scene rendering commands on another - that will not work: the OpenGL driver will just synchronize those threads and everything will effectively become serialized, losing any benefit you may have had from parallelism. This isn't a question of driver quality; it's a fundamental constraint caused by the design of OpenGL. So, you're better off calling OpenGL on just one thread.
     
    One exception to this is loading textures/meshes etc. from disk. Since most of the time is spent waiting on file reads, it makes sense to split resource loading (texture and buffer creation) into a separate thread or threads - create a shared context on the resource loading thread and load your textures/models on it while you do other stuff. This could improve your loading times. 
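    Here's roughly what that shared-context loader pattern looks like with GLFW (just a sketch of the idea, not production code - loadPixelsFromDisk is a placeholder for your own image loading, and error checking is omitted):

```cpp
// Off-thread texture creation via a shared GLFW context (sketch).
#include <GLFW/glfw3.h>
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Placeholder for real image loading/decoding - the slow, I/O-bound part.
std::vector<std::uint8_t> loadPixelsFromDisk(int& w, int& h) {
  w = 256; h = 256;
  return std::vector<std::uint8_t>(w * h * 4, 0xFF);
}

int main() {
  glfwInit();
  GLFWwindow* mainWindow = glfwCreateWindow(1280, 720, "app", nullptr, nullptr);

  // A tiny hidden window whose context shares objects with the main one.
  glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
  GLFWwindow* loaderWindow = glfwCreateWindow(1, 1, "loader", nullptr, mainWindow);

  GLuint textureId = 0;
  std::atomic<bool> textureReady{false};

  std::thread loader([&] {
    glfwMakeContextCurrent(loaderWindow);   // this context is current on the loader thread only
    int w = 0, h = 0;
    auto pixels = loadPixelsFromDisk(w, h); // file I/O + decoding happen off the main thread
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glFinish();                             // make sure the upload finished before publishing
    textureId = tex;
    textureReady.store(true, std::memory_order_release);
  });

  glfwMakeContextCurrent(mainWindow);
  while (!glfwWindowShouldClose(mainWindow)) {
    glfwPollEvents();
    if (textureReady.load(std::memory_order_acquire)) {
      // textureId is now usable here, because the two contexts share objects.
    }
    glfwSwapBuffers(mainWindow);
  }
  loader.join();
  glfwTerminate();
}
```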
     
    If you are interested in building a multithreaded renderer, the best path forward is with the new APIs - DX12 or Vulkan. They allow you to split the driver overhead of recording command buffers across multiple threads, thus making better use of your CPU's many cores. This comes at the price of needing to handle GPU-side synchronization and memory management yourself, though - it is a very daunting task, and I don't think someone who is just beginning graphics should bother with it. I promise you it's way more fun to play with lights and materials than to hunt for synchronization bugs in your Vulkan code.
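    Just to give a flavor of what that looks like in Vulkan, here is a very rough sketch of the usual pattern - one command pool per worker thread, so each thread records into its own command buffer without any locking. It assumes you already have a VkDevice and a queue family index, and recordDrawsForChunk is a stand-in for whatever commands each thread would actually record:

```cpp
// Parallel command buffer recording sketch: one VkCommandPool per thread.
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

// Placeholder for real work: bind pipelines, set state, vkCmdDraw*, etc.
void recordDrawsForChunk(VkCommandBuffer cmd, unsigned chunkIndex) {
  (void)cmd; (void)chunkIndex;
}

std::vector<VkCommandBuffer> recordInParallel(VkDevice device,
                                              uint32_t queueFamilyIndex,
                                              unsigned numThreads) {
  std::vector<VkCommandPool>   pools(numThreads);
  std::vector<VkCommandBuffer> cmdBufs(numThreads);
  std::vector<std::thread>     workers;

  // Command pools are not thread-safe, so every worker thread gets its own
  // pool and allocates its command buffer from it.
  for (unsigned i = 0; i < numThreads; ++i) {
    VkCommandPoolCreateInfo poolInfo{};
    poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
    poolInfo.queueFamilyIndex = queueFamilyIndex;
    vkCreateCommandPool(device, &poolInfo, nullptr, &pools[i]);

    VkCommandBufferAllocateInfo allocInfo{};
    allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    allocInfo.commandPool = pools[i];
    allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    allocInfo.commandBufferCount = 1;
    vkAllocateCommandBuffers(device, &allocInfo, &cmdBufs[i]);
  }

  // The expensive part - recording - happens on the worker threads in parallel.
  for (unsigned i = 0; i < numThreads; ++i) {
    workers.emplace_back([&, i] {
      VkCommandBufferBeginInfo beginInfo{};
      beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
      vkBeginCommandBuffer(cmdBufs[i], &beginInfo);
      recordDrawsForChunk(cmdBufs[i], i);
      vkEndCommandBuffer(cmdBufs[i]);
    });
  }
  for (auto& w : workers) w.join();

  // The recorded buffers are then submitted together from a single thread with
  // one vkQueueSubmit; submission itself is not the part you parallelize.
  // (Real code would also keep the pools around and destroy them later.)
  return cmdBufs;
}
```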
     
  4. Like
    nicebyte got a reaction from Mira Yurizaki in Multi-Threading C++ & OpenGL   
    To drive the point home, the OpenGL backend of my homegrown gfx lib is ~2000 lines of code.
    The Vulkan one is about the same size now, but it's nowhere near being feature complete, AND I've "outsourced" GPU memory management to AMD's VMA library (which could easily add another 1K lines if I did it myself).
    I definitely don't want to discourage anyone from learning Vulkan, but those who are considering it need to understand that graphics APIs are not about graphics; they are about abstracting the GPU. Learning DX12 or Vk will take a nontrivial amount of time during which you will not be dealing with actual "graphics", i.e. making pretty images. Instead, you'll be figuring out how to be efficient at feeding data into a massively parallel computer attached to your regular computer. This can be interesting in and of itself, but make sure you understand what you're getting into!
  5. Agree
    nicebyte got a reaction from Mira Yurizaki in Multi-Threading C++ & OpenGL   
    That just sounds like you're calling OpenGL on a thread with no active OpenGL context. 
     
    However, in general, it is barely possible to get an appreciable speedup out of an OpenGL renderer by using multithreading. Don't expect that you can, for example, issue the shadow map rendering commands on one thread and the scene rendering commands on another - that will not work: the OpenGL driver will just synchronize those threads and everything will effectively become serialized, losing any benefit you may have had from parallelism. This isn't a question of driver quality; it's a fundamental constraint caused by the design of OpenGL. So, you're better off calling OpenGL on just one thread.
     
    One exception to this is loading textures/meshes etc. from disk. Since most of the time is spent waiting on file reads, it makes sense to split resource loading (texture and buffer creation) into a separate thread or threads - create a shared context on the resource loading thread and load your textures/models on it while you do other stuff. This could improve your loading times. 
     
    If you are interested in building a multithreaded renderer, the best path forward is with the new APIs - DX12 or Vulkan. They allow you to split the driver overhead of recording command buffers across multiple threads, thus making better use of your CPU's many cores. This comes at the price of needing to handle GPU-side synchronization and memory management yourself, though - it is a very daunting task, and I don't think someone who is just beginning graphics should bother with it. I promise you it's way more fun to play with lights and materials than to hunt for synchronization bugs in your Vulkan code.
     
  6. Informative
    nicebyte got a reaction from Kamjam66xx in GLSL error highlighting HELP ME!   
    OpenGL ES is just a version of OpenGL for mobile devices. I would not say that it is simpler. Later versions of OpenGL ES (3.1+), supported by more powerful devices like the Adreno 630, are getting close in capabilities to the desktop counterpart (e.g. ES 3.1+ has compute shaders). Earlier versions (GL ES 2.0) have a smaller API surface, but they have limited capabilities, making it harder to do certain things (and effectively forcing you to maintain two different paths if you want to support older hardware). GL ES 2 market share has been shrinking, though. 
  7. Informative
    nicebyte got a reaction from Kamjam66xx in GLSL error highlighting HELP ME!   
    If you do decide to go the CMake route, look into the `add_custom_command` command. You can get glslangValidator binaries from the Khronos website (https://www.khronos.org/opengles/sdk/tools/Reference-Compiler/) or just compile it from source (https://github.com/KhronosGroup/glslang).
     
    I could post a snippet from my own CMake file here; however, it won't work for you as-is because my setup is most likely different from yours. If you try it and get stuck, just post here - maybe I'll be able to help.
  8. Informative
    nicebyte got a reaction from Kamjam66xx in GLSL error highlighting HELP ME!   
    If you're using Visual Studio and CMake, it's possible to pre-validate your shaders with glslangValidator as part of the build (that's what I do). It reports shader compile errors within the IDE (not in real time though, only when you build) before the application even has a chance to run and show you a black screen.
     
    In a pinch, you could try using http://shader-playground.timjones.io (just make sure to pick GLSL as the source language). 
    It has the added advantage of showing you the generated SPIR-V or DXIL (in case you want to explore that), and the ability to chain different tools together (like putting the generated SPIR-V through SPIRV-Cross).
  9. Agree
    nicebyte got a reaction from DrMacintosh in Taking Intro to Java   
    How limited are we talking? In my opinion, there's a big difference between "no experience whatsoever" and something like "wrote some Lua scripts for WoW one time", for example. It's always intimidating to start from zero.
     
    That being said, the course looks like it aims to get students over the basic hurdles of setting up a dev environment and writing and running simple programs. Most of these things will be straightforward and won't require a lot of thinking; the course will just slowly ease you into the programmer mindset. Of course, a lot depends on the instructor as well, but I think you'll be fine even if you've never programmed before.