
mathijs727

Member
  • Content Count

    845
  • Joined

  • Last visited

Reputation Activity

  1. Agree
    mathijs727 got a reaction from PlayStation 2 in HDMI 1.4b max refresh rate (2560 x 1080)   
    I think HDMI 1.4 can only do 144Hz at 1920x1080, not at 2560x1080.
    Following the logic in the top-rated post of this Reddit topic, the theoretical maximum refresh rate is ~98Hz (272e6 / (2560 * 1080)).
  2. Agree
    mathijs727 got a reaction from Eigenvektor in HDMI 1.4b max refresh rate (2560 x 1080)   
    I think HDMI 1.4 can only do 144Hz at 1920x1080, not at 2560x1080.
    Following the logic in the top-rated post of this Reddit topic, the theoretical maximum refresh rate is ~98Hz (272e6 / (2560 * 1080)).
  3. Like
    mathijs727 got a reaction from rikitikitavi in What do you drive?   
    Bought a 2-year-old Volkswagen High Up! to drive to work every day.
     
    I don't need a larger/faster car since I'm traveling alone anyway.
     
    Just wish Volkswagen had put a small turbo in it to boost performance (advertised as 60HP, but more like 20-25HP at normal RPMs) and to improve fuel economy on the highway.
    Fuel usage goes up quite dramatically above 90 km/h, so I tend to stick behind lorries, which is actually quite relaxing.
  4. Like
    mathijs727 got a reaction from JoaoPRSousa in What do you drive?   
    Bought a 2-year-old Volkswagen High Up! to drive to work every day.
     
    I don't need a larger/faster car since I'm traveling alone anyway.
     
    Just wish Volkswagen had put a small turbo in it to boost performance (advertised as 60HP, but more like 20-25HP at normal RPMs) and to improve fuel economy on the highway.
    Fuel usage goes up quite dramatically above 90 km/h, so I tend to stick behind lorries, which is actually quite relaxing.
  5. Like
    mathijs727 got a reaction from pinksnowbirdie in What do you drive?   
    Bought a 2-year-old Volkswagen High Up! to drive to work every day.
     
    I don't need a larger/faster car since I'm traveling alone anyway.
     
    Just wish Volkswagen had put a small turbo in it to boost performance (advertised as 60HP, but more like 20-25HP at normal RPMs) and to improve fuel economy on the highway.
    Fuel usage goes up quite dramatically above 90 km/h, so I tend to stick behind lorries, which is actually quite relaxing.
  6. Like
    mathijs727 got a reaction from Founders in What do you drive?   
    Bought a 2-year-old Volkswagen High Up! to drive to work every day.
     
    I don't need a larger/faster car since I'm traveling alone anyway.
     
    Just wish Volkswagen had put a small turbo in it to boost performance (advertised as 60HP, but more like 20-25HP at normal RPMs) and to improve fuel economy on the highway.
    Fuel usage goes up quite dramatically above 90 km/h, so I tend to stick behind lorries, which is actually quite relaxing.
  7. Like
    mathijs727 got a reaction from BecauseICanTBH in What do you drive?   
    Bought a 2-year-old Volkswagen High Up! to drive to work every day.
     
    I don't need a larger/faster car since I'm traveling alone anyway.
     
    Just wish Volkswagen had put a small turbo in it to boost performance (advertised as 60HP, but more like 20-25HP at normal RPMs) and to improve fuel economy on the highway.
    Fuel usage goes up quite dramatically above 90 km/h, so I tend to stick behind lorries, which is actually quite relaxing.
  8. Like
    mathijs727 got a reaction from PlayStation 2 in What do you drive?   
    Bought a 2-year-old Volkswagen High Up! to drive to work every day.
     
    I don't need a larger/faster car since I'm traveling alone anyway.
     
    Just wish Volkswagen had put a small turbo in it to boost performance (advertised as 60HP, but more like 20-25HP at normal RPMs) and to improve fuel economy on the highway.
    Fuel usage goes up quite dramatically above 90 km/h, so I tend to stick behind lorries, which is actually quite relaxing.
  9. Agree
    mathijs727 got a reaction from kriptcs in Ryzen 9 3900x with 750W PSU, is it a stretch?   
    I'm running a 3900X + GTX 1080 on a 550W PSU without issues (although I admit that 550W might be stretching it a bit); 750W is more than enough.
     
    EDIT:
    CPU package power is ~145W max (without PBO) in Cinebench R20.
    GPU power maxed out at 160W in CS:GO, although it might get a bit higher in more GPU-intensive games (the card is rated at 180W).
     
    So even a 550W PSU is well within spec for such a setup.
  10. Informative
    mathijs727 got a reaction from Fourthdwarf in Programming - GPU intensive things?   
    RTX does not use octrees; it uses Bounding Volume Hierarchies (BVHs), which have been the most popular acceleration structure in ray tracing for years. For simple scenes the BVH is a tree, hence ray traversal = tree traversal. However, when instancing comes into play a BVH node can have multiple parents, so it turns into a DAG structure.
     
    Also, GPUs have been outperforming (similarly priced) CPUs for years, so I wouldn't call it something recent (GPUs were already much faster before RTX).
     
    Ray traversal also requires backtracking (most commonly using a traversal stack), so that's not an argument. The only real difference between ray tracing and other graph traversal applications is the amount of computation that has to be done at each visited node (ray/bounding-box intersections in the case of ray tracing). And graph traversal itself isn't that branch-heavy either. You basically have the same operation (visiting a node) repeated in a while loop. Sure, selecting the next child node contains some branches, but those are one-liners. For example, in the case of ray tracing: if the left child is closer, then push the right child to the stack first; otherwise push the left child first. Computing which child is closest (and whether it is hit at all) is computationally intensive and not very branch-heavy. A bigger issue with ray tracing is the lack of memory coherence, which reduces the practical memory bandwidth on the GPU (having to load a cache line for each thread, plus the ith thread not always accessing the i*4th byte in a cache line).
     
    Nvidia themselves also promote their GPUs as being much faster at graph analysis than CPUs:
    https://devblogs.nvidia.com/gpus-graph-predictive-analytics/
  11. Informative
    mathijs727 got a reaction from Hi P in Programming - GPU intensive things?   
    RTX does not use octrees; it uses Bounding Volume Hierarchies (BVHs), which have been the most popular acceleration structure in ray tracing for years. For simple scenes the BVH is a tree, hence ray traversal = tree traversal. However, when instancing comes into play a BVH node can have multiple parents, so it turns into a DAG structure.
     
    Also, GPUs have been outperforming (similarly priced) CPUs for years, so I wouldn't call it something recent (GPUs were already much faster before RTX).
     
    Ray traversal also requires backtracking (most commonly using a traversal stack), so that's not an argument. The only real difference between ray tracing and other graph traversal applications is the amount of computation that has to be done at each visited node (ray/bounding-box intersections in the case of ray tracing). And graph traversal itself isn't that branch-heavy either. You basically have the same operation (visiting a node) repeated in a while loop. Sure, selecting the next child node contains some branches, but those are one-liners. For example, in the case of ray tracing: if the left child is closer, then push the right child to the stack first; otherwise push the left child first. Computing which child is closest (and whether it is hit at all) is computationally intensive and not very branch-heavy. A bigger issue with ray tracing is the lack of memory coherence, which reduces the practical memory bandwidth on the GPU (having to load a cache line for each thread, plus the ith thread not always accessing the i*4th byte in a cache line).
     
    Nvidia themselves also promote their GPUs as being much faster at graph analysis than CPUs:
    https://devblogs.nvidia.com/gpus-graph-predictive-analytics/
  12. Informative
    mathijs727 got a reaction from Hi P in Programming - GPU intensive things?   
    GPUs can still be faster for graph algorithms. In a sense, ray tracing is also a graph traversal algorithm (directed acyclic graph) and GPUs do pretty well there (compared to similarly priced CPUs).
  13. Like
    mathijs727 got a reaction from Redturtle098 in How to pass and return arrays in c?   
    In C, you could just pass a pointer to the start of the array + a number representing the size of the array.
    In C++ you can store the data in a std::array and pass a reference to that array.
    Adding to the array won’t be possible since it is fixed size (although you could add a dummy value when creating the array).
    In C++ you can also use a std::vector instead, which does support growing (and shrinking) dynamically.
     
    I don’t know about C, but sorting in C++ is easy:
    std::sort(std::begin(vector), std::end(vector));
  14. Funny
    mathijs727 reacted to dwang040 in Using pip in cmd and installing tensorflow (python)   
    Idk, maybe something isn't right like you said. I don't remember if I checked off set PATH for all users (I would have assumed I did) but it's possible I didn't. 
     
    If I were to type pip install redis directly into cmd, it will say "pip is not a recognized command." Again, it is possible because the PATH isn't correctly configured.  
    I will give a look into venv in the future. Just curious, why is it better to create a venv than using cmd to install modules? 
     
    UPDATE: I just reinstalled/ modified python and "pip install <package>" is now working fine in the cmd. Thanks for the help. 
  15. Agree
    mathijs727 reacted to straight_stewie in [Noob Question] Setting up Eclipse C++ with MinGW?   
    I just have to say, I have found VS to be, by far, the least confusing and the easiest to use IDE ever, and I've also used a few of the JetBrains IDEs. The only toolchain that I've found comparable in ease would be the classic VIM and GCC/G++ combination, but even that has a fairly steep learning curve for someone unfamiliar with VIM.

    I would suggest that we stop Microsoft bashing and help OP with the question, which was:
    The question didn't mention anything about needing to be small, or portable, or how much OP hates big bad Microsoft.

    So, in light of that, I still believe that the best option is for @Divergent2000 to install Visual Studio 2017 Community and check the C++ development option in the installer. If OP needs it to be a small install, he can simply deselect all of the .NET stuff and not worry about having C# functionality.
     
    There is simply no other IDE with the sheer amount of development effort and quality, documentation, and community support as Visual Studio, and whether you, as a developer with some experience, would currently choose that or not, it is clearly the best solution to OP's problem of finding easy to reach C++ development on Windows.
     
  16. Informative
    mathijs727 got a reaction from straight_stewie in Multi-Threading C++ & OpenGL   
    Like others have said, OpenGL is not thread safe.
    However, for any toy application that you are building, I would not expect OpenGL command submission to be the bottleneck.
    Calls to OpenGL functions are deferred by the driver, so there is little waiting involved.
    When you submit a draw call, the user-mode driver checks that what you're doing is legal and then forwards the commands to a work queue for the kernel-mode part of the driver, which might do some more error checking, schedule requests between different programs, convert the commands to a GPU-compatible format, and upload them to the GPU's internal command queue.
    Note that your program does not wait for the kernel-mode driver (and thus also won't wait for triangles to be drawn by the GPU).
     
    With all due respect, if draw calls are indeed a bottleneck (in your hobby OpenGL project, which does not have a 100-square-km game map filled with high-quality assets) then you are probably doing something wrong.
    Make sure that you are not using the legacy fixed-function pipeline (submitting triangles with glVertex calls) and instead use "modern" OpenGL (the fixed-function pipeline was deprecated in OpenGL 3.0 (2008) and removed starting from OpenGL 3.1 (2009)):
    https://www.khronos.org/opengl/wiki/Fixed_Function_Pipeline
     
    Another way to reduce driver overhead is to use the functions added in recent OpenGL versions (4.3 and later, IIRC).
    This collection of new features is often referred to as AZDO ("Approaching Zero Driver Overhead"), which was presented at GDC (the Game Developers Conference):
    https://gdcvault.com/play/1020791/Approaching-Zero-Driver-Overhead-in
    https://gdcvault.com/play/1023516/High-performance-Low-Overhead-Rendering      (2016 presentation with some new stuff)
     
    Also, be sure to check out GDC Vault, the video-on-demand service of GDC; it contains a ton of very interesting and useful presentations (note that some presentations are behind a paywall (mostly the videos; slide decks are usually available) which usually gets removed after a year or two).
     
    A good way to greatly improve GPU performance is by applying frustum and/or occlusion culling.
    With frustum culling we check whether an object (a collection of primitives) might possibly be visible with respect to the camera frustum (whether it's inside the field of view).
    Frustum culling is an easy optimisation that only requires you to know the bounding volumes of the objects (which you can compute ahead of time).
    You simply check for each object whether its bounding volume overlaps with the camera's view frustum (google "frustum culling" for info on how to implement that test).
    Note that this type of frustum culling is easily parallelizable, both with multi-threading and SIMD (or even on the GPU with indirect draw commands).
    If you have a very complex scene, you could also experiment with hierarchical culling, where you store the objects in a tree structure (like a bounding volume hierarchy) and traverse the tree, only visiting child nodes when their bounding volume overlaps with the view frustum.
    Note that this does make multi-threading and SIMD optimizations somewhat harder (an easy way to properly utilise SIMD in this case is to use a wider tree, i.e. 4 or 8 children per node).
    Although this might result in fewer overlap tests (when most of the objects are not visible), it does not map that well to modern hardware (many cache misses mean a lot of stalls on memory == lower performance).
    Frostbite, for example, switched from a fully hierarchical approach to a hybrid one for BF3:
    https://www.gamedevs.org/uploads/culling-the-battlefield-battlefield3.pdf
    https://www.gdcvault.com/play/1014491/Culling-the-Battlefield-Data-Oriented
     
    Occlusion culling is a lot more complicated than frustum culling, and there are many different solutions.
    The most popular solutions right now are based on screen-space techniques (like hierarchical z-buffer, HOM and IOM) because they map well to modern hardware (especially GPUs) and can handle arbitrary, fully dynamic scenes.
    Like I mentioned, this topic is a lot more complex than frustum culling and requires complex scenes (high depth complexity) to perform well.
    So I would recommend not looking into this too much until you've built a decently sized engine and performance is GPU-bottlenecked with no other obvious optimisations left (like backface culling).
    Anyway, here is some reading on occlusion culling in games:
    https://www.google.com/search?q=umbra+master+thesis    (first link: the master's thesis by Timo Aila, currently a researcher at Nvidia Research with an impressive list of publications to his name. Umbra is now developed by the equally named company, and the technology is used in games like The Witcher 3.)
    https://www.gdcvault.com/play/1014491/Culling-the-Battlefield-Data-Oriented
    https://frostbite-wp-prd.s3.amazonaws.com/wp-content/uploads/2016/03/29204330/GDC_2016_Compute.pdf
    https://gdcvault.com/play/1017837/Why-Render-Hidden-Objects-Cull
    http://advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pdf
    An interesting note: GPUs already implement hierarchical z-buffer culling to cull individual triangles (but not whole objects).
     
    With regards to multi-threading, what most game engines do is create their own command lists.
    Recording into these command lists can be multi-threaded; only execution of the command lists (looping over the commands and calling the corresponding OpenGL functions) has to be sequential.
    Furthermore, you could also apply multi-threading to any other processing (like physics simulations) that you would like to do between the input phase (polling the keyboard/mouse, which does not take any significant amount of time) and the rendering phase.
    The best way to handle this in terms of throughput is to overlap rendering of frame N with the input + physics of frame N+1.
    Although this does add a frame of latency, it helps with filling compute resources (e.g. fork/join creates waiting until the last task has finished, and maybe not everything can be multi-threaded (Amdahl's law)).
    A good way to get the most parallelism out of the system is to describe your program as a directed acyclic graph (DAG) of tasks.
    This allows the scheduler to figure out which tasks do not depend on each other such that they can be executed in parallel.
    If you're keen to work with Vulkan/DX12, you might also want to apply the same concept to scheduling GPU commands.
    Some examples of task/frame graphs in practice:
    https://gdcvault.com/play/1021926/Destiny-s-Multithreaded-Rendering
    https://www.ea.com/frostbite/news/framegraph-extensible-rendering-architecture-in-frostbite
    https://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine
     
    Also, I would recommend ignoring some of the previous advice in this forum thread on using std::thread for multi-threading.
    Spawning an OS thread is relatively costly, and in a game engine you want all the performance you can get.
    Furthermore, hitting a mutex means that the operating system may schedule another thread, which might belong to a completely different application.
    Instead, I would recommend taking a look at multi-threaded tasking libraries, which spawn a bunch of threads at start-up (usually as many threads as you have cores) and then do the scheduling of tasks themselves (using a (work-stealing) task queue).
    Examples of these are Intel Threading Building Blocks (TBB), cpp-taskflow, HPX (focused on distributed computing), FiberTaskingLib and Boost.Fiber.
    Note that the last three all use fibers (AKA user-land threads, AKA green threads), which are like operating system threads but where the programmer is in control of scheduling them.
    A well-known example of using fibers for a tasking system in video games is the GDC presentation by Naughty Dog on porting The Last of Us to the PS4 (and running it at 60 fps):
    https://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine
     
    Finally, if you care about performance, try to read up on modern computer architecture (the memory system) and SIMD.
    Most game engine developers now try to apply "Data-Oriented Design", which is a way of structuring your program such that it is easy for the processor to process the data.
    This usually comes down to storing your data as a structure of arrays (SoA), which is better for cache coherency and makes SIMD optimisations easier (although DOD covers more than just SoA).
     
    To learn more about the GPU, a lot of resources are available online describing how its programmable cores work (covering terms like warps/wavefronts, register pressure, shared memory vs global memory, etc).
    If you are interested in learning more about the actual graphics pipeline itself (which contains fixed-function parts), then I would definitely recommend this read:
    https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
     
    Also, writing a software rasterizer is a great way to get to learn the graphics pipeline and it is also a really good toy project to practice performance optimisations (maybe read up on project Larrabee by Intel).
     
    Sorry for the wall of text.
    Hopefully this will help you and anyone else trying to develop their first game/graphics engine and not knowing where to start (in terms of performance optimizations).
     
  17. Agree
    mathijs727 reacted to WereCat in 1920x stutters hard.   
    You know that the 1920X supports quad-channel memory?
    Getting another 2x8GB of 2400MHz sticks will help you way more than replacing what you have with another dual-channel kit.
  18. Informative
    mathijs727 got a reaction from Nocte in Github Microsoft? :(   
    Private repos do not require a premium account as of this week:
    https://techcrunch.com/2019/01/07/github-free-users-now-get-unlimited-private-repositories/
  19. Like
    mathijs727 got a reaction from Kamjam66xx in Github Microsoft? :(   
    Private repos do not require a premium account as of this week:
    https://techcrunch.com/2019/01/07/github-free-users-now-get-unlimited-private-repositories/
  20. Agree
    mathijs727 reacted to Elochai in The gaming PC days are NUMBERED! (Sponsored)   
    Never once said I was living paycheck to paycheck. I said that life has tended to toss unknown situations at me recently, which have required me to dip into money I had saved up and put it elsewhere that's more important. This is life; things happen. In other words: priorities over luxuries. I could have built the computer I wanted a number of times if I had just bought part after part each time I saved a bit, but that's a bad approach to building a PC. So I'd rather lose $420 right now for a year than $3000 for one year. In just one month, due to unforeseen events, I had to use about $1500 of my savings for priorities. $1500 I can save back up, but if I had spent that amount on a luxury, I would be paying back more than $1500, as I wouldn't have had the full amount and would have had no choice but to use a credit card to solve the situation at that moment. So do you have a crystal ball? Can you foresee the future? No, didn't think so. So I still like the idea that I don't need to drop a huge amount of money all at once on something that, like a car, depreciates in value, although you're under the impression that it doesn't. I'd rather pay a small amount over time and always have the best of the best (again, assuming the service works).
     
    Cars lose their value over time, and so does hardware. And yes, hardware does fail. You don't have to be rough with your stuff for it to wear over time and fail. I also wouldn't recommend buying a car by financing it, just like I wouldn't recommend anyone buy a computer on a credit card. But people often do these things for luxuries, yet they don't know when something will happen to them financially.
     
    So yes, leasing/renting a luxury item that will always be upgraded and taken care of, for less than an overrated upfront cost, does sound better to me for a disposable item, with shorter terms than something that could hang over your head in the event of a financial issue.
     
    Now let's do some simple math. I built my first gaming rig in 2006, with all parts bought from NCIX. I went for pretty much the best of the best, and it totaled a little over $3000 back then (I am one of those "go big or go home" people; that's my joy and my pride). In 2008 I upgraded the video card to keep up with playing on max settings and got a deal on a used CPU, maxing out the CPUs I could put in the system other than the $1000 Extreme Edition at the time. In 2010 I realized it was time to start thinking about a new build, as parts compatibility wasn't really there anymore for upgrades, but at the time I was working a low-income job, although I liked my work (ever find a job you liked doing?). In 2012 the company I worked for started downsizing stores and hours. Not wanting a cut in my hours, as it would affect me to the point that I wouldn't be able to live within my means, I went to work for a trucking company. The pay was better, but the hours were very long (which put a lot of tension on family life); due to the long hours, I gave up on my gaming life, as I didn't get to play as much. I did buy an SSD to replace my failed Raptor hard drive and at some point upgraded my RAM to 8GB of DDR2 from the 4GB it used to be, just to try to get a bit of a performance boost for those days when I could relax with some gaming. The RAM and SSD helped; the video card was a gen-3 card in a gen-2 PCIe slot with a CPU bottlenecking it. So I'd round it to about $400 in upgrades. After a couple of years working 14-16 hours a day for the trucking company, I got ill and was taken off work for a year. My family was happy to have me home with them, and when my doctor started telling me that I'd soon be able to return to work, my partner asked me not to return. Family is very important to me, so I went to work doing night shifts for a company I'm still with today.
    I still work long hours of 12 hours a night, but I work fewer days; it works out to 7 days on and 7 days off, spread out over a 2-week period. The pay is 2x better as well and has allowed me to buy a used HP Z420 workstation to use for now to play all my games, but again, performance is lacking.
     
    Sorry again for the long post, but your reply came off a bit trolly about my financial situation; my post was really along the lines that this service seems like a more practical solution in the long run than taking a financial risk on a luxury item.
     
    so back to the figures:
     
    $3400 for my original gaming rig and its upgrades (and it's now pointless for gaming today)
     
    So it got its use for what it was meant to do from 2006 to 2010 but to be fair we’ll give it till 2014.
     
    So $3400 / 8 years = $425 per year, with the last 3-4 years being crap for gaming, with many games at or below minimum requirements.
     
    vs
     
    $420 a year for top of the line hardware and the best experience for gaming (if the system works as advertised)
     
    No matter what, I'm always going to want a gaming PC, and I'm always going to want it to be the best. In the long run this solution works best for me, compared to building and upgrading all the time and the major upfront costs of a luxury. Yes, I could buy someone's used stuff, but that also has unnecessary risks.
     
    By the way, if you want to buy my rig for $300, be my guest. I wouldn't advise it given the age of the hardware; I saw my $600 video card on eBay the other day going for $45, lol. Like my buddy always said, a computer is not an investment; it's something that depreciates a lot in value over a short period of time. And that old rig is used for storage now. If you meant $300 for my HP Z420, then sorry, but that's a good number cruncher, and even if I had a new gaming rig I wouldn't part with it. My missus can take it as an upgrade for her office. If you're interested, I've got a Dell PowerEdge 1750 server you can buy; it has outlived its use (replaced by an IBM server) and hasn't been powered on for about 2 years. It would cost more to ship it than what it would be worth today; heck, it would cost more for me to drive it up to the recycling depot or the scrap metal yard than what they go for today.
     
  21. Like
    mathijs727 got a reaction from Talon_3361 in What Linux? Please help me pick a distro.   
    Installing Arch is a bit of a pain though.
    You can use Antergos, which is basically Arch with a GUI installer, or Manjaro, which is more fully fledged, comes with some helpful applications, and uses its own package repository.
    For developing C++ applications I found both very useful because they're rolling releases, with up-to-date compiler versions and a lot of C++ libraries available in the package repository.
  22. Agree
    mathijs727 got a reaction from xKyric in I'm confused, LTT   
    The first video is about 4K GAMING being dumb.
    There are other reasons to buy a monitor besides gaming.
  23. Agree
    mathijs727 got a reaction from minibois in I'm confused, LTT   
    The first video is about 4K GAMING being dumb.
    There are other reasons to buy a monitor besides gaming.
  24. Agree
    mathijs727 got a reaction from r2724r16 in I'm confused, LTT   
    The first video is about 4K GAMING being dumb.
    There are other reasons to buy a monitor besides gaming.
  25. Agree
    mathijs727 got a reaction from BuzzLookAnAlien in I'm confused, LTT   
    The first video is about 4K GAMING being dumb.
    There are other reasons to buy a monitor besides gaming.