mathijs727

Member
  • Content Count

    834
  • Joined

  • Last visited

Awards


This user doesn't have any awards

3 Followers

About mathijs727

  • Title
    Junior Member
  • Birthday 1995-07-27

Profile Information

  • Gender
    Male
  • Location
    Netherlands
  • Occupation
Computer Science master's student (2nd year)

System

  • CPU
    AMD R9 3900X
  • Motherboard
    ASUS X470-F Gaming
  • RAM
16GB Corsair Vengeance 3000MHz CL15
  • GPU
    MSI GTX1080 Gaming
  • Case
    Corsair 450D
  • Storage
    500GB Samsung 970 EVO + 256GB Samsung 830 + 3TB HDD
  • PSU
    RM550x
  • Display(s)
    Asus MG279Q
  • Cooling
Corsair H100i with Noctua NF-F12 fans
  • Keyboard
    Coolermaster Masterkeys Pro S RGB
  • Mouse
    Razer Deathadder Elite
  • Sound
    HyperX Cloud | Audio Technica ATH-M50X
  • Operating System
    Windows 10 Pro


  1. Since this is for professional use, you should look at professional hardware. Both AMD and Nvidia sell cards specifically for this use case. Not only do those cards support far more displays per card, they also come with extra features such as combining all displays into one virtual monitor and bezel correction: https://www.nvidia.com/en-us/design-visualization/solutions/nvidia-mosaic-technology/ https://www.amd.com/en/technologies/eyefinity-professionals With some DisplayPort MST hubs it might be possible to connect 9 displays to a single GPU (if it supports 9 displays; consumer cards are usually limited to 3 or 4): https://www.club-3d.com/en/detail/2409/multi_stream_transport_(amst)_hub_displayportt_1.2_triple_monitor/
  2. If I were you, I would just contact Asus directly and ask what the exact problem is and whether it can be fixed with a future BIOS update. Also, running 3 GPUs on a low-end board is kinda odd to begin with, especially 3 mid-range graphics cards (instead of 1 or 2 high-end ones).
  3. Looking at the manual (page 9 / ix), I think it is expected for the second slot to run in PCIe 3.0 x4 mode: https://dlcdnets.asus.com/pub/ASUS/mb/SocketAM4/ROG_STRIX_B450_F_GAMING/E14401_ROG_STRIX_B450-F_GAMING_UM_WEB.pdf The 3rd slot runs off the motherboard chipset at PCIe 2.0 x4 speeds. I assume that with the 3900X (which introduced a lot of changes with respect to I/O), running a GPU over the chipset (on this motherboard?) is broken.
  4. His answer is: no, your motherboard and CPU are not damaged. Overclocked RAM (which this is) not working in all slots is expected behavior.
  5. I'm running a 3900X + GTX 1080 on a 550W PSU without issues (although I admit that 550W might be stretching it a bit); 750W is more than enough. EDIT: CPU package power is ~145W max (without PBO) in Cinebench R20. GPU power maxed out at 160W in CS:GO, although it might get a bit higher in more GPU-intensive games (the card is rated at 180W). So even a 550W PSU is well within spec for such a setup.
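     As a rough sanity check (the ~50W allowance for motherboard, RAM, fans and drives is my estimate, not a measurement): 145W (CPU) + 180W (GPU at its rated maximum) + ~50W (rest of the system) ≈ 375W, which still leaves a comfortable margin on a 550W unit.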
  6. I think they just changed the line-up to only include higher wattage units, combined with a new white color scheme: https://www.corsair.com/eu/en/Categories/Products/Power-Supply-Units/c/Cor_Products_PowerSupply_Units?q=%3Afeatured%3ApsuPowerEfficiency%3AGold%3ApsuLinkSupport%3ANo&text=#rotatingText
  7. RTX does not use octrees; it uses Bounding Volume Hierarchies (BVHs), which have been the most popular acceleration structure in ray tracing for years. For simple scenes the BVH is a tree, hence ray traversal = tree traversal. However, when instancing comes into play, a BVH node can have multiple parents, so it turns into a DAG structure. Also, GPUs have been outperforming (similarly priced) CPUs for years, so I wouldn't call it something recent (GPUs were already much faster before RTX). Ray traversal also requires backtracking (most commonly using a traversal stack), so that's not an argument.

     The only real difference between ray tracing and some other graph traversal applications is the amount of computation that has to be done at each visited node (ray / bounding box intersections in the case of ray tracing). And graph traversal itself isn't that branch heavy either: you basically have the same operation (visiting a node) repeated in a while loop. Sure, selecting the next child node contains some branches, but those are one-liners. For example, in the case of ray tracing: if the left child is closer, then push the right child to the stack first; otherwise push the left child first. Computing which child is closest (and whether it is hit at all) is computationally intensive but not very branch heavy (see the sketch below).

     A bigger issue with ray tracing is the lack of memory coherency, which reduces the practical memory bandwidth on the GPU (having to load a cache line for each thread + the ith thread not always accessing the i*4th byte in a cache line). Nvidia themselves also promote their GPUs as being much faster at graph analysis than CPUs: https://devblogs.nvidia.com/gpus-graph-predictive-analytics/
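     To make the loop structure concrete, here is a minimal CPU-side sketch of stack-based BVH traversal in C++. The node layout and all names are made up for illustration (a real GPU traversal kernel packs nodes differently and uses a fixed-size stack), but the while loop, the per-node intersection work and the one-liner child-ordering branches are exactly the pattern described above.

     #include <algorithm>
     #include <cmath>
     #include <iostream>
     #include <limits>
     #include <utility>
     #include <vector>

     // Hypothetical node layout, for illustration only.
     struct Ray { float origin[3], dir[3]; };
     struct AABB { float min[3], max[3]; };
     struct BVHNode {
         AABB bounds;
         int leftChild;      // right child is stored at leftChild + 1
         int firstPrimitive; // only used by leaves
         int primitiveCount; // 0 for inner nodes
     };

     // Slab test: distance at which the ray enters the box, or infinity
     // when the box is missed. Computationally intensive, hardly any branches.
     float intersectAABB(const Ray& ray, const AABB& box)
     {
         float tMin = 0.0f, tMax = std::numeric_limits<float>::max();
         for (int axis = 0; axis < 3; ++axis) {
             const float invDir = 1.0f / ray.dir[axis];
             float t0 = (box.min[axis] - ray.origin[axis]) * invDir;
             float t1 = (box.max[axis] - ray.origin[axis]) * invDir;
             if (t0 > t1)
                 std::swap(t0, t1);
             tMin = std::max(tMin, t0);
             tMax = std::min(tMax, t1);
         }
         return tMin <= tMax ? tMin : std::numeric_limits<float>::infinity();
     }

     // The while loop from the post: pop a node, do the expensive intersection
     // work, then use one-liner branches to descend into the closer child first.
     // The farther child is pushed first so it is popped later, i.e. the stack
     // handles the backtracking.
     void traverse(const std::vector<BVHNode>& nodes, const Ray& ray)
     {
         std::vector<int> stack { 0 }; // start at the root
         while (!stack.empty()) {
             const int nodeIndex = stack.back();
             stack.pop_back();
             const BVHNode& node = nodes[nodeIndex];
             if (node.primitiveCount > 0) { // leaf
                 std::cout << "visiting leaf " << nodeIndex << std::endl;
                 continue; // primitive intersection tests omitted
             }
             const int left = node.leftChild, right = node.leftChild + 1;
             const float tLeft = intersectAABB(ray, nodes[left].bounds);
             const float tRight = intersectAABB(ray, nodes[right].bounds);
             if (std::isinf(tLeft) && std::isinf(tRight))
                 continue; // both children missed
             if (std::isinf(tRight)) { stack.push_back(left); continue; }
             if (std::isinf(tLeft)) { stack.push_back(right); continue; }
             if (tLeft < tRight) { stack.push_back(right); stack.push_back(left); }
             else                { stack.push_back(left); stack.push_back(right); }
         }
     }

     int main()
     {
         // Tiny hand-built BVH: root (node 0) with two leaf children.
         std::vector<BVHNode> nodes {
             { { { -2, -1, -1 }, { 2, 1, 1 } }, 1, 0, 0 },
             { { { -2, -1, -1 }, { 0, 1, 1 } }, 0, 0, 1 },
             { { {  0, -1, -1 }, { 2, 1, 1 } }, 0, 1, 1 },
         };
         const Ray ray { { -5.0f, 0.0f, 0.0f }, { 1.0f, 0.0f, 0.0f } };
         traverse(nodes, ray); // visits leaf 1 (closer) before leaf 2
     }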
  8. GPUs can still be faster for graph algorithms. In a sense, ray tracing is also a graph traversal algorithm (over a directed acyclic graph), and GPUs do pretty well there (compared to similarly priced CPUs).
  9. Did you compare the prices in combination with an X570 motherboard or a B450/X470 mobo? Buying a 3700X with an X570 makes little sense (cheap CPU with an expensive mobo) unless you really need the added features (PCIe 4.0). Look into B450/X470 motherboards that support flashing the BIOS without a CPU (none of the current stock of motherboards will have an up-to-date BIOS); that should be much cheaper than a 9900K.
  10. Here is what a C++ implementation could look like:

     #include <algorithm>
     #include <vector>
     #include <iostream>

     void alteringFunction(std::vector<int>& v)
     {
         // Add number (42) to the array
         v.push_back(42);
         // Sort array
         std::sort(std::begin(v), std::end(v));
         // No need to return since we modified the vector in-place
     }

     int main()
     {
         std::vector<int> data { 5, 8, 1, 3, 4 };
         alteringFunction(data);

         // Print output
         for (int v : data)
             std::cout << v << std::endl;
     }

     Note that C++ comes with a vector container that can dynamically grow (or shrink), unlike C. It also ships with a set of useful algorithms, including std::sort, which is much faster than bubble sort for larger arrays (it is required to have O(N log N) average-case time complexity).
  11. In C, you could just pass a pointer to the start of the array plus a number representing the size of the array. In C++ you can store the data in a std::array and pass a reference to that array; adding to the array won't be possible since it is fixed-size (although you could add a dummy value when creating the array). In C++ you can also use a std::vector instead, which does support growing (and shrinking) dynamically. I don't know about C, but sorting in C++ is easy (see the sketch below): std::sort(std::begin(vector), std::end(vector));
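     A minimal sketch contrasting the two approaches (the function names are made up for illustration):

     #include <algorithm>
     #include <iostream>
     #include <vector>

     // C style: pointer to the first element plus an element count.
     // The array cannot grow; the function can only modify it in place.
     void sortInPlace(int* data, int size)
     {
         std::sort(data, data + size);
     }

     // C++ style: pass the std::vector by reference so the function
     // can both append to it and sort it.
     void appendAndSort(std::vector<int>& v)
     {
         v.push_back(42);
         std::sort(std::begin(v), std::end(v));
     }

     int main()
     {
         int cArray[] = { 5, 8, 1 };
         sortInPlace(cArray, 3);

         std::vector<int> v { 5, 8, 1 };
         appendAndSort(v);
         for (int x : v)
             std::cout << x << std::endl; // prints 1 5 8 42
     }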
  12. I'm not sure how you installed Python & pip, but it seems like pip is not in your PATH. Ensure that the Python Scripts folder is in your PATH (https://projects.raspberrypi.org/en/projects/using-pip-on-windows). Alternatively, just reinstall Python and make sure to check the "set PATH for all users" checkbox.
  13. You need to run this from the command line:

     python -m pip install redis

     Or:

     pip install redis

     Also, don't store your commands in a batch file; just open a command prompt and enter the commands manually (that way it will show you the error without the window immediately closing).
  14. Don't use a single block; that will force the kernel to run on only a single SM (streaming multiprocessor). You should decide on the number of threads per block (a multiple of 32, with a maximum of 1024) and scale the number of blocks with the number of work items. Apart from trying all possibilities yourself, CUDA also has a function that tries to guess the optimal block size for your kernel (see the sketch below): https://devblogs.nvidia.com/cuda-pro-tip-occupancy-api-simplifies-launch-configuration/ Note that the optimal block size might change from kernel to kernel because of register pressure. Also make sure that you schedule a couple of times more blocks than you have SMs so that the GPU can efficiently hide memory latency (the problem size should be at least a couple of times larger than the number of cores in your GPU).
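     A minimal sketch of that pattern in CUDA C++ (the saxpy kernel is just a placeholder; cudaOccupancyMaxPotentialBlockSize is the launch-configuration helper from the linked blog post):

     #include <cstdio>
     #include <cuda_runtime.h>

     // Placeholder kernel: one thread per work item, with a bounds
     // check because the grid is rounded up to whole blocks.
     __global__ void saxpy(int n, float a, const float* x, float* y)
     {
         const int i = blockIdx.x * blockDim.x + threadIdx.x;
         if (i < n)
             y[i] = a * x[i] + y[i];
     }

     int main()
     {
         const int n = 1 << 20; // number of work items
         float *x, *y;
         // Buffers left uninitialized; this sketch only demonstrates
         // the launch configuration.
         cudaMalloc(&x, n * sizeof(float));
         cudaMalloc(&y, n * sizeof(float));

         // Let the occupancy API suggest a block size for this kernel.
         int minGridSize = 0, blockSize = 0;
         cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, saxpy);

         // Scale the number of blocks with the number of work items,
         // rounding up so every item is covered.
         const int gridSize = (n + blockSize - 1) / blockSize;
         saxpy<<<gridSize, blockSize>>>(n, 2.0f, x, y);
         cudaDeviceSynchronize();

         std::printf("block size %d, grid size %d\n", blockSize, gridSize);
         cudaFree(x);
         cudaFree(y);
     }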
  15. Just checked: my VS2017 installation folder (C:/Program Files (x86)/Microsoft Visual Studio/2017) has a size of a "whopping" 2.34GB. I assume that this does not include the Windows SDK itself, but its size clearly doesn't deviate much from VS2019. I think you are confusing it with VS2015, where you had to install everything at once. Since VS2017 you can use the installer to select which features you want (everything is deselected by default), and for just a C++ installation you only need a couple of GB. And like @straight_stewie said, VS2017 is the easiest C++ environment to set up and use. Also, Visual Studio has some features that are just plain better than any of its competitors (IntelliSense).