
Can you improve Minecraft's performance by better multi-threading?

Gat Pelsinger

Minecraft Java Edition is one of the most horribly optimized games in gaming history. At least the Bedrock edition is what Minecraft really should have been, if it weren't for Microsoft ruining it. Minecraft was built in an era when PCs only had 2 to 4 cores at most, so multithreading wasn't much of a thing back then. One thing I have noticed is that Minecraft's processing pipeline runs very much on a single thread. But what if you could use one or more threads for generating chunks, and other threads for actually interacting in the game? Just like how it mostly happens in other games? Right? Or no? I am not sure, I am actually asking.


"just make it multithreaded" is a very common trope about how software "could be made to perform better"

 

Multithreaded software is hugely complex to achieve, at least in terms of "make game run faster".

 

You'll probably find that Minecraft Java does use more than one core at a time, just not for a single gameplay loop.


it is not the most horribly optimized game in history. it's extremely far from that trophy....

 

it's the most played game ever tho so it might seem that way.

 

to answer the title yes you totally can.

check out fabric mod loader and these

https://modrinth.com/user/jellysquid3

 

this isn't a multi threading fix actually. it uses C++ libraries to replace the java code paths because it's much much faster.

similar to numpy with python. java is pretty much a VM/emulator weird combo, it does not create the best performance.


I agree with using Fabric, it can speed up the game a fair bit. And yes, the main game loop runs on a single thread. It's VERY far from being poorly optimized though.


8 hours ago, whispous said:

Multithreaded software is hugely complex to achieve, at least in terms of "make game run faster".

This. Making things multi-threaded can take a lot of effort and not all things can easily be split into independent chunks of work. An engine needs to be designed around this concept, so adding it after the fact is virtually impossible.

 

8 hours ago, SquintyG33Rs said:

this isn't a multi threading fix actually. it uses C++ libraries to replace the java code paths because it's much much faster. similar to numpy with python. java is pretty much a VM/emulator weird combo, it does not create the best performance.

The reason for bad performance is usually code that wasn't written with performance in mind. The Java VM does compile Bytecode to native code on the fly and can get very close to native performance.


9 minutes ago, Eigenvektor said:

This. Making things multi-threaded can take a lot of effort and not all things can easily be split into independent chunks of work. An engine needs to be designed around this concept, so adding it after the fact is virtually impossible.

 

The reason for bad performance is usually code that wasn't written with performance in mind. The Java VM does compile Bytecode to native code on the fly and can get very close to native performance.

i'm sorry but native performance doesn't mean anything in this context.

 

it's something you use to compare a console vs its emulator, when you're still improving the software architecture and rendering pipeline to match how it runs and looks on the original hardware.

java doesn't exist outside of its JVM state (I mean maybe somewhere in Oracle they made special sauce silicon that just runs natively compiled Java binaries, but we mere mortals never see that). java is designed to be portable. the point of it (and the reason Minecraft was built on it originally, since it was conceived as a relatively simple game) is that the developer doesn't have to think about what hardware the user has in the end; java takes care of that part. so if someone sees a bug, everyone has the same bug, no weird hunting down of edge cases for every different version of Windows. and it's also why Minecraft Java Edition has run on Raspberry Pi, Mac, Linux, your phone, your fridge... since before it even launched.

now I do agree java isn't slow in the grand scheme of things. but python is considered very fast running in general... it still gets stomped when you copy the same code into C++ code (something ridiculous like 100x faster at least) and that's before you start getting clever and optimizing. 

 

which is why, when you dig into it, using numpy for maths stuff in python instead of python's native types is a billion times faster: it's a bunch of C-compiled functions you're sticking your arrays into for accelerated computing.


So to answer the general question, yes Minecraft could handle larger servers and such if they went multi-threaded.

 

Creating multi-threaded applications isn't necessarily as easy as some people think it is, and to an extent you sacrifice some performance up front with the expectation that doing tasks in parallel will take less time overall... but sometimes that isn't the case. And if you have order-dependent work, you might not get any additional performance.

 

To express the above as an example: Sally has a list of tasks to do, and one day she decides to distribute the jobs. So she takes her to-do list, splits it in two, and gives one task to Bob and another to Alice. Now if, say, the tasks were drink a glass of water and flush the toilet, both would take a trivial amount of time... so Sally wasted more time splitting up the to-do list than it would have taken to just hand the whole list to Alice in the first place.
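To make that overhead concrete, here's a tiny Java sketch (illustrative only; the exact numbers will vary wildly by machine): doing a trivial piece of work inline versus handing the same work to a freshly started thread.

```java
public class ThreadOverhead {
    public static void main(String[] args) throws InterruptedException {
        int[] counter = {0};

        long t0 = System.nanoTime();
        counter[0]++;                                    // the trivial task, done inline
        long inlineNs = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        Thread helper = new Thread(() -> counter[0]++);  // same task, handed to a helper thread
        helper.start();
        helper.join();                                   // wait for the helper, like Sally waiting on Bob
        long threadedNs = System.nanoTime() - t1;

        System.out.println("inline:   " + inlineNs + " ns");
        System.out.println("threaded: " + threadedNs + " ns"); // usually orders of magnitude slower
    }
}
```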

 

Or another example, the tasks are

Task 1) Drive to the store, pick up apples, bake a pie

Task 2) Fill up the car with gas, bake a cake

 

Both tasks rely on the same shared objects.  If someone takes the car, the other can't fill it with gas.  If someone is baking a cake, the other needs to wait to bake the pie... it then becomes a question of whether the time saved outweighs the time and effort it took to coordinate.

 

Games with a deterministic type of physics make parallelism more complicated.

Consider two falling objects, and the physics calculation being done in parallel.

If they are far enough apart, it's unlikely that processing object A's movement before object B's will matter much.

If they are close and potentially colliding, it matters greatly whether you process object A or object B first when resolving the collision.

 

When multithreaded games first started to come into play, the complexity of splitting certain calculations out into other threads was at least one reason why the split was typically between, say, physics and AI processing.  While the AI relies on the physics, the two could run with minimal interaction.
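As a rough Java sketch of that kind of split (not how any particular engine actually does it; stepPhysics and stepAI are made-up placeholders): each tick the two systems run on their own threads, and the loop waits for both before starting the next tick, so neither reads a half-updated world.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PhysicsAiSplit {
    static void stepPhysics() { /* hypothetical: integrate positions, resolve collisions */ }
    static void stepAI()      { /* hypothetical: pick targets, plan paths */ }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int tick = 0; tick < 100; tick++) {
            // Run both systems in parallel for this tick; invokeAll blocks until both
            // finish, so the next tick always starts from a consistent state.
            pool.invokeAll(List.of(
                    Executors.callable(PhysicsAiSplit::stepPhysics),
                    Executors.callable(PhysicsAiSplit::stepAI)));
        }
        pool.shutdown();
    }
}
```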

 

Running things in parallel is great when you can split the tasks so they don't touch the same elements.


8 minutes ago, wanderingfool2 said:

So to answer the general question, yes Minecraft could handle larger servers and such if they went multi-threaded.

 

Creating multi-threaded applications isn't necessarily as easy though as some people think it is, and to an extent you effectively sacrifice some performance with the expectation that doing tasks in parallel will have overall less time...but sometimes it isn't the case.  Then if you have order dependent things you might not get any additional performance

bruh minecraft IS multithreaded.
[screenshot attachment] it's literally spelled out in an option in the game.
just not to the point where it can saturate 8 cores or whatever you're imagining. it's inherently not a heavily loaded game. you only start seeing the limits when you have tons of mods on top, and Mojang isn't thinking about the massive mod packs when they're testing. which one would they even choose? and what happens when it's the mods themselves that are programmed poorly, not vanilla Minecraft? vanilla on an iGPU from 2012 has no problem running smoothly with smooth lighting and fancy leaves turned on and a 16-chunk view distance. it's complicated... I don't know how much time they spend on maintenance vs new features. maybe the Java team is the one who comes up with everything and the teams behind all the other versions of the game just copy the homework in, or maybe they all get to pitch ideas. but since the non-Java version exists, they don't have a strong reason to stress about min-maxing the game so the most loaded world runs at 500 fps on a Core 2 Duo.

it's stable and what they build is objectively smooth.

 

their biggest performance issue isn't threading-related... it's chunk loading, which is disk I/O, not CPU calculations. and it's the first thing OptiFine tweaked, with lazy loading and background loading of extra chunks asynchronously when you stand still.

it might be a problem with the data format they are using for the chunks. but anyway, this problem grows roughly with the square of the view distance, because a longer view = more chunks to load at once every time you move.
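the general shape of that kind of background loading, as a hedged Java sketch (the file layout and readChunkFile are made up for illustration; real Minecraft packs chunks into region files): the disk read happens on a separate I/O thread so the main/render loop never blocks on it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncChunkLoad {
    static final ExecutorService IO_POOL = Executors.newFixedThreadPool(2);

    // Hypothetical per-chunk file; only here to stand in for "read chunk bytes off disk".
    static byte[] readChunkFile(int x, int z) {
        try {
            return Files.readAllBytes(Path.of("world", "chunk_" + x + "_" + z + ".dat"));
        } catch (IOException e) {
            return new byte[0]; // treat missing chunks as empty in this sketch
        }
    }

    public static void main(String[] args) {
        // Kick off the read on an I/O thread; the game loop keeps running meanwhile.
        CompletableFuture<byte[]> pending =
                CompletableFuture.supplyAsync(() -> readChunkFile(10, -4), IO_POOL);

        pending.thenAccept(data ->
                System.out.println("chunk ready, " + data.length + " bytes"));

        // ... main loop continues; the callback fires whenever the disk catches up.
        IO_POOL.shutdown();
    }
}
```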


@whispous @SquintyG33Rs @waddledee @Eigenvektor @wanderingfool2


@whispous

16 hours ago, whispous said:

You'll probably find the Minecraft Java does use more than one core at a time, just not for a single gameplay loop.

It might be using all of my cores when generating chunks, but that is the physical multithreading. I am talking about the multithreading that is software sided. Windows just scales all the workload to different cores, but it's the program's job to be better optimized and assign different threads for different jobs.

@wanderingfool2- But generating chunks and moving in the world are completely different tasks. Most games do this. No matter how bad your CPU is and how complex the map is, even if your CPU is pegged at 100% usage, it still won't stutter when loading the other parts of the map. Because they are completely separated from the thread that the other part of the game is running on. For example GTA 5, if your CPU is extremely trash, and if you travel too fast, the textures won't even load in time and map can start disappearing because it's not being loaded in real time. This should not be possible if the whole game was running on a single thread, where the game should stutter so much that it would just stop before the map was totally visible and clearly loaded.

 

@SquintyG33Rs

7 hours ago, SquintyG33Rs said:

bruh minecraft IS multithreaded.

I don't know what that threaded part means. I mean, there is still no "multi-threaded" option, right?

 

7 hours ago, SquintyG33Rs said:

 it's chunk loading. which is disk IO. not CPU calculations.

Totally disagreed. I have an SSD and it has no problems loading the chunks. The CPU calculations are the main bottleneck.

 

 

 

One more part that shows that multithreading can definitely improve performance is that if I play my world on a server, there is very little or no lag. This is because there is some part of actual multithreading happening here, except that fact that the other thread is running on the server, not your local machine.

 

I've always wondered why Minecraft was even made in Java in the first place. Java is definitely a powerful language, but it's not made to be used for making games. Maybe because Notch didn't know how to program in C++? Or maybe because it was supposed to be very lightweight, so they didn't bother, but that was still very hard for the long run. I mean, it did start off as a side project by Notch. And yes, even if you say it was to be hardware independent, what is the percentage of people who use Macs and Linux? Nowhere close to Windows. And porting the game to different systems wouldn't take much effort.
 

Also, we have had OptiFine for over 10 years, but Mojang has never stepped up and done it themselves. Although I don't know if it was because of legal issues, being lazy, not caring and focusing on updates, or just not being able to achieve it. I can understand about chunk loading lag, but what about the general optimization of fps? Even when everything is stable, the GPU is under 50% and fps is totally trashed.

This might not seem like a huge subject, but I have come across a lot of videos ranting about the performance, and the fact that after every update the performance gets worse and worse. Not kidding, every update it takes an additional amount of time to generate new worlds. Bedrock edition takes the W in this one scenario.

Also, no other game keeps being developed for over 10 years without making internal changes to match the computing capabilities of the current age.


11 hours ago, SquintyG33Rs said:

i'm sorry but native performance doesn't mean anything in this context.

Native in this context means code directly compiled to machine code for the specific CPU architecture (e.g. x86_64), as opposed to Java Bytecode which gets translated to architecture specific machine code at runtime.

 

Native performance in this context means performance of a binary containing machine code vs performance of a Java binary that gets compiled on the fly.

 

11 hours ago, SquintyG33Rs said:

now I do agree java isn't slow in the grand scheme of things. but python is considered very fast running in general... it still gets stomped when you copy the same code into C++ code (something ridiculous like 100x faster at least) and that's before you start getting clever and optimizing. 

My point above is that the same thing does not happen with Java.

 

If you translate Java to C++ code with no additional optimizations, the performance of the native binary will be within margin of error of the Java Bytecode.

 

You may be able to add some additional optimizations to C++ that aren't possible in Java (because you can do manual memory management). But if your Java code isn't performant, there are likely a ton of improvements you can do before moving to C++ makes sense.

 

You really can't compare Java to Python in this context; Python is an interpreted rather than compiled language and, because of the GIL, its threads can't even run Python code in parallel.


1 hour ago, NvidiaFirePro6900XXTX3DPRO said:

porting the game to different systems wouldn't take much effort.

It might seem like that from the outside, but I can tell you that this is absolutely false. GPU intensive applications are the most difficult to port between operating systems. Remember that modern Macs have a different architecture than PCs and significantly different graphics APIs. Java was chosen because it offloads lots of the "porting" effort to the JRE. 

 

To bring it back to your original question, if you slightly tweak it then you'll see that you've more or less answered it.

 

You started with "Can you improve Minecraft's performance by better multi-threading?" but what you really meant was more like "Can you improve Minecraft Java Edition's performance by better multi-threading?". The answer was basically "No, it's already multi-threaded". So then the follow-up would be: "How can you improve the performance then?" And Bedrock Edition, as you point out, is basically the answer to that question: re-write the game, but not in Java.

 

Minecraft Bedrock Edition is primarily a way to make more money by charging for mods and servers. But it's also an attempt to improve the performance and take better advantage of the available GPU hardware by doing stuff like ray tracing, which could be done in Java but is much easier in C++ targeting Windows only.

 


9 hours ago, SquintyG33Rs said:

bruh minecraft IS multithreaded.

From my understanding the main physics/game loop isn't multithreaded, and for larger servers the object interactions can greatly slow down the server.


8 hours ago, NvidiaFirePro6900XXTX3DPRO said:

It might be using all of my cores when generating chunks, but that is the physical multithreading. I am talking about the multithreading that is software sided.

I think you're confusing a few things here. On the physical side you have a CPU with multiple cores. The operating system's task scheduler can distribute processes and threads across these cores. For an app to use multiple cores at once, it must have more than one thread (or process) and the task scheduler must allocate these to more than one physical core (and there's no affinity mask binding threads to a specific core only).

 

If Minecraft is using all of your cores when generating chunks, it must be using multiple threads. That is very much "software sided multi-threading". If an application only has a single thread, there's nothing the OS or hardware can do to run it on multiple cores.

 

Generating chunks is likely very easy to parallelize. If you have n cores, split the world space into n parts, spawn a thread for each and let it generate its part of the world. Then do some magic to fit these parts together.
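For illustration, a minimal Java sketch of that idea (generateChunk is a stand-in placeholder, not Mojang's actual code): one worker thread per core, with each chunk generated as an independent task on a thread pool.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelChunkGen {
    // Placeholder for real terrain generation; assumed to be independent per chunk.
    static long generateChunk(int x, int z) {
        return ((long) x << 32) | (z & 0xffffffffL); // real code would build block data here
    }

    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> results = new ArrayList<>();

        // Queue a 32x32 region; each chunk is an independent task the pool spreads over cores.
        for (int x = 0; x < 32; x++) {
            for (int z = 0; z < 32; z++) {
                final int cx = x, cz = z;
                results.add(pool.submit(() -> generateChunk(cx, cz)));
            }
        }

        for (Future<Long> f : results) f.get(); // wait for every chunk, then stitch them together
        pool.shutdown();
    }
}
```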

 

8 hours ago, NvidiaFirePro6900XXTX3DPRO said:

Windows just scales all the workload to different cores, but it's the program's job to be better optimized and assign different threads for different jobs.

No. An app generally starts as many threads as it needs/wants (which can be based on the number of cores, but doesn't have to be). It's the task scheduler's job to distribute these threads across cores, to prioritize threads, etc. Your typical app does not and should not care which cores its threads are assigned to.

 

8 hours ago, NvidiaFirePro6900XXTX3DPRO said:

I've always wondered, why Minecraft was even made in Java in the first place?

Because Java apps can easily run on Windows, Linux, macOS, … without additional effort. Which is also why the engine is using OpenGL – This graphics API is available pretty much anywhere.

 

8 hours ago, NvidiaFirePro6900XXTX3DPRO said:

Java is definitely a powerful language, but's not made to be used in making games.

C++ was also not made with games in mind. That doesn't mean you can't use both of these languages to make games. The primary obstacle would be that most game specific APIs/libraries don't include Java bindings, which is a point in favor of using C++.

 

You can also do optimizations when it comes to memory management that aren't possible (or at least much more difficult) in a language with managed memory. The downside would be that it requires a lot more manual work to do manual memory management and is easy to mess up.

 

The primary reason the Bedrock edition is more performant is because it only runs on Windows and as such can be optimized for that one platform. I'm sure you could port the Java edition to Vulkan and then do stuff like multi-threaded render threads to increase performance, but that is far from a trivial task.

 

Of course that assumes that moving in the world can be parallelized to begin with, without requiring a ton of cross-thread synchronization which might quickly kill any performance benefit from going with multiple threads.

 

8 hours ago, NvidiaFirePro6900XXTX3DPRO said:

Maybe because Notch didn't know to program in C++? Or maybe because was supposed to be very lightweight, so they didn't bother, but that was still very bard for the long run. I mean it did start off like an of side project by Notch. And yes, even if you say that is because to be hardware independent, what are the percentage of people who use Macs and Linux? No where close to Windows. And porting the game to different systems wouldn't take much effort.

Porting a game from DirectX to e.g. OpenGL/Vulkan (Linux) or Metal (macOS) is definitely a lot of work. Java means you develop once, and you have a server that can run on Windows and Linux. He could probably have created a game client for Windows only and then created a separate server in Java, to run on Linux systems, but that would've been more effort. But as you said, it was a side project and maybe he was personally using Linux and didn't fancy creating a Windows only game.

 

8 hours ago, NvidiaFirePro6900XXTX3DPRO said:

Also, we have optifine from over 10 years, but Mojang has never stepped and done it themselves. Although I don't know if it was because of legal issues, being lazy, not caring and focusing on updates, or just not being able to achieve it. I can understand about chunk loading lag, what about the general optimization of fps? Even when everything is stable, the GPU is under 50% and fps is totally trashed.

You have to look at this through a corporate lens: Do these optimizations generate additional revenue? No. Then it's development effort with no payoff.


6 hours ago, NvidiaFirePro6900XXTX3DPRO said:

It might be using all of my cores when generating chunks, but that is the physical multithreading. I am talking about the multithreading that is software sided. Windows just scales all the workload to different cores, but it's the program's job to be better optimized and assign different threads for different jobs.

That is really hard even though it sounds deceptively simple. You first have to identify whether multithreading is the actual bottleneck. Then you need to investigate if you even can parallelise the workload in the first place, and how. The latter is often nontrivial to do properly because you need to think about memory usage, disk I/O, overhead from the threads etc. as well. Regarding the assignment of jobs to cores, in my experience with multithreaded stuff: generally, if you think you can schedule smarter than the system, think again, unless you know of specific optimisations for your problem. If you do it wrong and the threads end up getting in each other's way too much, it doesn't help in the best case and makes it slower than not multithreading at all in the worst case.

 

While a single problem may seem parallelisable, like maybe chunk loading, with complex things like games there is the other problem that that one problem is not the only thing that can multithread or wants your cores. Even in a utopia where everything multithreads perfectly and your CPU would be constantly pegged at 100% no matter what, you would still have the scheduling problem of who gets how much and when to do what.

 

6 hours ago, NvidiaFirePro6900XXTX3DPRO said:

I don't know what that threaded part means. I mean, there is still no "multi-threaded" option, right?

Usually threaded is synonymous with multithreaded in my experience, I would assume so here as well.

6 hours ago, NvidiaFirePro6900XXTX3DPRO said:

One more part that shows that multithreading can definitely improve performance is that if I play my world on a server, there is very little or no lag. This is because there is some part of actual multithreading happening here, except that fact that the other thread is running on the server, not your local machine.

Since you and the server are two separate machines, it is a bit different from multithreading and does not immediately prove that more multithreading would solve local performance issues. You'd have to dissect what the server does and what the client does, and then assess what kind of performance impact it would have if your machine had to do everything.

6 hours ago, NvidiaFirePro6900XXTX3DPRO said:

And porting the game to different systems wouldn't take much effort.

Not really true. Maybe a bit of an extreme example, but just look at console games that come to PC, even the recent ones, and the absolute garbage performance that they sometimes (dare I say often) have. Compare that to something like DOOM 2016 or Eternal that (proverbially) run on just about anything and you can see that doing it right takes effort and knowledge of the system you are porting it to and that you can't always just plonk it down somewhere else and expect it to work with minimal updates or changes.

7 hours ago, NvidiaFirePro6900XXTX3DPRO said:

Also, we have optifine from over 10 years, but Mojang has never stepped and done it themselves. Although I don't know if it was because of legal issues, being lazy, not caring and focusing on updates, or just not being able to achieve it. I can understand about chunk loading lag, what about the general optimization of fps? Even when everything is stable, the GPU is under 50% and fps is totally trashed.

Profit-driven incentives aside, there is another side to this I think. A modder or small modding team can focus entirely on the single thing that that mod is supposed to do. Optifine didn't have to develop the game; it had a stepping stone of what was there, could focus entirely on optimising that, and had all the time in the world to spend on that single aspect. Meanwhile Mojang has to run the studio, support the game, think of what to do next for their game along their vision, develop those next ideas etc. They have a thousand more things to worry about than a modder. The GPU not being maxed out means that it doesn't get sent more data to process, which means the CPU is one way or another at its limit of what it can send to the GPU. If there are unused cores on the CPU, then you go full circle back to the original question of whether CPU power is the actual bottleneck and if/how more parallelisation is even possible.

 

7 hours ago, NvidiaFirePro6900XXTX3DPRO said:

But generating chunks and moving in the world are completely different tasks. Most games do this. No matter how bad your CPU is and how complex the map is, even if your CPU is pegged at 100% usage, it still won't stutter when loading the other parts of the map. Because they are completely separated from the thread that the other part of the game is running on. For example GTA 5, if your CPU is extremely trash, and if you travel too fast, the textures won't even load in time and map can start disappearing because it's not being loaded in real time. This should not be possible if the whole game was running on a single thread, where the game should stutter so much that it would just stop before the map was totally visible and clearly loaded.

But is that what you want? Do you want the game to break and softlock/kill you because you fell through the world? Or do you want the game to allow itself to catch up and make sure things load properly? This is exactly one of those core problems. In one case people will complain about pop-ins and the world breaking, in the other case people will complain about poor performance and stuttering. There is no win and both are a crap experience, only with the former resulting in a few more laughs maybe. And if the CPU is maxed out at 100%, then it is doing all it can to try and keep up with the demand. If, in the worst case, that demand from game logic, loading, rendering etc. exceeds what the CPU can do, then threaded or not it will stutter or freeze anyway.


A lot of the code simply can't be multithreaded because it just isn't parallel in nature, and for the parts that can be parallel, it is a real pain in the ass to write and debug. Outside of games, it is oftentimes easier to spin up processes rather than threads; at least you won't need to deal with race conditions and semaphores as much.
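A small Java example of the kind of bug that makes this painful (illustrative, not from Minecraft): two threads bumping a shared counter without synchronization silently lose updates, while the synchronized version is always correct but pays a cost on every increment.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int plain = 0;                              // unsynchronized shared state
    static AtomicInteger atomic = new AtomicInteger(); // thread-safe counterpart

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                plain++;                  // read-modify-write race: increments get lost
                atomic.incrementAndGet(); // correct, but every call pays a synchronization cost
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("plain  = " + plain);        // usually less than 2,000,000
        System.out.println("atomic = " + atomic.get()); // always 2,000,000
    }
}
```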


14 hours ago, Eigenvektor said:

If you translate Java to C++ code with no additional optimizations, the performance of the native binary will be within margin of error of the Java Bytecode.

 

You may be able to add some additional optimizations to C++ that aren't possible in Java (because you can do manual memory management). But if your Java code isn't performant, there are likely a ton of improvements you can do before moving to C++ makes sense.

 

try it... It's not as fast... it's not a comment on the speed of doing hello world.

if you write the same WES (weak encryption system) differential cryptanalysis S-box cracking algorithm in Java, Python, and C++, you do not get the results in the same timeframe at all. Python will take 3 days to get you a partial key out and C++ does it in 5 min. and Java does not do it in 5 min, let me tell you.

 

 

14 hours ago, maplepants said:

 Java was chosen because it offloads lots of the "porting" effort to the JRE. 

literally 100% of the porting is gone with java. they don't even have to consider it; as long as the machine has enough RAM to load the program, it will run.

 

16 hours ago, NvidiaFirePro6900XXTX3DPRO said:

Totally disagreed. I have an SSD and it has no problems loading the chunks. The CPU calculations are the main bottleneck.

unfortunately it doesn't really work that way. modern SSDs are incredibly fast, yes, but that doesn't mean the CPU is the bottleneck. otherwise we wouldn't even have RAM installed in our PCs; we'd just run everything directly off the drive, because 1TB for $100 is so much cheaper than 32GB for $100. the fact that this isn't the reality should make it obvious that asset loading is an I/O bottleneck, not a computational one. even with direct memory access we're talking about bringing 30s of loading on SATA down to 2s of loading on NVMe. 2s is very short but still noticeable. and asking for 100 chunks at once, even though they are small individually, clogs the queue.


1 hour ago, SquintyG33Rs said:

if you write the same WES (weak encryption system) Differential Crypanalysis S-Box cracking algorithm in java python and C++ you do not get the results in the same timeframe at all.

I'm sure you can find tons of examples where C++ beats Java. Doesn't mean it is slower for the large majority of tasks. I assume this algorithm uses a ton of array and bit manipulation like many encryption related tasks. You're unlikely to need similar techniques in a game like Minecraft.

 

1 hour ago, SquintyG33Rs said:

python will take 3 days to get you a partial key out and C++ does it in 5min. and java does not do it in 5min, let me tell you.

And how long does it take? There's a big difference between 3 days and 5 minutes. I assume Java is somewhere in the 5:30 minute range, or you'd have been honest enough to mention it, rather than coming up with a misleading Python comparison, once again, in a discussion about the performance of Java.

 

If you have an open source C++ implementation I can translate in an hour or two, I'll gladly give it a try.


4 hours ago, SquintyG33Rs said:

literally 100% of the porting is gone with java. they don't even have to consider it as long as the machine has enough ram to load the program it will run.

Makes sense. I've never done any heavy-duty GUI work with Java, just command line tools, web services, and some basic GUIs. But this was a long time ago, and at that time, if you were okay with the GUI looking and feeling like garbage everywhere, you didn't have to do anything. But if you wanted it to feel at home on Mac OS X and on Windows, you had to do some OS-specific stuff that users expect.

 

I guess because games never have to "feel at home" on their target OS, they just need to run, it makes sense that this isn't needed. 


On 4/3/2023 at 10:19 PM, NvidiaFirePro6900XXTX3DPRO said:

Minecraft Java Edition is one of the most horribly optimized games in the gaming history. At least the bedrock edition is what Minecraft really should have been if it were not Microsoft just ruining it. Minecraft was built in the era when PCs only had 2 to 4 cores at most, so multithreading much of a thing back then. I have noticed one thing is that Minecraft's processing pipeline is very much on the same thread. But what if you could use 1 or more threads for generating chunks, and other threads for actually interacting in the game? Just like how it mostly happens in other games? right? Or no? I am not sure, I am actually asking.

you could assign more ram to the game 


Portability with Java isn't necessarily true. Everything inside Oracle's JDK is pretty much compile once, run anytime and anywhere, but there are certain native OS specifics which can break cross-platform compatibility, even for things like Java.

 

The Java Native Interface is an obvious one; another is executing console commands, e.g. Runtime.getRuntime().exec and so on, for obvious reasons. Of course there are also third-party libraries outside of the JDK that need native binaries. One example is JavaFX: ever since Oracle removed it from the JDK, you need to download the specific JavaFX binaries for your operating system, which is inconvenient, unlike Java Swing, where the user just needs to install the right JVM for their OS and it will pretty much run like on any other operating system. The JavaFX binaries on Linux actually have lots of bindings to other GUI libraries such as GTK. This means you either have to exclude JavaFX from your jar file and rely on users to install the right JavaFX for their operating system beforehand, or you will need to include the right version inside each executable you ship for each operating system, in the same manner as if you were cross-compiling executables for each target OS. Although if you use tools like jpackage and jlink, this process can be entirely automated, as long as you configure it properly of course.


21 hours ago, filpo said:

you could assign more ram to the game 

That is the most noob tip you could give. It is not even optimization; it's just giving MC more resources to use. That is completely different from what optimization means. But yes, I have given it more.


1 hour ago, Sakuriru said:

Have you tried installing OptiFine or any other Minecraft mods that address its performance issues?

Now that is an even more noob statement. If you had analyzed my posts, you'd see I know a lot more than just installing OptiFine.


On 4/4/2023 at 8:02 PM, SquintyG33Rs said:

try it... It's not as fast... it's not a comment on the speed of doing hello world.

if you write the same WES (weak encryption system) Differential Crypanalysis S-Box cracking algorithm in java python and C++ you do not get the results in the same timeframe at all. python will take 3 days to get you a partial key out and C++ does it in 5min. and java does not do it in 5min, let me tell you.

 

On 4/4/2023 at 9:38 PM, Eigenvektor said:

And how long does it take? There's a big difference between 3 days and 5 minutes. I assume Java is somewhere in the 5:30 minute range, or you'd have been honest enough to mention it, rather than coming up with a misleading Python comparison, once again, in a discussion about the performance of Java.

 

If you have an open source C++ implementation I can translate in an hour or two, I'll gladly give it a try.

The way I always look at optimizations/speed of an implementation is how much effect it really has when all is said and done.  Tasks built to run once or twice per program run don't need to be optimized (unless they are extremely slow)... with a lot of algorithms Java will handle the task nicely, and ultimately the algorithm you choose/design choices you make will likely have a larger impact on your code than comparing C/C++ to Java (which is why I agree with what @Eigenvektor said)

 

Take the following prime sieve (same algorithm different languages)

Logic dictates that assembly should be the fastest, yet the difficulty of implementing the algorithm hinders its ability.  Yet you have Java and C++ within 10% performance of each other (during earlier renditions of this, Java was actually outperforming C++ iirc, until people highly optimized both).  I think in the case of Minecraft the fact it uses Java really doesn't hinder it; rather it's the general design choices that hinder it (but it's almost always a tradeoff of work/thought required vs speed)


Regarding the original topic, Amdahl's law is applicable here.  Naively one might think that if you throw more cores at a problem, it will get faster proportional to the number of cores. This is true for a very particular class of problems (that are called embarrassingly parallel), but isn't generally true.


How fast a program will get with parallelism is dependent on how much of it you can parallelize. If there's 10% of the code that must run in a single thread, no matter how many cores you throw at the problem, even a billion core CPU from the future, it simply will not get faster than 10x (which is 1/10% = 1/0.1).
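The same bound written out as a quick Java calculation (p is the parallelizable fraction; the numbers in the comments are just what Amdahl's formula S(n) = 1 / ((1 - p) + p / n) gives):

```java
public class Amdahl {
    // Amdahl's law: speedup with n cores when fraction p of the work parallelizes.
    static double speedup(double p, int cores) {
        return 1.0 / ((1.0 - p) + p / cores);
    }

    public static void main(String[] args) {
        // 90% parallel code: even with absurd core counts the speedup approaches 10x.
        System.out.printf("8 cores:  %.2fx%n", speedup(0.9, 8));          // ~4.7x
        System.out.printf("64 cores: %.2fx%n", speedup(0.9, 64));         // ~8.8x
        System.out.printf("1M cores: %.2fx%n", speedup(0.9, 1_000_000));  // ~10x
    }
}
```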

 

Games programming in particular is chock full of such critical segments that refuse to parallelize. At a lower level, interaction between threads isn't entirely free either. It introduces synchronization mechanisms, or at least memory barriers, that are somewhat costly. It's entirely possible to refactor a single-threaded program into multiple threads and then find it doesn't run much faster than, or even runs slower than, the single-threaded program.


2 hours ago, Marginalia said:

Regarding the original topic, Amdahl's law is applicable here.  Naively one might think that if you throw more cores at a problem, it will get faster proportional to the number of cores. This is true for a very particular class of problems (that are called embarrassingly parallel), but isn't generally true.


How fast a program will get with parallelism is dependent on how much of it you can parallelize. If there's 10% of the code that must run in a single thread, no matter how many cores you throw at the problem, even a billion core CPU from the future, it simply will not get faster than 10x (which is 1/10% = 1/0.1).

 

Games programming in particular is chock full of such critical segments that refuse to parallelize. At a lower level, interaction between threads isn't entirely free either. It introduces synchronization mechanisms or at least memory barriers that are somewhat costly. It's entirely possible to refactor a single threaded program into multiple threads, and then find it doesn't run much faster or even slower than the single-core program.

That's also exactly why, for day-to-day business applications, we get more gain from choosing IPC × clock speed over core count.
