
C# vs C performance

Art Vandelay

The other day, I was bored and decided to see whether C# or C was faster when running over a long loop. 

 

Times to run through various algorithms tended to be significantly worse in C# than in C. A prime number algorithm I found, which didn't use anything from code libraries other than timing functions, took almost twice as long in C# as the same loop in C. I was compiling with TCC, which isn't the best optimizing compiler, so the difference seems strangely large.

 

Even with the latency that JIT compilation adds, it shouldn't make much of a difference on a small loop that runs for 10 seconds. Once the MSIL is JIT-compiled, the same kinds of optimizations should be applied to it. What could be causing this? Poor cache performance because C# is a memory hog?
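One way to sanity-check how much JIT latency actually matters is to time the same method twice within one process; broadly speaking, only the first call pays the one-time JIT cost. A minimal sketch, assuming a C# console app with System.Diagnostics.Stopwatch and a placeholder workload rather than the actual prime algorithm:

using System;
using System.Diagnostics;

class JitWarmup
{
    // Placeholder workload; stands in for whatever loop is being benchmarked.
    static long Work()
    {
        long sum = 0;
        for (int i = 1; i < 50000000; i++)
            sum += i % 7;
        return sum;
    }

    static void Main()
    {
        // The first call includes the one-time JIT compilation of Work();
        // the second call measures only the steady-state loop speed.
        for (int run = 1; run <= 2; run++)
        {
            var sw = Stopwatch.StartNew();
            long result = Work();
            sw.Stop();
            Console.WriteLine("Run " + run + ": " + sw.ElapsedMilliseconds + " ms (checksum " + result + ")");
        }
    }
}

If both runs come out nearly identical, the gap against C is coming from the generated loop code itself, not from compilation latency.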


C is a lower-level language that is meant for mathematical calculations. Very fast and efficient, but with few features.

C# is higher level and meant more for graphics, not math.




Well, not really: C# is not really meant for graphics, at least not for 3D. That's mostly C/C++'s domain.

 

I would agree if we're talking about 2D/GUI.

 

I think it's more accurate to say that if you want speed, you want C/C++; if you want language features, you want C#.

C# is by nature slower, which really matters in 3D graphics.


C gives you much more access to things like memory management, making it much better suited to situations where performance and low-level access are required, e.g. drivers.

C# is intended to be a high-level language that is more maintainable and more human-readable, often at the expense of performance. Who cares if opening a tab is 1 ms faster in C than in C# if you can't maintain the UI at all, whereas a 1 ms delay in a driver written in C# instead of C is probably unacceptable.

Similarly, C# is much more appropriate for a large batch-processing application (e.g. for use in a bank), where performance is largely irrelevant but you want a team of programmers to be able to work on it.

 

Different use cases, performance isn't always the most important thing.


C doesn't really do anything extra for you the way C# does. Many languages that came after C built extra things, like garbage collection, into the language to make development faster for programmers. This comes at the cost of execution speed. JIT compilation has improved their speed, but the runtime is still doing more work than a language like C. It's also possible that because C has been around longer and has fewer features to worry about, it has better compile-time optimizations.

 

Execution speed isn't everything, but when you need it, C is near the top.

 

edit: After a little further searching, it does indeed seem like C# can, at least in some cases, be faster than C, although that still seems debatable. Example



I'm just wondering specifically why the performance of C# is so drastically worse. Other than the compile delay, it theoretically should be just as fast or even faster.

 


From everything I've read, it seems like garbage collection is actually fairly efficient. I wasn't even using garbage collection in some of the tests where C# performed worse.

 

With JIT, the code in a long-running loop should be compiled to machine code, just like the C version is. At the very least, when doing simple operations, it shouldn't have much more to do than C.

 

Also, I was not using a good compiler for optimizations. TCC was not designed with heavy optimization in mind. 



Maybe posting the code and how you're testing will help someone provide a more accurate answer. It's also possible your tests aren't set up in a way that gives accurate results.



Well, the only one that I haven't managed to lose in my random garbage folder is this one:

 

int n = 1000000;
int count = 0;
long a = 2;
while (count < n)
{
    long b = 2;
    int prime = 1; // to check if found a prime
    while (b * b <= a)
    {
        if (a % b == 0)
        {
            prime = 0;
            break;
        }
        b++;
    }
    if (prime > 0)
        count++;
    a++;
}
 
It looks the same in both languages, but runs a lot worse in C#; it took about double the time.
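For anyone who wants to reproduce the C# side, a self-contained sketch of how that snippet might be wrapped with timing (the loop is kept essentially as posted, including the long counters; the surrounding Main and Stopwatch harness are added here and are not from the original test):

using System;
using System.Diagnostics;

class PrimeBench
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();

        int n = 1000000;
        int count = 0;
        long a = 2;
        while (count < n)
        {
            long b = 2;
            int prime = 1; // to check if found a prime
            while (b * b <= a)
            {
                if (a % b == 0)
                {
                    prime = 0;
                    break;
                }
                b++;
            }
            if (prime > 0)
                count++;
            a++;
        }

        sw.Stop();
        Console.WriteLine(count + " primes found, last one " + (a - 1) + ", in " + sw.ElapsedMilliseconds + " ms");
    }
}

Changing a and b from long to int in this version is the single change discussed further down the thread.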

In C, int is only guaranteed to be at least 16 bits and long at least 32 bits; in C#, int is always 32 bits and long is always 64 bits. That probably explains it: your C compiler is treating long as a 32-bit type, while the C# long is doing 64-bit arithmetic.
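The C# half of that is easy to confirm, since the language pins these widths regardless of platform; a quick sketch, not from the thread:

using System;

class TypeSizes
{
    static void Main()
    {
        // In C#, int and long have fixed widths on every platform.
        Console.WriteLine("int:  " + sizeof(int) * 8 + " bits");   // always 32
        Console.WriteLine("long: " + sizeof(long) * 8 + " bits");  // always 64
    }
}

The C widths, by contrast, depend on the compiler and platform, which is why the same source can behave differently under TCC or MinGW.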

 

Also, use code tags next time. And that code has bugs.

That seems like it's the case. If I change the C# long to Int32 on variables a and b, the algorithm executes about twice as fast.

 

I'm a little confused as to why this is the issue. I tested to make sure, and it doesn't seem to be an overflow problem; in both cases the value of a ends up the same at the end of the function.

 

I don't see why int would be more efficient than long on a 64-bit system. Could it be related to Visual Studio being a 32-bit application? I've run the app both from within Visual Studio and from the exe in the release folder without any difference in the result (testing both x86 and x64 build targets).
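One way to rule out the 32-bit-process theory is to have the benchmark report what it is actually running as (a small sketch, assuming .NET Framework 4 or later):

using System;

class ProcessInfo
{
    static void Main()
    {
        // What matters is the benchmark process itself, not the IDE:
        // Visual Studio being 32-bit doesn't force the compiled app to be.
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
        Console.WriteLine("Pointer size:   " + (IntPtr.Size * 8) + " bits");
    }
}

If this prints False even with an x64 build target, the project's platform settings (e.g. a "Prefer 32-bit" style option on AnyCPU builds) are worth a second look.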


It doesn't matter that the system is 64-bit; the speed of arithmetic operations such as addition and multiplication partly depends on the width of the numbers. So if you're working with wider numbers, the program is going to be slower.
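That claim is testable in isolation; in the prime loop, the remainder operation is the step most likely to care about operand width. A rough sketch of such a comparison (not a rigorous benchmark, and the numbers will vary with CPU and JIT):

using System;
using System.Diagnostics;

class WidthTest
{
    static void Main()
    {
        const int iterations = 100000000;

        // Remainders computed on 32-bit operands.
        var sw = Stopwatch.StartNew();
        long sum32 = 0;
        for (int i = 1; i <= iterations; i++)
            sum32 += 123456789 % i;
        sw.Stop();
        Console.WriteLine("int remainders:  " + sw.ElapsedMilliseconds + " ms");

        // The same loop with 64-bit operands.
        sw.Restart();
        long sum64 = 0;
        for (long i = 1; i <= iterations; i++)
            sum64 += 123456789L % i;
        sw.Stop();
        Console.WriteLine("long remainders: " + sw.ElapsedMilliseconds + " ms");

        // Print the sums so the loops can't be optimized away entirely.
        Console.WriteLine(sum32 + " " + sum64);
    }
}

If the two differ noticeably, operand width matters even when everything fits comfortably in cache; if they don't, the cache explanation raised later in the thread looks more likely.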


Regarding int being at least 16 bits: I can assign (2^32)/2 - 1 to an int in TCC and MinGW, as well as in the older non-C99 version of TCC I downloaded by accident. long int doesn't seem to be a 64-bit integer, though.

Does C have a 64-bit integer data type?

 

As for the bugs, I didn't actually write that algorithm. I found it on Google and it seemed like a good one that would take a long time to run.

 


I just tried changing the longs to Int32 and it now takes about as much time as when written in C.

 

I guess that must have been the problem. Now it runs about 150 ms slower than TCC and 450 ms slower than GCC. That seems a lot more in line with what I was expecting.

 


As for why int is faster than long here, it's probably due more to the decreased cache performance of larger data types than to the arithmetic itself.


To answer the 64-bit question: in C it's long long (or, since C99, the fixed-width int64_t from <stdint.h>).

 


That's similar to what I said. The cache itself doesn't matter (unless you run out of it, which you don't here); if you have larger data, it takes more time to process.

