Nearly two decades after their introduction, C# and .NET remain major parts of the enterprise software world. C# and .NET are often described as Microsoft’s response to Java, a managed-code compiler system paired with a universal runtime, and many of the comparisons between C and Java therefore also hold for C and C#/.NET.
Like Java (and to some extent Python), .NET offers portability across a variety of platforms and a vast ecosystem of integrated software. These are no small advantages given how much enterprise-oriented development takes place in the .NET world.
When you develop a program in C#, or any other .NET language, you can draw on a universe of tools and libraries written for the .NET runtime.
Another Java-like .NET advantage is JIT optimisation. C# and .NET programs can be compiled ahead of time, as C programs are, but they’re mainly just-in-time compiled by the .NET runtime and optimised with information gathered as the program runs. JIT compilation allows optimisations of a running .NET program, such as recompiling frequently called methods more aggressively once the runtime has observed how they’re actually used, that simply aren’t possible in statically compiled C.
Like C (and Java, to a degree), C# and .NET provide various mechanisms for accessing memory directly. Heap, stack, and unmanaged system memory are all accessible via .NET APIs and objects, and developers can use C#’s unsafe mode to achieve even greater performance, as the sketch below shows.
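As a minimal illustration of what direct memory access looks like in C#, the sketch below touches stack memory with stackalloc and unmanaged memory with Marshal.AllocHGlobal. It assumes a project that enables unsafe blocks (AllowUnsafeBlocks set to true); the class and variable names are purely illustrative.

```csharp
using System;
using System.Runtime.InteropServices;

class DirectMemoryDemo
{
    // Compiling this requires <AllowUnsafeBlocks>true</AllowUnsafeBlocks> in the project file.
    static unsafe void Main()
    {
        // Stack memory: stackalloc carves a small buffer out of the current stack frame.
        Span<int> stackBuffer = stackalloc int[4];
        for (int i = 0; i < stackBuffer.Length; i++)
            stackBuffer[i] = i * i;

        // Unmanaged memory: allocated outside the garbage-collected heap.
        IntPtr native = Marshal.AllocHGlobal(4 * sizeof(int));
        try
        {
            int* p = (int*)native;
            for (int i = 0; i < 4; i++)
                p[i] = stackBuffer[i];   // raw pointer writes, no bounds checks

            Console.WriteLine(p[3]);     // prints 9
        }
        finally
        {
            // The garbage collector never sees this block, so it must be freed explicitly.
            Marshal.FreeHGlobal(native);
        }
    }
}
```

The unmanaged block is invisible to the garbage collector, which is exactly what makes it cheap to work with and why the explicit FreeHGlobal call is mandatory.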
None of this comes for free, though. Managed and unmanaged objects cannot be exchanged arbitrarily, and marshaling data between them incurs a performance cost. Maximising the performance of a .NET application therefore means keeping traffic across the managed/unmanaged boundary to a minimum, as the P/Invoke sketch below illustrates.
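To make that boundary concrete, here is a small P/Invoke sketch that calls the C library’s strlen from C#. The libc.so.6 library name is an assumption about a glibc-based Linux system (other platforms expose the C runtime under different names), and the class name is illustrative.

```csharp
using System;
using System.Runtime.InteropServices;

class MarshalingDemo
{
    // P/Invoke declaration for the C runtime's strlen.
    // "libc.so.6" assumes a glibc-based Linux system.
    [DllImport("libc.so.6", EntryPoint = "strlen")]
    private static extern nuint strlen(string s);

    static void Main()
    {
        // Each call crosses the managed/unmanaged boundary: the runtime copies the
        // managed string into a native, null-terminated buffer before invoking strlen.
        Console.WriteLine(strlen("hello"));   // prints 5
    }
}
```

A single call like this is negligible, but paying the transition and string-copy cost for every element inside a hot loop adds up quickly, which is why it pays to batch work on one side of the boundary rather than chattering across it.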
When you can’t afford the cost of crossing between managed and unmanaged memory, or when the .NET runtime is a poor fit for the target environment (e.g., kernel space) or may not be available at all, C is what you need. And unlike C# and .NET, C gives you direct memory access by default.