It is fascinating to break down how the .NET Framework works and uncover its internal components and functionality. I love taking things apart so I can gain a better understanding of what I am dealing with.
In a previous blog post, we talked about what an assembly is. In this post, I will share my thoughts about the .NET runtime, also known as the CLR.
The .NET Common Language Runtime (CLR) is a runtime environment and execution engine provided by the .NET Framework. It offers robust application support with a very small memory footprint and performs its operations very quickly (on the order of 15,000 managed method calls per second at 27.6 MHz).
You can think of the CLR as the interface between .NET applications and the operating system. This is why .NET applications are called managed code: they are managed by the CLR.
The CLR provides the following services for the programming languages that target it:
- Compiles Microsoft Intermediate Language (MSIL) into native code.
- Handles garbage collection.
- Handles exceptions.
- Enforces code access security.
- Verifies type safety.
- Provides debugging services.
For example, since the runtime uses exceptions to report errors, any language that uses the CLR also gets its errors reported via exceptions. And since the CLR allows thread creation, any language that targets the runtime can create threads.
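To make this concrete, here is a small C# sketch that uses both of those runtime services at once: it creates a thread and reports an error from it via an exception. Any CLR language (VB.NET, F#, and so on) gets these same two facilities from the runtime, not from the language itself.

```csharp
using System;
using System.Threading;

class RuntimeServicesDemo
{
    static void Main()
    {
        // Thread creation and exceptions are CLR services,
        // so every language targeting the runtime can use them.
        var worker = new Thread(() =>
        {
            try
            {
                throw new InvalidOperationException("error from a worker thread");
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine(ex.Message);
            }
        });
        worker.Start();
        worker.Join(); // wait for the worker thread to finish
    }
}
```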
Because the runtime operates only on Intermediate Language (IL) code, it is not aware of which programming language you are using. This means developers can pick whichever programming language they prefer without losing anything.
When you write a .NET application in any language that targets the CLR, the language compiler (invoked by Visual Studio, for example) checks the syntax and analyzes your source code, then transforms it into an assembly file containing IL. At run time, the CLR's just-in-time (JIT) compiler transforms the IL in that assembly into machine language (native code) that the CPU can understand and execute.
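You can actually peek at the IL the compiler stored in your assembly from within your own program. A minimal sketch using reflection (the `IlPeek` class and `Add` method here are just illustrative names):

```csharp
using System;
using System.Reflection;

class IlPeek
{
    static int Add(int a, int b) => a + b;

    static void Main()
    {
        // The compiler stored Add as a sequence of IL bytes inside the
        // assembly; the JIT compiles those bytes to native code the first
        // time the method is called.
        MethodInfo mi = typeof(IlPeek).GetMethod(
            "Add", BindingFlags.NonPublic | BindingFlags.Static);
        byte[] il = mi.GetMethodBody().GetILAsByteArray();
        Console.WriteLine($"Add consists of {il.Length} bytes of IL");
    }
}
```

Tools like ildasm.exe show the same IL in disassembled, human-readable form.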
Let us walk through what happens when you run a managed executable file:
- Windows checks the EXE header to see if the application requires a 32-bit or 64-bit address space.
- Windows loads the x86, x64, or ARM version of MSCorEE.dll into the process’s address space.
- The process’s primary thread calls a method defined inside MSCorEE.dll.
- This method initializes the CLR, loads the EXE assembly, and then calls its entry point method (Main).
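The entry point mentioned in the last step is just the familiar Main method. A minimal managed executable that the CLR would load and run this way:

```csharp
using System;

class Program
{
    // After MSCorEE.dll initializes the CLR and loads this assembly,
    // the runtime calls this entry point method.
    static void Main(string[] args)
    {
        Console.WriteLine("Hello from managed code");
    }
}
```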
One of the biggest benefits of compiling source code to IL is code security and verification. The CLR performs a process called verification while compiling the code from IL to native CPU instructions. It checks the IL and ensures that everything is safe, verifying, for example, that each method is called with the right number of parameters and that each parameter passed to a method is of the correct type.
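A related effect of this type checking is visible from C# itself. The sketch below is not IL verification proper (which happens at JIT time), but it illustrates the same guarantee: the runtime checks the actual type of an object before a cast succeeds, so an ill-typed operation raises an exception instead of silently reinterpreting memory.

```csharp
using System;

class TypeSafetyDemo
{
    static void Main()
    {
        object boxed = "I am a string";
        try
        {
            // The CLR checks the real runtime type of `boxed` here;
            // since it is a string, the cast to int is rejected.
            int n = (int)boxed;
            Console.WriteLine(n);
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("cast rejected by the runtime");
        }
    }
}
```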
Another benefit is that you can run multiple managed applications in a single OS process. Why is this a big deal?
Well, in Windows each process normally gets its own virtual address space, because otherwise one application could read from and write to another application's memory. Because managed code is verified, it is known not to access memory improperly and cannot adversely affect another application's code. This means you can run multiple managed applications in a single Windows virtual address space. Since spawning many processes hurts OS performance, running multiple managed applications inside the same process is definitely welcome.
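In the .NET Framework, the mechanism behind this is the application domain (AppDomain): each AppDomain acts as an isolated "application" inside one OS process. A minimal sketch (note that this API works on the .NET Framework; AppDomain creation is not supported on .NET Core / modern .NET):

```csharp
using System;

class AppDomainDemo
{
    static void Main()
    {
        // Create a second isolated "application" inside this same process.
        AppDomain domain = AppDomain.CreateDomain("SecondApp");
        Console.WriteLine(domain.FriendlyName);

        // Tear the domain down without ending the host process.
        AppDomain.Unload(domain);
    }
}
```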
This leads to another question: what is safe and unsafe code? In simple terms, safe code does not access memory addresses directly or manipulate raw bytes, while unsafe code does. Safe code is verifiably safe, like the code managed by the CLR. Nevertheless, you can still write unsafe code in C#, but you have to mark every method that contains unsafe code with the unsafe keyword, and the C# compiler requires that you compile your source code with the /unsafe compiler switch.
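A minimal example of what that looks like in practice; this sketch takes the address of a local variable and writes through the pointer, which only compiles with the /unsafe switch:

```csharp
using System;

class UnsafeDemo
{
    // The unsafe keyword marks this method as containing pointer code;
    // the whole file must be compiled with the /unsafe compiler switch.
    static unsafe void Main()
    {
        int value = 42;
        int* p = &value;   // take the address of a local variable
        *p = 100;          // write through the raw pointer
        Console.WriteLine(value); // prints 100
    }
}
```

Because such code bypasses verification, the CLR cannot guarantee its memory safety, which is exactly why C# forces you to opt in so explicitly.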