
Deploying a self-contained .NET Core application on Linux and running it as a daemon process


Recently I got a chance to play a little bit with .NET Core on Linux: I developed an application in .NET Core, deployed it on an Ubuntu 16.04 machine, and configured it to run as a daemon process.

In the process I learnt quite a few things which might be helpful for beginners in .NET Core, or for folks who would like to play around with it.

Choosing the deployment model:
Two types of deployments can be created for .NET core applications:

  1. Framework-dependent deployment: As the name implies, a framework-dependent deployment (FDD) relies on a shared, system-wide version of .NET Core being present on the target machine. Because .NET Core is already present, it can be shared by multiple applications hosted on the target machine.
  2. Self-contained deployment: Unlike an FDD, a self-contained deployment does not rely on any shared components being present on the target machine. All components, including the .NET Core libraries and the .NET Core runtime, are included with the application and are isolated from other .NET Core applications.
I chose a self-contained deployment since I didn't intend to run any other applications on the target system that would share a system-wide .NET Core runtime.

Preparing for Self-Contained Deployment:
  1. Open "project.json" and remove the following JSON property:
    "type": "platform"
    Removing the "type": "platform" attribute indicates that the framework is provided as a set of components local to our app, rather than as a system-wide platform package.
  2. Create a "runtimes" section in your project.json file that defines the platforms your app targets, and specify the runtime identifier of each platform that you target. See the Runtime Identifier Catalog for a list of runtime identifiers. For example, the following "runtimes" section indicates that the app runs on 64-bit Windows 10 and on 64-bit Ubuntu 16.04.
    "runtimes": {
        "ubuntu.16.04-x64": {},
        "win10-x64": {}
      }
    
    A complete "project.json" would look something like this:
    {
      "version": "1.0.0-*",
      "buildOptions": {
        "debugType": "portable",
        "emitEntryPoint": true,
        "copyToOutput": [ "appsettings.json", "anyfile.txt" ] // any files you want copied to the output directory
      },
      "dependencies": {
        // All your dependencies go here. These packages get restored during build.
        "Microsoft.EntityFrameworkCore.SqlServer": "1.1.0",
        "Microsoft.NETCore.App": "1.1.0"
      },
      "frameworks": {
        "netcoreapp1.0": {
          "imports": "dnxcore50"
        }
      },
      "runtimes": {
        "ubuntu.16.04-x64": {},
        "win10-x64": {}
      },
      "publishOptions": {
        "include": [ "appsettings.release.json", "anyfile.txt" ] // any files you want included in the published package
      }
    }
    
Publishing your application package:
After you have debugged and tested the program, you can create the files to be deployed with your app for each platform that it targets by using the "dotnet publish" command as follows:
dotnet publish -c release -r "ubuntu.16.04-x64" -o "publishDirectory"
Once you execute the above command, your app will be compiled with the "release" configuration for the "ubuntu.16.04-x64" runtime and published to "publishDirectory".
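The publish step can be wrapped in a small script that also zips the output, so there is a single file to copy to the target machine. This is a sketch; the runtime identifier, output directory, and archive name mirror the examples in this post and should be adjusted to your project:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed values -- adjust to your project layout.
RID="ubuntu.16.04-x64"
OUT_DIR="publishDirectory"

# Restore NuGet packages declared in project.json.
dotnet restore

# Publish a self-contained build for the target runtime.
dotnet publish -c release -r "$RID" -o "$OUT_DIR"

# Zip the publish output so a single file can be transferred.
zip -r publishedPackage.zip "$OUT_DIR"
```

The resulting publishedPackage.zip is what the deployment script below copies to the target system.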

Deploying the package to the target system:
In this process, two tools come in very handy:

  • pscp.exe: An SCP client, i.e. a command-line tool for securely copying files to your target machine.
  • plink.exe: A command-line interface to the PuTTY back ends; it is itself an SSH and Telnet client.
In my case, the deployment process was as simple as transferring the published package to the target system and executing a few commands to install it as a daemon process.
So my deployment script looked something like this:
  1. Transferring published package:
    "pscp.exe" publishedPackage.zip remoteUser@remoteMachineIP:/home/publishedPackage.zip
  2. Install the package on target system and "daemonize" it:
    "plink.exe" remoteUser@remoteMachineIP -m "commands.txt"
    where "commands.txt" contains all the commands that need to be executed on the target system to install and daemonize your application.
  3. A few tweaks here and there to run my application as a daemon process. "daemonize.sh", a templated bash script, came in very handy in this process.
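For illustration, a hypothetical "commands.txt" might look like the following. Every name here (the install path /opt/myapp, the zip location, and the MyApp binary name) is an assumption, not something from my actual deployment:

```shell
# commands.txt -- executed on the target machine by plink.exe.
# All paths and names below are illustrative assumptions.

# Unpack the published package into an install directory.
mkdir -p /opt/myapp
unzip -o /home/publishedPackage.zip -d /opt/myapp

# The published entry-point binary must be executable.
chmod +x /opt/myapp/publishDirectory/MyApp

# Hand the binary to the templated daemonize.sh script so it runs
# in the background, detached from the SSH session.
./daemonize.sh /opt/myapp/publishDirectory/MyApp
```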
Running your deployed application on the target machine:
After deploying the application, when I ran it for the first time I had a disappointing "oops" moment. I received an ugly error saying:
Failed to load /opt/dotnet/shared/Microsoft.NETCore.App/1.0.1/libcoreclr.so, error: libunwind.so.8: cannot open shared object file: No such file or directory
Failed to bind to CoreCLR at '/opt/dotnet/shared/Microsoft.NETCore.App/1.0.1/libcoreclr.so
After a little research and googling around this error, I found that I needed to install "libunwind" on the target machine.
"libunwind" is a portable C programming interface for determining the call chain of a program; CoreCLR depends on it on Linux for stack unwinding (for example, during exception handling).
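On Ubuntu 16.04, libunwind is available from the standard repositories (the package is named libunwind8), so the fix is a one-time install on the target machine:

```shell
# Install libunwind, which CoreCLR needs for stack unwinding on Linux.
sudo apt-get update
sudo apt-get install -y libunwind8
```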
And I was done! My application worked like a charm!

I faced quite a few limitations while developing my application (and I am sure there are hundreds more out there, since .NET Core is not yet that mature as a platform):

  • The Reflection API has changed a lot.
  • There is no support for the ADO.NET disconnected architecture: no DataSets and no DataAdapters!
That's all folks! Hope this helps some of you who have just started playing with .NET Core.

