Clustering Octopus Tentacles

Problem: you have a Windows cluster and you want to deploy to it with Octopus Deploy, which means you need to run an Octopus Tentacle on the cluster nodes. Unfortunately, an Octopus Tentacle isn't cluster-aware. In this post I'm going to explain how to work around that.

Clustering the Octopus Tentacle

Assuming you have a two-node cluster, install the Octopus Tentacle on both nodes just as you would on a non-clustered machine. Then start the Failover Cluster Manager and go to 'Roles':

Failover Cluster Manager - Configure Roles I

(Sensitive information like the owner node and IP address has been removed from this image and the following ones)

In the High Availability Wizard, on the Select Role page, select 'Generic Service':

HighAvailabilityWizard - Select Role

On the Select Service page, select 'OctopusDeploy Tentacle':

HighAvailabilityWizard - Select Service

On the Client Access Point page, enter a name and an IP address (if you're not a domain/network admin, you'll probably need some help from the person who is):

HighAvailabilityWizard - CAP

Skip the 'Select Storage' page; you don't need shared storage for this. On the Replicate Registry Settings page, add the registry key 'Software\Octopus':

HighAvailabilityWizard - Replicate Registry Settings

Confirm your settings, let the wizard configure high availability, and your Octopus Tentacle is clustered!
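If you prefer scripting over clicking through the wizard, the same role can be created with the FailoverClusters PowerShell module. This is a sketch only: the role name "OctoTentacle" and the static IP address are placeholder values, so substitute your own client access point name and an address from your network admin.

```powershell
# Create a Generic Service role for the tentacle, equivalent to the wizard steps above.
# Role name and IP address below are hypothetical - use your own values.
Import-Module FailoverClusters

Add-ClusterGenericServiceRole `
    -ServiceName "OctopusDeploy Tentacle" `
    -Name "OctoTentacle" `
    -StaticAddress "10.0.0.42" `
    -CheckpointKey "Software\Octopus"   # replicates this registry key between the nodes
```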

Failover Cluster Manager - After

If your newly created role doesn’t come online, it’s probably because it isn’t (properly) added as a server in the domain. If you are not the Domain Admin, talk to the person who is to fix this.

Generate a New Certificate and import it on both machines

Because the Octopus Tentacles on the two machines use different certificates (and therefore have different thumbprints), you need to create a new certificate on one of the machines and import it on both. That way, if a failover occurs, the Octopus Tentacle keeps running with the same thumbprint and the Octopus Server will recognize it as the same tentacle. Here's how to create and import a new certificate (running Octopus Tentacle 3.2.21). Open a command prompt on one of the machines, go to the Octopus Tentacle folder and stop the tentacle (Tentacle.exe service --stop). Then issue the command 'Tentacle.exe new-certificate -e MyFile.txt':

Octopus - Export Certificate

On the same machine, import the certificate by issuing the command 'Tentacle.exe import-certificate --instance "Tentacle" -f "MyFile.txt" --console':

Octopus - Import Certificate

Start the tentacle again (Tentacle.exe service --start). Copy the certificate file to the other machine, stop the tentacle there and run the same import. Test by moving your newly created failover cluster role between the nodes, and also check the Octopus Tentacle Manager on each node: are the thumbprints the same? That's it, you're all set. Now you can add the newly created role as a machine on the Octopus Server and do deployments to your cluster!
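The whole certificate procedure can be summarized as the following command sequence. The install path and the instance name "Tentacle" are my defaults and may differ on your machines, and the node name in the failover test is a placeholder:

```powershell
# On the first node (adjust path and instance name to your install):
cd "C:\Program Files\Octopus Deploy\Tentacle"
.\Tentacle.exe service --stop
.\Tentacle.exe new-certificate -e MyFile.txt
.\Tentacle.exe import-certificate --instance "Tentacle" -f "MyFile.txt" --console
.\Tentacle.exe service --start

# Copy MyFile.txt to the second node, then run the same stop/import/start there.

# Finally, test the failover by moving the clustered role between nodes:
Move-ClusterGroup -Name "OctoTentacle" -Node "NODE2"
```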

3 thoughts on “Clustering Octopus Tentacles”

  1. So, you *can* do this, and it’s true you end up with a Tentacle instance that fails over with a particular cluster resource group, and there is only one of them in existence at any one time.

    However, this precludes using the tentacle to deploy resources that actually need to go out to the cluster nodes themselves (for example custom SSRS extension DLLs, components going into the GAC, custom SSIS extensions, SSIS packages stored on the filesystem (on non-clustered disks), other services co-hosted on the SQL infrastructure, etc.). And if you don't set up your clustered tentacle to use shared storage, any installs 'to the cluster' that actually leverage the Octopus deployment directory (I'm thinking SQL agent jobs that run a PowerShell script that's stored on disk) will fail after failover. And you need to do this per cluster resource group, not per cluster. Finally, I'm not sure how any of this affects Octopus' retention policies for deployed packages.

    Alternative approaches that cater for some of this are:
    – have two types of tentacles (one slaved to the cluster resource group as above, and one each for each of the actual cluster nodes), or
    – just have two tentacles (one per cluster node), and just have a special ‘primary’ tag for one of them that resource-group targeting deployments (eg SQL database) use (this seems to be the most common setup, anecdotally), or
    – just have two tentacles (one per cluster node) and make your *deployment process* cluster aware, i.e.: make it run as a serial rolling deployment, and run idempotently if it’s already executed.
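    The idempotency guard the last option implies could be sketched along these lines in an Octopus PowerShell deployment step. The marker-file approach, the paths, and the variable names are my own illustration, not part of Piers's comment:

    ```powershell
    # Skip node-local work this package version has already done on this node.
    # $OctopusParameters is provided by Octopus in script steps; paths are examples.
    $version = $OctopusParameters["Octopus.Release.Number"]
    $marker  = "C:\Deployments\MyApp\$version.deployed"

    if (Test-Path $marker) {
        Write-Host "Version $version already deployed on $env:COMPUTERNAME - skipping."
        return
    }

    # ... actual node-local deployment work goes here ...

    New-Item -ItemType File -Path $marker -Force | Out-Null
    ```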

    After going through this a few times I came to the conclusion that the latter was actually the better option.

    (A new option now is presumably a central-server package step, but I know very little about these)

    1. You are right, I only focused on one tentacle to be able to fail over with one particular resource group (a SQL Server instance in this case). Thanks for the additional thoughts Piers!

  2. Chris Mullins kindly let me know via email that it’s also possible to add the Octopus tentacle as a resource in a SQL Server Cluster Role:

    Hello Devops DBA,

    I just found your page on configuring Octopus on a SQL cluster. I wish I had found this a year ago; I basically had to piece this together from an Octopus blog that wasn't very clear. We have been running it the way you documented for a year now, but it never worked exactly right for us, so I was digging a little deeper since we are building new environments. The problem we had with this install is that sometimes SQL would be on a different node than the Octopus tentacle, and then we couldn't deploy because the tentacle ran as Local System and did not have rights on the other node.

    To fix this, we finally found where you can just add the resource under SQL Server (right-click the SQL Server cluster resource -> Add Resource). Now, when SQL fails over it takes Octopus with it and it works perfectly.

    Just thought I would share if you are interested.
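    Chris's steps map roughly to the following PowerShell. The cluster group and resource names here are examples (your SQL role will likely have a different name):

    ```powershell
    # Add the tentacle service as a resource inside the existing SQL Server role,
    # so it fails over together with SQL. Group/resource names are examples.
    Add-ClusterResource -Name "Octopus Tentacle" `
        -ResourceType "Generic Service" `
        -Group "SQL Server (MSSQLSERVER)"

    # Point the new resource at the tentacle's Windows service:
    Get-ClusterResource "Octopus Tentacle" |
        Set-ClusterParameter -Name ServiceName -Value "OctopusDeploy Tentacle"

    # Bring it online with the rest of the role:
    Start-ClusterResource "Octopus Tentacle"
    ```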


    Thanks Chris!
