Common Tasks

Creating a new instance

The most basic instance definition contains three top-level properties: a name, an organization ID, and a "plan", which defines the instance's size. For more information about how plans work, please read the documentation here. Once you have chosen a plan, you can write the instance definition in a JSON file. The most basic structure looks like this:

{
  "org": 123,
  "name": "mytestinstance",
  "plan": {
    "Name": "Small",
    "ScaleFactor": 1
  }
}

If you're unsure which ScaleFactor to choose, it is generally a good idea to start with 1 and increase it later if necessary.

The following are some more questions to consider when creating an instance:

Storage Schema

What are the retention requirements for the new instance? The schema is defined as described in the Metrictank examples.

A basic schema that keeps all data at 1s resolution for 8 days, at 1min resolution for 60 days, and at 30min resolution for 2 years looks like this:

[default]
pattern = .*
retentions = 1s:8d:1h:2,1m:60d:6h:2,30m:2y:6h:2
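Assuming Metrictank's usual retention syntax of interval:ttl[:chunkspan[:numchunks[:ready]]], the first rule above breaks down roughly as follows:

```ini
# 1s:8d:1h:2 — read as interval:ttl:chunkspan:numchunks:
#   1s  -> raw points are stored at 1-second resolution
#   8d  -> points at this resolution are kept for 8 days
#   1h  -> data is persisted in 1-hour chunks
#   2   -> 2 chunks per series are kept in memory
```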

The schema definition needs to be added to the instance configuration in JSON format. The previous example configuration with the schema added would look like this:

{
  "org": 123,
  "name": "mytestinstance",
  "plan": {
    "Name": "Small",
    "ScaleFactor": 1
  },
  "storage":{
    "schemas":"[default]\npattern = .*\nretentions = 1s:8d:1h:2,1m:60d:6h:2,30m:2y:6h:2\n"
  }
}
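Hand-escaping the newlines in the schemas string is error-prone. One way to generate the escaped string from a plain schema file is with jq (assuming jq is available; the filename here is just an example):

```shell
# Write the plain schema file, then emit it as a JSON string literal
# suitable for pasting into the "schemas" property.
printf '[default]\npattern = .*\nretentions = 1s:8d:1h:2,1m:60d:6h:2,30m:2y:6h:2\n' \
  > storage-schemas.conf
jq -Rs . storage-schemas.conf
```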

Other Metrictank configuration files

In a similar fashion to the storage schemas configuration, it is also possible to configure Metrictank's storage aggregations and index rules files. Their format and what they do are documented here: storage aggregations / index rules.

These are the default settings for these two files:

Storage Aggregations

[default]
pattern = .*
xFilesFactor = 0.1
aggregationMethod = avg,sum

Index Rules

[default]
pattern = 
max-stale = 768h

Both of these files also need to be added to the instance configuration JSON file as string properties, so it will look like this:

{
  "org": 123,
  "name": "mytestinstance",
  "plan": {
    "Name": "Small",
    "ScaleFactor": 1
  },
  "storage": {
    "schemas": "[default]\npattern = .*\nretentions = 1s:8d:1h:2,1m:60d:6h:2,30m:2y:6h:2\n",
    "aggregations": "[default]\npattern = .*\nxFilesFactor = 0.1\naggregationMethod = avg,sum\n"
  },
  "indexRules": "[default]\npattern =\nmax-stale = 768h\n"
}
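Instead of escaping each string by hand, the whole definition can be assembled from plain config files. A sketch using jq (assuming jq 1.6+ for --rawfile; the config file names are examples):

```shell
# Example config files, with the contents shown above
printf '[default]\npattern = .*\nretentions = 1s:8d:1h:2,1m:60d:6h:2,30m:2y:6h:2\n' > storage-schemas.conf
printf '[default]\npattern = .*\nxFilesFactor = 0.1\naggregationMethod = avg,sum\n' > storage-aggregation.conf
printf '[default]\npattern =\nmax-stale = 768h\n' > index-rules.conf

# Assemble the instance definition without hand-escaping any newlines
jq -n \
  --rawfile schemas storage-schemas.conf \
  --rawfile aggregations storage-aggregation.conf \
  --rawfile indexrules index-rules.conf \
  '{
    org: 123,
    name: "mytestinstance",
    plan: {Name: "Small", ScaleFactor: 1},
    storage: {schemas: $schemas, aggregations: $aggregations},
    indexRules: $indexrules
  }' > my-instance.json
```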

Defining the authentication keys

Similar to the files described above, the instance definition also includes the file tsdb-auth.ini, which Tsdb-Gw uses to read the authentication keys and associate them with an organization ID. If this file is not provided at instance creation, HM-API generates default values for it with random API keys. The random API keys can be obtained by looking at the instance config after the instance has been created; alternatively, they can be pre-defined before the instance gets created.

This is a basic example tsdb-auth.ini:

[TheAdminKey]
isAdmin = true
orgId = 1

[AnotherKey]
isAdmin = false
orgId = 1

Unlike the other files described above, tsdb-auth.ini does not get stored as a string property in the JSON; it has a predefined struct in which it gets defined. The tsdb-auth.ini file content is then generated from that struct and mounted into the Tsdb-Gw pods. Our basic example instance definition with the above tsdb-auth data added would look like this:

{
  "org": 123,
  "name": "mytestinstance",
  "plan": {
    "Name": "Small",
    "ScaleFactor": 1
  },
  "storage": {
    "schemas": "[default]\npattern = .*\nretentions = 1s:8d:1h:2,1m:60d:6h:2,30m:2y:6h:2\n",
    "aggregations": "[default]\npattern = .*\nxFilesFactor = 0.1\naggregationMethod = avg,sum\n"
  },
  "indexRules": "[default]\npattern =\nmax-stale = 768h\n",
  "tsdbAuth": {
    "TheAdminKey": {
      "isAdmin": "true",
      "orgId": "1"
    },
    "AnotherKey": {
      "isAdmin": "false",
      "orgId": "1"
    }
  }
}
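If you pre-define keys rather than let HM-API generate random ones, they should be hard to guess, since the section names (TheAdminKey, AnotherKey) appear to act as the API keys themselves. A random token can be generated locally, for example with openssl (just one way to produce a random string; any sufficiently random value works):

```shell
# Generate a random 32-character hex token to use as a pre-defined API key
openssl rand -hex 16
```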

Creating the instance according to the JSON definition

Once the instance definition is complete, the JSON file's content needs to be posted to HM-API. The API endpoint to do so is documented here.

Assuming your instance is defined in the file my-instance.json, then the call to create it would look like this:

~$ curl \
  -u user:pass \
  -H 'Content-Type: application/json' \
  -d@my-instance.json \
  http://localhost:8080/instance
"hm-deploy-123-mytestinstance-1"

Update global defaults and roll them out

All global defaults are stored in the HM-API configmap; by default it is named hm-api-config, unless specified otherwise with the -config-map parameter. For details about this config map, please refer to the HM-API ConfigMap documentation. A common use case for these global defaults is updating a service version across all instances.

Updating Metrictank to a newer version across all instances

To update the global defaults, use the kubectl command to open them in an editor. Assuming the namespace is named metrictank and the config map is named hm-api-config this would look like:

kubectl -n metrictank edit configmap hm-api-config

Inside the configmap look for METRICTANK_VERSION under the top level key config-defaults.json and update it to the version you would like to deploy. Note that this global default will only be applied to instances which don’t override it in their instance config. If you would like to read more about instance configs, please refer to the Instance ConfigMap docs.
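As an abbreviated sketch, the relevant part of the configmap might look roughly like this (key and field names as described above; a real configmap will contain many more defaults):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hm-api-config
  namespace: metrictank
data:
  config-defaults.json: |
    {
      "METRICTANK_VERSION": "v0.11.0-152-ga7084bf"
    }
```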

Once the global default is updated, the existing instances are out of sync with the configuration, because they are not updated automatically. To get a list of all instances that are out of sync, use the /instance/statuslist endpoint. It is documented here.

~$ curl \
  -u user:pass \
  http://localhost:8080/instance/statuslist
X   instance1
X   anotherinstance

The X means the instance is out of date. If one of the listed instances showed an OK, that would mean it is in sync with its configuration: it probably has an instance-specific override for the Metrictank version, so the change of the global default doesn't affect it, or it already runs the newer Metrictank version due to a manual edit of the Kubernetes deployments. Each of the instances listed with an X now needs to be updated to bring it in sync with the stored configuration. Before running the update, it's usually a good idea to look at the exact diff that will be applied to the Kubernetes resources. To do so, use the diff endpoint, which is documented here.

~$ curl \
  -u user:pass \
  http://localhost:8080/instance/1/instance1/diff
--- deployments: mt-write00-1-small-instance1, mt-read00-1-small-instance1-a ---
--- Old
+++ New
       "name": "metrictank",
-      "image": "us.gcr.io/metrictank-gcr/metrictank:v0.11.0",
+      "image": "us.gcr.io/metrictank-gcr/metrictank:v0.11.0-152-ga7084bf",
       "ports": [

When you are sure that you want this diff to be applied to the instance, just call the update endpoint on it to kick off the rollout. This endpoint is documented here.

~$ curl \
  -X PUT \
  -u user:pass \
  http://localhost:8080/instance/1/testinstance1/update
"hm-deploy-1-testinstance1-4"

The printed string hm-deploy-1-testinstance1-4 is the identifier of a normal deployment job, which will perform the rollout while respecting the defined rollout concurrency, as documented here.
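When many instances are out of sync, the steps above can be scripted. A rough sketch, assuming the statuslist output format shown above and assuming all listed instances belong to org 1 (statuslist only prints names, so the org id is an assumption here):

```shell
# Update every instance that the statuslist marks with an X.
# NOTE: the org id 1 in the URL is an assumption; adjust it per instance.
curl -s -u user:pass http://localhost:8080/instance/statuslist \
  | awk '$1 == "X" {print $2}' \
  | while read -r name; do
      echo "updating $name"
      curl -s -X PUT -u user:pass \
        "http://localhost:8080/instance/1/${name}/update"
    done
```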

Deleting an instance

Deleting an instance is relatively simple. There is one choice to make: whether the data in Kafka and Cassandra should be deleted as well. The delete endpoint accepts a parameter called keepData, which controls this; by default the data is deleted. The delete endpoint is documented here.

This is an example of how to delete an instance while keeping its data in Kafka and Cassandra:

~$ curl \
  -X DELETE \
  -u user:pass \
  -H 'Content-Type: application/json' \
  -d '{"name": "testinstance1", "org": 1, "keepData": true}' \
  http://localhost:8080/instance
"hm-delete-1-testinstance1"

If a new instance with the same specifications is created later, it will still be able to access the data in Cassandra. But remember: even when the data isn't deleted from Kafka, data that doesn't get consumed and persisted by Metrictank will eventually expire and be purged from Kafka.