Getting Started
Prerequisites
This service needs CMake and a C++ compiler. To facilitate the build process a Docker image with all the requirements is provided; in that case you only need to have Docker, docker-compose and an-ci installed.
First of all you need to build the Docker image:
cd dockerfiles/cpp-builder/
docker build -t mhconfig-builder:0.3 .
After that go to the root path, prepare the third parties and build the server:
cd ../..
git submodule update --init
an-ci third_party
an-ci build
You can find the executable in ./build/mhconfig.
Execution
It's possible to run the program in two modes: server and local.
Server
The program creates a gRPC server that takes care of calculating and returning the requested configuration.
./mhconfig server <daemon config path> <gRPC listen address> <prometheus listen address> <num grpc threads> <num workers>
For example
SPDLOG_LEVEL=debug ./mhconfig server daemon_config 0.0.0.0:2222 0.0.0.0:1111 13 13
To test it you can implement a client using the protobuf file ./src/mhconfig/proto/mhconfig.proto or use a tool like grpcurl or ghz.
PS: the logger configuration format can be obtained from spdlog.
PS2: for the moment it isn't possible to reload the configuration of the daemon.
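As a minimal sketch of such a client in Python (the generated module names, the MHConfig service name and the Label field names are assumptions, not taken from the proto file; check mhconfig.proto for the real identifiers):
# Generate the stubs first, for example:
#   python -m grpc_tools.protoc -I src/mhconfig/proto --python_out=. --grpc_python_out=. mhconfig.proto
import grpc
import mhconfig_pb2        # assumed name of the generated messages module
import mhconfig_pb2_grpc   # assumed name of the generated stub module

channel = grpc.insecure_channel("127.0.0.1:2222")
stub = mhconfig_pb2_grpc.MHConfigStub(channel)  # assumed service name

request = mhconfig_pb2.GetRequest(
    root_path="/mnt/mhconfig/config",
    labels=[mhconfig_pb2.Label(key="env", value="dev")],  # assumed Label field names
    version=0,  # zero means the latest version
    document="database",
)
# Depending on the ACL configuration you may also need to send the auth token as call metadata.
response = stub.Get(request)
print(response.namespace_id, response.version, response.checksum.hex())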
Local
The program calculates the overrides passed through the standard input and returns them as YAML documents on the standard output, separating the argument blocks by spaces.
The arguments it reads are:
- The first line is the root path of the configuration as an absolute path.
- The second line is the document name of the files used to calculate the overrides.
- The rest of the lines are the labels to use for the overrides, with the format <key>/<value>.
For example
SPDLOG_LEVEL=debug ./mhconfig local
Where, with the input
/mnt/mhconfig/config
database
app/calendar
env/dev
/mnt/mhconfig/config
this_doc_dont_exists
app/calendar
env/dev
It prints
---
port: 4322
database: "test_calendar"
host: "127.0.0.1"
table: "days"
...
---
!undefined ~
...
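For convenience, the same input can be fed from a shell heredoc (here only the first block of the example above):
./mhconfig local <<'EOF'
/mnt/mhconfig/config
database
app/calendar
env/dev
EOF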
Configuration
The configuration uses YAML files and the mapping between files and configuration is simple: the file name is the first level of the configuration and the file content is appended to this first level.
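For instance, a file database.yaml with the content below is exposed under the database first level, so its values are reachable with paths like database/host (the values are the ones used in the !format example later):
host: 127.0.0.1
port: 2332
database: test_me
table: items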
Labels
The labels allow you to define specific characteristics of the configuration to be created, for example the type of environment, the name of the service, the type of secrets, etc.
The algorithm used to calculate the order of the configurations is as follows.
- All stored configurations that are a subset of the requested labels are obtained.
- Those configurations are first ordered by the number of labels that form them, from less to more specialization.
- For those configurations with the same number of labels, an ordered list with the weights of the label keys is created and the configurations are ordered with respect to this list.
For this it is necessary to have a configuration with the labels' metadata, such as their weight; this content has to be stored in the mhconfig.yaml document at the root of the configuration. This file has the following structure:
labels_metadata:
  low:
    weight: 1
  high:
    weight: 2
The labels are represented in a hierarchical way in the directory tree; the label with key env and value dev can be represented in two ways, with the folders env/dev or with the folder env=dev. You can find some examples in the examples folder.
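For instance, a minimal layout could look like this (the file names are illustrative):
config/
  mhconfig.yaml               # labels metadata (weights)
  database.yaml               # base configuration of the database document
  env/dev/database.yaml       # override applied when the env/dev label is requested
  app=calendar/database.yaml  # override applied when the app/calendar label is requested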
Tags
Some custom tags are allowed to facilitate the configuration administration.
- !format allows formatting a string with some parameters.
- !ref inserts the configuration of another file if no circular dependency exists.
- !sref inserts a scalar or a null element from the same configuration file.
- !delete removes the previous element with that path.
- !override forces overriding an element instead of merging it.
- !merge merges a list of elements in order.
- !!str, !!binary, !!int, !!float and !!bool define the type of a scalar.
Format
The !format tag allows creating a string from some named variables; it works by inserting the scalar at the path enclosed in curly braces. The syntax is the following:
!format "{database/host}:{database/port} (database: {database/database}, table: {database/table})"
So if you have a document named database with the content
host: 127.0.0.1
port: 2332
database: test_me
table: items
It creates the scalar 127.0.0.1:2332 (database: test_me, table: items).
Reference and scalar reference
The !ref tag allows using configuration provided in another configuration file as long as no circular dependencies exist; it's also possible to override that configuration in a later override. Since in some cases it's necessary to reuse an element provided in the same configuration document, the !sref tag allows doing that, but only if the referenced element is a scalar or null.
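A small sketch, using the document names from the debug example later in this document:
# test.yaml
hola:
  perro: o_no
aaaaa: !ref [mio]          # inserts the whole mio document under aaaaa
test: !sref [hola, perro]  # copies the scalar at hola/perro from this same document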
Delete
If it's necessary to remove the content of a previous override, it's possible to do that with the !delete tag.
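For example, a more specialized override could drop a key defined in a less specialized one (the key and file names are illustrative):
# env/dev/database.yaml
table: !delete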
Override
By default it's only possible to merge elements of the same kind with the overrides; to force the override it's necessary to use the !override tag.
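For example, to overwrite a scalar from a previous override with a map (the keys are illustrative; without !override this combination is not allowed):
# env/dev/database.yaml
host: !override
  primary: 127.0.0.1
  replica: 127.0.0.2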
Merge
It combines multiple elements of a sequence in order, allowing you to emulate the !override tag with a tagged element or to inject the elements of a configuration document as default elements.
emulate_override: !merge
- !delete
- !XXX ...
inject_default_elements: !merge
- !ref [parameters, default]
- to_override: 42
Special files
By default only configuration files with the .yaml extension are read, but it is possible to specify that a file is special if it starts with an underscore. These files follow the same logic as configuration files but have a prefix (the special file type) and a suffix (the file extension), and at the moment there are two kinds: binary files and text files.
For example, the file _bin.secret.p12 would be a binary file, with the document name _bin.secret.p12.
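Special files are referenced by their full document name, as in the debug example later in this document:
some_external_file: !ref [_text.name.ext]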
Restrictions
The overrides have some restrictions:
- A scalar or null element can only be overwritten with another scalar or null element; if you want to overwrite it with another kind of element you have to use the !override tag to force it.
- A map or a list can only be merged with another element of the same type; if you want to overwrite it you must use the !override tag.
- The !ref tag can only be used if the resulting configuration does not have cycles, that is, you cannot refer to a configuration that directly or indirectly uses the current configuration.
- You can only use the !sref tag for scalars in the same document.
Debug
To help debug the returned configuration and what operations have been performed, there is a logger system that allows you to obtain:
- The importance of the notification, which has the error, warn, info, debug and trace levels.
- A message with a description.
- Optionally the file with the line and column where the element is defined.
- Optionally the file with the line and column of the element affecting the previous one.
In addition to the logger system, it is possible to obtain the file, the line and the column of the returned elements if a level equal to or higher than info is requested.
It's possible to find a basic client to check the configuration here; it provides a git blame-like view and a logger to check the operations. An example of its use can be:
➜ MHCONFIG_AUTH_TOKEN=minimal python checker.py /mnt/data/mhconfig test low/priority high/priority --log-level trace
f3ce5ea3 low=priority/test.yaml:2:3 1) "hola":
79c10114 low=priority/mio.yaml:1:1 2) "aaaaa":
79c10114 low=priority/mio.yaml:1:11 3) "database": "test"
79c10114 low=priority/mio.yaml:3:7 4) "host": "127.0.0.1"
79c10114 low=priority/mio.yaml:4:7 5) "port": 1111
79c10114 low=priority/mio.yaml:2:8 6) "table": "whatever"
f3ce5ea3 low=priority/test.yaml:6:10 7) "adios": "hola"
12f8bd15 high/priority/test.yaml:5:10 8) "mundo": "tuyo"
12f8bd15 high/priority/test.yaml:7:7 9) "o2": "127.0.0.1:1111 (database: test, table: whatever)"
f3ce5ea3 low=priority/test.yaml:3:10 10) "perro": "o_no"
12f8bd15 high/priority/test.yaml:6:8 11) "seq":
12f8bd15 high/priority/test.yaml:6:9 12) - null
f3ce5ea3 low=priority/test.yaml:7:16 13) "some_binary": !!binary |-
f3ce5ea3 low=priority/test.yaml:7:16 14) TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwg
f3ce5ea3 low=priority/test.yaml:7:16 15) c2VkIGRpYW0gbm9udW15IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9y
f3ce5ea3 low=priority/test.yaml:7:16 16) ZSBtYWduYSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3MgZXQg
f3ce5ea3 low=priority/test.yaml:7:16 17) YWNjdXNhbSBldCBqdXN0byBkdW8gZG9sb3JlcyBldCBlYSByZWJ1bS4gU3RldCBjbGl0YSBrYXNk
f3ce5ea3 low=priority/test.yaml:7:16 18) IGd1YmVyZ3Jlbiwgbm8gc2VhIHRha2ltYXRhIHNhbmN0dXMgZXN0IExvcmVtIGlwc3VtIGRvbG9y
f3ce5ea3 low=priority/test.yaml:7:16 19) IHNpdCBhbWV0Lgo=
75ea914e low=priority/_text.name.ext:1:1 20) "some_external_file": "I'm a text file\n"
f3ce5ea3 low=priority/test.yaml:3:10 21) "test": "o_no"
12f8bd15 high/priority/test.yaml:3:5 22) "to_delete":
12f8bd15 high/priority/test.yaml:3:7 23) -
12f8bd15 high/priority/test.yaml:4:14 24) "adios": 2
d7c5a0dd: mhconfig.yaml:3:13: trace: Created int64 value from plain scalar
weight: 1
^
d7c5a0dd: mhconfig.yaml:3:5: trace: Created map value
weight: 1
^
d7c5a0dd: mhconfig.yaml:5:13: trace: Created int64 value from plain scalar
weight: 2
^
d7c5a0dd: mhconfig.yaml:5:5: trace: Created map value
weight: 2
^
...
f3ce5ea3: low=priority/test.yaml:2:3: debug: Adding new map key
mundo: whatever
^----|
seq: [~] # 12f8bd15: high/priority/test.yaml:6:8
f3ce5ea3: low=priority/test.yaml:2:3: debug: Adding new map key
mundo: whatever
^-|
- hola: !delete # 12f8bd15: high/priority/test.yaml:3:5
12f8bd15: high/priority/test.yaml:3:13: warn: Removing an unused deletion node
- hola: !delete
^
f3ce5ea3: low=priority/test.yaml:4:10: debug: Applied ref
aaaaa: !ref [mio]
|--------^
database: test # 79c10114: low=priority/mio.yaml:1:1
f3ce5ea3: low=priority/test.yaml:14:23: debug: Applied ref
some_external_file: !ref [_text.name.ext]
|---------------------^
I'm a text file # 75ea914e: low=priority/_text.name.ext:1:1
f3ce5ea3: low=priority/test.yaml:5:9: debug: Applied sref
test: !sref [hola, perro]
^|
perro: o_no # f3ce5ea3: low=priority/test.yaml:3:10
12f8bd15: high/priority/test.yaml:7:7: debug: Applied ref
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^
host: 127.0.0.1 # 79c10114: low=priority/mio.yaml:3:7
12f8bd15: high/priority/test.yaml:7:7: debug: Applied ref
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^
port: 1111 # 79c10114: low=priority/mio.yaml:4:7
12f8bd15: high/priority/test.yaml:7:7: debug: Applied ref
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^---|
database: test # 79c10114: low=priority/mio.yaml:1:11
12f8bd15: high/priority/test.yaml:7:7: debug: Applied ref
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^|
table: whatever # 79c10114: low=priority/mio.yaml:2:8
12f8bd15: high/priority/test.yaml:7:7: debug: Obtained template parameter
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^
host: 127.0.0.1 # 79c10114: low=priority/mio.yaml:3:7
12f8bd15: high/priority/test.yaml:7:7: debug: Obtained template parameter
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^
port: 1111 # 79c10114: low=priority/mio.yaml:4:7
12f8bd15: high/priority/test.yaml:7:7: debug: Obtained template parameter
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^---|
database: test # 79c10114: low=priority/mio.yaml:1:11
12f8bd15: high/priority/test.yaml:7:7: debug: Obtained template parameter
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^|
table: whatever # 79c10114: low=priority/mio.yaml:2:8
12f8bd15: high/priority/test.yaml:7:7: debug: Formated string
o2: !format "{mio/host}:{mio/port} (database: {mio/database}, table: {mio/table})"
^
Access Control Lists
It is possible to configure the server to control access to sensitive resources or procedures; the permission logic can be divided into the following parts: tokens and policies.
Tokens
A token allows authenticating an entity and configuring some authentication parameters, although at the moment it only allows you to define the expiration date of the token through a Unix timestamp. The tokens are defined in the tokens.yaml file at the root of the configuration folder; this file has the following structure:
tokens:
- value: root
  expire_at: 0
  labels:
    kind: root
- value: dev
  expire_at: 0
  labels:
    kind: developer
    env: dev
- value: calendar-prod
  expire_at: 0
  labels:
    kind: app
    env: prod
    app: calendar
Each token has an associated expiration date and the labels to be used in the configuration folder to obtain the configuration of the request.
Policies
A policy allows you to restrict access to resources based on the capabilities, the root path and the labels, and it has the following structure:
capabilities: [GET, WATCH, TRACE]
root_paths:
- path: /mnt/data/mhconfig/+
  capabilities: [GET, WATCH, TRACE]
labels:
- key: test
  value: me
  capabilities: [GET, WATCH]
- key: test
  capabilities: [GET, WATCH]
- capabilities: [GET, WATCH, TRACE]
Where the possible capabilities are: GET, WATCH, TRACE, UPDATE (it only checks the root_path) and RUN_GC (it only checks the entity capability).
A path could have two special characters:
- The character + to denote any value within a single path segment.
- The character * to denote any number of path segments; this can only be used at the end of the path (see the example below).
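For example (illustrative paths), /mnt/data/+ matches /mnt/data/mhconfig but not /mnt/data/mhconfig/config, while /mnt/data/* matches both.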
A label rule that doesn't have a key affects all the labels, one with a key but without a value affects all the labels with that key, and one with both a key and a value affects only that value.
PS: Keep in mind that because of the list merge logic the lists are read in reverse order.
API
A gRPC API is used to communicate with the daemon, where most of the methods that make use of a configuration path return a namespace identifier. This value is a 64-bit unsigned integer generated randomly and its purpose is to tell the client if the previously requested information is still valid.
These kinds of responses also return a version number that allows reproducible results for some time in case the configuration has been updated between two calls. Keep in mind that receiving a different value does not mean that the requested configuration has changed, but rather that some configuration has been updated.
If you want to check if the content is different you can use the checksum field (it's a SHA-256 of the configuration content); it can be useful to deduplicate the documents or to decide whether to launch the watcher callbacks after receiving a different namespace identifier.
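A sketch of how a client might use these fields (the cache helper is hypothetical):
def handle_response(cache, response):
    # A new namespace id means the previously fetched data may no longer be valid.
    if response.namespace_id != cache.namespace_id:
        cache.reset(namespace_id=response.namespace_id)
    # A changed version does not imply the document changed; compare the checksum.
    if response.checksum != cache.checksum:
        cache.store(response.version, response.checksum, response.elements)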
Get
This method allows retrieving a configuration; the protobuf definition is:
message GetRequest {
  string root_path = 1;
  repeated Label labels = 2;
  // the version of the configuration to retrieve; it retrieves the latest
  // version with a zero value and the proper version otherwise.
  uint32 version = 3;
  string document = 4;
  LogLevel log_level = 5;
}
message GetResponse {
  uint64 namespace_id = 1;
  // the returned version; it's the latest version if the requested version was zero.
  uint32 version = 2;
  repeated Element elements = 3;
  // A checksum of the file content (a sha256 checksum in the current implementation)
  bytes checksum = 4;
  repeated Log logs = 5;
  // The origin files of the composed config & logs
  repeated Source sources = 6;
}
Update
This method allows notifying that a configuration file has been changed. The updates comply with three properties: atomicity, because all the configuration changes become visible at the same time; consistency, because if some configuration file is badly formed none of them will be updated; and isolation, because it's possible to retrieve the configurations as in a snapshot (at least for some time).
message UpdateRequest {
  // the root path of the namespace that has been changed.
  string root_path = 1;
  // the list of paths to check for changes; this applies to creations,
  // modifications and deletions.
  repeated string relative_paths = 2;
  // reload all the config files of the namespace; if this flag is
  // on, the relative_paths are ignored
  bool reload = 3;
}
message UpdateResponse {
  uint64 namespace_id = 1;
  enum Status {
    OK = 0;
    ERROR = 1;
  }
  Status status = 2;
  // the new version.
  uint32 version = 3;
}
Watch
To avoid sporadic requests to the service to detect configuration changes it's possible to watch some configuration documents; in this case a stream is created and the service will send the new configuration when some change takes place.
message WatchRequest {
  // the uid to assign to the watcher (this is assigned by the client).
  uint32 uid = 1;
  // set if the purpose of the call is to remove the watcher with the provided uid.
  bool remove = 2;
  // the root path of the namespace to obtain the configuration.
  string root_path = 3;
  // the list of labels to watch.
  repeated Label labels = 4;
  // document to watch.
  string document = 5;
  LogLevel log_level = 6;
}
message WatchResponse {
  enum Status {
    OK = 0;
    ERROR = 1;
    UID_IN_USE = 2;
    // the provided uid doesn't exist in the system.
    UNKNOWN_UID = 3;
    // the provided uid has been removed; keep in mind that it's possible
    // to have race conditions in this method if:
    // - a file has been changed and processing of the watcher has begun
    // - a watcher has been requested to be removed
    // - it's returned that the watcher has been removed
    // - a configuration with the removed watcher identifier is returned
    // PS: the server could remove the watcher if the namespace has been
    // deleted and it's the client's responsibility to re-register if it
    // wishes to do so
    REMOVED = 4;
    // the provided token doesn't have the permissions to watch the asked document.
    PERMISSION_DENIED = 5;
    // some of the provided arguments are invalid.
    INVALID_ARGUMENT = 6;
  }
  Status status = 1;
  uint32 uid = 2;
  uint64 namespace_id = 3;
  // the returned version; it's the latest version if the requested version was zero.
  uint32 version = 4;
  repeated Element elements = 5;
  // A checksum of the file content (a sha256 checksum in the current implementation)
  bytes checksum = 6;
  repeated Log logs = 7;
  // The origin files of the composed config & logs
  repeated Source sources = 8;
}
Trace
To know if some service is using some document, it's possible to add traces that will be activated when certain conditions are met.
// The trace request parameters can be seen as a python conditional like
//   (labels is empty or labels is a subset of A)
//   and (document is empty or document == B)
// where
//   A is a label got/watched
//   B is a document got/watched
message TraceRequest {
  // the root path of the namespace where the configuration is traced.
  string root_path = 1;
  // the list of labels to trace; if some document with one of them
  // is asked for, it will be returned
  repeated Label labels = 2;
  // the document to trace; in case it is not defined it will not be used
  // in the query
  string document = 3;
}
message TraceResponse {
  enum Status {
    RETURNED_ELEMENTS = 0;
    ERROR = 1;
    ADDED_WATCHER = 2;
    EXISTING_WATCHER = 3;
    REMOVED_WATCHER = 4;
  }
  Status status = 1;
  uint64 namespace_id = 2;
  uint32 version = 3;
  repeated Label labels = 4;
  string document = 5;
}
Run garbage collector
This is an administration method which allows finer control of the garbage collector, although it is not necessary for normal use.
message RunGCRequest {
  enum Type {
    CACHE_GENERATION_0 = 0;
    CACHE_GENERATION_1 = 1;
    CACHE_GENERATION_2 = 2;
    DEAD_POINTERS = 3;
    NAMESPACES = 4;
    VERSIONS = 5;
  }
  Type type = 1;
  uint32 max_live_in_seconds = 2;
}
message RunGCResponse {
}
Metrics
The server exposes some metrics through a Prometheus client; some of these metrics are:
- The 0.5, 0.9 and 0.99 quantiles of the internal threads (the stats are sampled).
- Some internal stats.
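For instance, using the Prometheus listen address from the server example above and assuming the usual /metrics path (the path is an assumption, check your deployment):
curl http://127.0.0.1:1111/metrics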