Worker-aware targets
This tutorial explores worker-aware target configuration and applying worker filters.
Background
In multi-datacenter and multi-cloud operating models, dividing controllers, workers, and targets into appropriate regions or networks is often desirable to reduce latency or comply with security standards.
Worker-aware targets allow you to specify a boolean-expression filter against worker tags to control which workers are allowed to handle a given target's session. This pattern effectively "ties" a worker to a given target. A common example is keeping a single set of controllers in one region, and then placing workers in the many regions where the targets they proxy live.
This tutorial covers the process of defining worker tags, and applying target filters to force Boundary to only connect with workers available on the target's network.
Tutorial Contents
- Get setup
- Define worker tags
- Restart the workers
- Define worker filters
- Configure updated target filters
- Verify target availability
Prerequisites
- Docker is installed
- Docker Compose is installed

Tip
Docker Desktop 20.10 and above includes the Docker Compose binary and does not require a separate installation.

- A Boundary binary greater than 0.1.5 in your PATH. This tutorial uses the 0.7.5 version of Boundary.
- Terraform 0.13.0 or greater in your PATH
- A psql binary greater than 13.0 in your PATH
- A redis-cli binary greater than 6.0 in your PATH
- A mysql binary greater than 8.0 in your PATH
In addition to Docker, Terraform, and the Boundary binary, it is important that the psql, redis-cli, and mysql executables are available in your PATH to complete this tutorial. Ensure they are properly installed before attempting to connect to the database targets provided with the Docker lab environment.
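For example, a quick way to confirm that these client tools are reachable is to print their versions; the exact output depends on your installation:

$ psql --version
$ redis-cli --version
$ mysql --version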
Get setup
The demo environment provided for this tutorial includes a Docker Compose cluster that deploys these containers:
- A Boundary 0.7.5 controller server
- A Postgres database
- 2 worker instances (worker1, worker2)
- 3 database targets (postgres, mysql, redis)
The Terraform Boundary provider is also used in this tutorial to easily provision resources using Docker, so the terraform binary must be available in your PATH when deploying the demo environment.
To learn more about the various Boundary components, refer back to the Start a Development Environment tutorial.
Deploy the lab environment
The lab environment can be downloaded or cloned from the following GitHub repository:
In your terminal, clone the repository to get the example files locally:
$ git clone git@github.com:hashicorp-education/learn-boundary-target-aware-workers.git
Move into the learn-boundary-target-aware-workers folder.

$ cd learn-boundary-target-aware-workers
Ensure that you are in the correct directory by listing its contents.
$ ls -R1
README.md
compose
run
terraform

./compose:
controller.hcl
docker-compose.yml
worker1.hcl
worker2.hcl

./terraform:
main.tf
outputs.tf
versions.tf
The repository contains the following files:
- run: A script used to deploy and tear down the Docker Compose configuration.
- compose/docker-compose.yml: The Docker Compose configuration file describing how to provision and network the Boundary cluster and targets.
- compose/controller.hcl: The controller configuration file.
- compose/worker1.hcl: The worker1 configuration file.
- compose/worker2.hcl: The worker2 configuration file.
- terraform/main.tf: The Terraform provisioning instructions using the Boundary provider.
- terraform/outputs.tf: The Terraform outputs file for printing user connection details.
This tutorial makes it easy to launch the test environment with the run script.

$ ./run all
~/learn-boundary-target-aware-workers/compose ~/learn-boundary-target-aware-workers
Creating boundary_postgres_1   ... done
Creating boundary_mysql_1      ... done
Creating boundary_db_1         ... done
Creating boundary_redis_1      ... done
Creating boundary_db-init_1    ... done
Creating boundary_controller_1 ... done
Creating boundary_worker1_1    ... done
Creating boundary_worker2_1    ... done
~/Projects/hashicorp/tutorial-repos/learn-boundary-target-aware-workers-test
~/Projects/hashicorp/tutorial-repos/learn-boundary-target-aware-workers-test/terraform ~/Projects/hashicorp/tutorial-repos/learn-boundary-target-aware-workers-test

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/boundary versions matching "1.0.5"...
- Installing hashicorp/boundary v1.0.5...
- Installed hashicorp/boundary v1.0.5 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

...... truncated output ......

Plan: 24 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + username = {
      + user1 = {
          + auth_method_id = (known after apply)
          + description    = "User account for user1"
          + id             = (known after apply)
          + login_name     = "user1"
          + name           = "user1"
          + password       = "password"
          + type           = "password"
        }
    }
boundary_scope.global: Creating...
boundary_scope.global: Creation complete after 0s [id=global]
boundary_scope.org: Creating...
boundary_role.global_anon_listing: Creating...
boundary_scope.org: Creation complete after 0s [id=o_zYT1ci74Xp]
boundary_auth_method.password: Creating...
boundary_scope.project: Creating...
boundary_role.org_anon_listing: Creating...
boundary_scope.project: Creation complete after 1s [id=p_uMmQ2Nmyzr]
boundary_host_catalog.databases: Creating...
boundary_auth_method.password: Creation complete after 1s [id=ampw_rZ6z1yjsNQ]
boundary_account.user["user1"]: Creating...
boundary_host_catalog.databases: Creation complete after 0s [id=hcst_Iws4PPJ0Cd]
boundary_host.redis: Creating...
boundary_host.localhost: Creating...
boundary_host.postgres: Creating...
boundary_host.mysql: Creating...
boundary_account.user["user1"]: Creation complete after 1s [id=acctpw_z3wsUqIxl0]
boundary_user.user["user1"]: Creating...
boundary_role.global_anon_listing: Creation complete after 2s [id=r_jf0aBQrlq9]
boundary_host.redis: Creation complete after 1s [id=hst_AfZWO0NmRH]
boundary_host_set.redis: Creating...
boundary_host.localhost: Creation complete after 1s [id=hst_nhrRtmEj9I]
boundary_host_set.local: Creating...
boundary_host.mysql: Creation complete after 2s [id=hst_Q5xGrzzScq]
boundary_host_set.mysql: Creating...
boundary_host.postgres: Creation complete after 2s [id=hst_Ldeq2F7kv8]
boundary_host_set.postgres: Creating...
boundary_user.user["user1"]: Creation complete after 2s [id=u_pOKwHpmtcU]
boundary_role.org_admin: Creating...
boundary_role.proj_admin: Creating...
boundary_host_set.redis: Creation complete after 3s [id=hsst_S7l0x5zjuz]
boundary_target.redis: Creating...
boundary_host_set.local: Creation complete after 3s [id=hsst_eE4utKkMdf]
boundary_target.ssh: Creating...
boundary_target.db: Creating...
boundary_host_set.mysql: Creation complete after 2s [id=hsst_pM0jH8cuYH]
boundary_target.mysql: Creating...
boundary_host_set.postgres: Creation complete after 2s [id=hsst_Tw28V0csEe]
boundary_target.postgres: Creating...
boundary_role.org_anon_listing: Creation complete after 6s [id=r_KSvb9NCBkv]
boundary_target.redis: Creation complete after 2s [id=ttcp_SiRtRammJ5]
boundary_target.ssh: Creation complete after 3s [id=ttcp_MIUmI7qIy1]
boundary_target.db: Creation complete after 3s [id=ttcp_IuXDHJkWm2]
boundary_target.mysql: Creation complete after 3s [id=ttcp_zZ8NOru2I7]
boundary_target.postgres: Creation complete after 3s [id=ttcp_DsWKi9rV6V]
boundary_role.org_admin: Creation complete after 5s [id=r_IlaiZHKfSy]
boundary_role.proj_admin: Creation complete after 5s [id=r_H2qppOfYG9]
╷
│ Warning: Argument is deprecated
│
│   with boundary_account.user,
│   on main.tf line 69, in resource "boundary_account" "user":
│   69:   login_name = lower(each.key)
│
│ Will be removed in favor of using attributes parameter
│
│ (and 20 more similar warnings elsewhere)
╵

Apply complete! Resources: 24 added, 0 changed, 0 destroyed.

Outputs:

username = {
  "user1" = {
    "auth_method_id" = "ampw_woTDKKJXoq"
    "description" = "User account for user1"
    "id" = "acctpw_VaeyCEvMMY"
    "login_name" = "user1"
    "name" = "user1"
    "password" = "password"
    "type" = "password"
  }
}
Any resource deprecation warnings in the output can safely be ignored.
The user details are printed in the shell output, and can also be viewed by inspecting the terraform/terraform.tfstate file. You will need the user1 auth_method_id to authenticate via the CLI and establish sessions later on.
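If you need these values again later, one option is to query the Terraform outputs directly from the terraform directory, since the user details are exposed through the username output shown above (the output values depend on your deployment):

$ cd terraform
$ terraform output username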
You can tear down the environment at any time by executing ./run cleanup.

To verify that the environment deployed correctly, print the running Docker containers and notice the ones named with the prefix "boundary".
$ docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}"
CONTAINER ID   NAMES                   IMAGE
31199b571926   boundary_worker1_1      hashicorp/boundary:0.7.5
080bed30805a   boundary_worker2_1      hashicorp/boundary:0.7.5
a624347ebace   boundary_controller_1   hashicorp/boundary:0.7.5
7027eaf0cca1   boundary_db_1           postgres
8cb2d32d91d3   boundary_redis_1        redis
6431a9aae6e0   boundary_mysql_1        mariadb
8f6f095ce16a   boundary_postgres_1     postgres
This tutorial focuses on the relationship between the controller, workers, and the three targets: boundary_postgres_1, boundary_redis_1, and boundary_mysql_1.
Here is a diagram that shows the Boundary cluster network configuration. The targets are only able to communicate with the worker that lives on their same network.
The pre-defined network schema associates the workers with these targets:
- postgres: worker1
- mysql: worker1
- redis: worker2
If a target's filter is misconfigured and associates the target with the incorrect worker, Boundary will produce an error stating that no workers are available to handle the connection request.
Query the targets
Start by authenticating using the CLI as user1 with the password of password. You will need user1's auth_method_id printed when deploying the lab environment.

Example: In the following command the auth method ID is ampw_1tT18L3AZd.
$ boundary authenticate password -auth-method-id ampw_1tT18L3AZd -login-name user1
Please enter the password (it will be hidden): <password>

Authentication information:
  Account ID:      acctpw_VaeyCEvMMY
  Auth Method ID:  ampw_woTDKKJXoq
  Expiration Time: Tue, 08 Mar 2022 11:34:41
  User ID:         u_htjHguh4Ew

The token was successfully stored in the chosen keyring and is not displayed here.
In this tutorial connections are established using the boundary connect command. This requires the psql, redis-cli, and mysql CLI tools to be available in your PATH, and additional connection options that are provided in the examples. If these tools don't work for you, refer back to the tutorial prerequisites and ensure they are installed and in your PATH.
Try establishing a connection with the boundary_postgres_1 target. The target name of postgres was defined in the terraform/main.tf file, and provisioned with the Boundary Terraform provider.

In the example below, the psql CLI option -l lists the available databases after a connection is made. Re-run the command a few times until you are prompted for a password, which is postgres.
$ boundary connect postgres -target-name postgres -target-scope-name databases -username postgres -- -l
psql: error: connection to server at "127.0.0.1", port 53971 failed: server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.
After running the command a few times you should eventually be able to establish a session. If you were prompted for a password on the first try, re-run the command until you receive the error message above.
$ boundary connect postgres -target-name postgres -target-scope-name databases -username postgres -- -l
Password for user postgres:
                                List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 test1     | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
(4 rows)
The test1 database output confirms the existence of a test database, defined in the compose/docker-compose.yml Docker configuration for this sample target.
Note
If you are unable to connect to the postgres target at all, expand the Troubleshooting section below.
Occasionally the controller container may have issues initializing connections with the workers on first boot. If this occurs, you may be unable to establish a connection to any of the targets.
Remember, the following error is expected:
$ boundary connect postgres -target-name postgres -target-scope-name databases -username postgres -- -l
psql: error: connection to server at "127.0.0.1", port 54748 failed: server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.
The above error is expected to occur occasionally, but eventually you should be prompted for the postgres database password.
If the controller container needs to be restarted, you may receive one of the following errors when connecting to the postgres target:
Error 1:
$ boundary connect postgres -target-name postgres -target-scope-name databases -username postgres -- -l
Error trying to authorize a session against target: error performing client request during AuthorizeSession call: Post "http://127.0.0.1:9200/v1/targets/postgres:authorize-session": EOF
Error 2:
$ boundary connect postgres -target-name postgres -target-scope-name databases -username postgres -- -l
Error dialing the worker: failed to WebSocket dial: failed to send handshake request: Get "https://localhost:9202/v1/proxy": EOF
psql: error: connection to server at "127.0.0.1", port 54231 failed: server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.
error fetching connection to send session teardown request to worker: Error dialing the worker: failed to WebSocket dial: failed to send handshake request: Get "https://localhost:9202/v1/proxy": EOF
You may also simply receive a 400 message that No workers are available to handle this session, or all have been filtered with every request that you make.
$ boundary connect postgres -target-name postgres -target-scope-name databases -username postgres -- -l
Error from controller when performing authorize-session action against given target

Error information:
  Kind:     FailedPrecondition
  Message:  No workers are available to handle this session, or all have been filtered.
  Status:   400
  context:  Error from controller when performing authorize-session action against given target
If any of these error messages persist after executing boundary connect repeatedly, try restarting the boundary_controller_1 container once or twice:

$ docker restart boundary_controller_1
boundary_controller_1

If you are still unable to establish a connection, re-provision the environment by executing ./run cleanup followed by ./run all, and then try again.
The connection to the Postgres target is intermittent. What's going on?
Boundary's current configuration does not define which worker is allowed to handle a request.
Recall that the targets are isolated to the following network configuration:
- postgres: worker1
- mysql: worker1
- redis: worker2
When Boundary attempts to establish a connection to the postgres target via worker2, a psql: error message is returned stating that the connection could not be made because that target is not available on worker2's network.
Next, try querying the redis target. You should find similar behavior.
$ boundary connect -exec redis-cli -target-name redis -target-scope-name databases -- -h 127.0.0.1 -p {{boundary.port}} ping

Proxy listening information:
  Address:             127.0.0.1
  Connection Limit:    -1
  Expiration:          Mon, 04 Oct 2021 18:28:16 MDT
  Port:                56456
  Protocol:            tcp
  Session ID:          s_oZG2zE5Cr6
Error: Server closed the connection
After trying a few times, you should be able to get a response of PONG from the redis target.
$ boundary connect -exec redis-cli -target-name redis -target-scope-name databases -- -h 127.0.0.1 -p {{boundary.port}} ping

Proxy listening information:
  Address:             127.0.0.1
  Connection Limit:    -1
  Expiration:          Mon, 04 Oct 2021 18:18:22 MDT
  Port:                54760
  Protocol:            tcp
  Session ID:          s_GR5EFT7uEb
PONG
Both the postgres and redis targets only allow for connections when the correct worker is selected by Boundary to handle the request.
The lab environment purposely misconfigured the mysql target to demonstrate what happens when worker filters are applied incorrectly. You will fix this issue by updating the tags and filters in the following sections.
Try to make a connection to the mysql target.
$ boundary connect -exec mysql -target-name mysql -target-scope-name databases -- -h 127.0.0.1 -P {{boundary.port}} --protocol=tcp -uroot -p"my-secret-pw" --execute="SHOW DATABASES;"
Error from controller when performing authorize-session action against given target

Error information:
  Kind:     FailedPrecondition
  Message:  No workers are available to handle this session, or all have been filtered.
  Status:   400
  context:  Error from controller when performing authorize-session action against given target
The request returns Message: No workers are available to handle this session, or all have been filtered. In the following sections you will learn how to correctly assign worker tags, and create filters that assign targets to the worker available on the same network as the target.
This tutorial makes use of the boundary connect command to establish sessions, but the Boundary Desktop app could also be used to open connections.
Worker tags
Worker tags describe where traffic should be routed and which targets a worker should be tied to. These tags are arbitrary, and it is left to the administrator to define and enforce them.
Worker tag structure
Tags are defined as sets of key/value pairs in a worker's HCL configuration file.
worker {
  name = "web-prod-us-east-1"
  tags {
    region = ["us-east-1"]
    type   = ["prod", "databases"]
  }
}
HCL is JSON-compatible, so the tags can also be written in pure JSON. This has the benefit of mapping closely to the filter structure that will be implemented later.
{
  "worker": {
    "name": "web-prod-us-east-1",
    "tags": {
      "region": ["us-east-1"],
      "type": ["prod", "databases"]
    }
  }
}
Note that worker tags can also be specified using a pure key=value syntax.
worker {
  name = "web-prod-us-east-1"
  tags = ["region=us-east-1", "type=prod", "type=databases"]
}
This format has some limitations, like the inability to use an = as part of the key name.
Define worker tags
The lab environment for this tutorial includes predefined worker tags. Here are the contents of the worker stanza in the compose/worker1.hcl file:
worker {
  name = "worker1"
  description = "A worker for a docker demo"
  address = "worker1"
  public_addr = "localhost:9202"
  controllers = ["boundary"]

  tags {
    region = ["us-east-1"]
    type   = ["prod"]
  }
}
With the current configuration, the tags that could be used for this worker are name: worker1, region: us-east-1, and type: prod.
For more specificity, these tags can be updated to specify what types of targets the worker should communicate with. The postgres and mysql database targets should be handled by worker1, which lives on the same network.
Update the worker stanza in the compose/worker1.hcl file to include three new tags under the type key: database, postgres, and mysql.
worker {
  name = "worker1"
  description = "A worker for a docker demo"
  address = "worker1"
  public_addr = "localhost:9202"
  controllers = ["boundary"]

  tags {
    region = ["us-east-1"]
    type   = ["prod", "database", "postgres", "mysql"]
  }
}
Perhaps our dev environments live in the us-west-1 region, and use Redis for testing purposes. Update the worker stanza in the compose/worker2.hcl file with type tags of database and redis.
worker {
  name = "worker2"
  description = "A worker for a docker demo"
  address = "worker2"
  public_addr = "localhost:9203"
  controllers = ["boundary"]

  tags {
    region = ["us-west-1"]
    type   = ["dev", "database", "redis"]
  }
}
Restart the workers
With the updated worker tags in place, restart the workers to deploy the new configuration file.
Note
In non-containerized environments it is sufficient to stop the boundary server process and restart it with an updated worker.hcl file.
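For example, on a host where the worker runs as a plain process, a restart might look like the following sketch. The configuration path shown is illustrative only; use the path and process manager your deployment actually uses.

# Stop the running worker process, then start it again so it picks up the edited tags
$ boundary server -config=/etc/boundary/worker.hcl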
Restart worker1:
$ docker restart boundary_worker1_1
boundary_worker1_1
And restart worker2:
$ docker restart boundary_worker2_1
boundary_worker2_1
With the new tags in place it is time to apply filters to the targets.
Worker filters
Targets need filters applied to enable the controller to associate them with the appropriate worker. These filters can be applied via the CLI or using the Boundary Terraform provider.
Worker filter structure
Target filters are boolean expressions that reference worker tags.
As an example, here is a simple filter that searches for workers whose name begins with worker.
"/name" matches "worker[12]"
This expression would return worker1 or worker2 in the final worker set.
Stricter filtering can easily be applied, like the following expression that matches only workers named worker2.
"/name" == "worker2"
Complex filters can be created by grouping expressions. In the next example, only workers with a name of worker1 and a region tag of us-east-1 would match.
"/name" == "worker1" and "us-east-1" in "/tags/region"
Further complexity can be created by compounding expressions. These are created by grouping an expression in parentheses and combining it with operators like and, or, and not. In the last example, workers must have a region tag of us-east-1 and a name of worker1, or a type tag of redis, to match. With the worker configurations defined above, this would allow either worker1 or worker2 to handle the request.
("us-east-1" in "/tags/region" and "/name" == "worker1") or "redis" in "/tags/type"
If an expression fails due to a key not being found within the input data, the worker is not included in the final set. Ensure all workers that should match a given filter are populated with tags referenced in the filter string. As a corollary, it is not possible to distinguish between a worker that is not included due to the expression itself and a worker that did not have correct tags.
Define target worker filters
Next, target filters will be applied for the postgres, redis, and mysql targets.
Discover the target ids for the postgres, redis and mysql targets using recursive listing and filters.
$ boundary targets list -recursive -filter '"/item/name" matches "postgres|redis|mysql"'

Target information:
  ID:                    ttcp_zBj1qQnAAH
    Scope ID:            p_m03TsXTVtX
    Version:             2
    Type:                tcp
    Name:                mysql
    Description:         MySQL server
    Authorized Actions:
      no-op read update delete add-credential-libraries set-credential-libraries
      remove-credential-libraries add-credential-sources set-credential-sources
      remove-credential-sources authorize-session

  ID:                    ttcp_enGJe4fOlr
    Scope ID:            p_m03TsXTVtX
    Version:             2
    Type:                tcp
    Name:                postgres
    Description:         postgres server
    Authorized Actions:
      no-op read update delete add-credential-libraries set-credential-libraries
      remove-credential-libraries add-credential-sources set-credential-sources
      remove-credential-sources authorize-session

  ID:                    ttcp_pnC02ZAY7O
    Scope ID:            p_m03TsXTVtX
    Version:             2
    Type:                tcp
    Name:                redis
    Description:         Redis server
    Authorized Actions:
      no-op read update delete add-credential-libraries set-credential-libraries
      remove-credential-libraries add-credential-sources set-credential-sources
      remove-credential-sources authorize-session
Copy the postgres target ID, and update the target with a simple filter that selects workers with a name of worker1. When using the CLI a filter is specified using the -worker-filter option.
Double quotes are part of the filter syntax. When using the CLI, it is likely easier to surround the -worker-filter argument with single quotes. Otherwise, escape syntax needs to be used when surrounding the expression with double quotes.
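For example, the update command shown below could equivalently be written with double quotes around the argument by escaping the inner quotes:

$ boundary targets update tcp -id ttcp_enGJe4fOlr -worker-filter="\"/name\" == \"worker1\""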
$ boundary targets update tcp -id ttcp_enGJe4fOlr -worker-filter='"/name" == "worker1"'

Target information:
  Created Time:               Fri, 01 Oct 2021 15:40:20 MDT
  Description:                postgres server
  ID:                         ttcp_enGJe4fOlr
  Name:                       postgres
  Session Connection Limit:   -1
  Session Max Seconds:        20
  Type:                       tcp
  Updated Time:               Mon, 04 Oct 2021 17:27:07 MDT
  Version:                    3
  Worker Filter:              "/name" == "worker1"

  Scope:
    ID:                       p_m03TsXTVtX
    Name:                     databases
    Parent Scope ID:          o_1UXK0ttelh
    Type:                     project

  Authorized Actions:
    no-op read update delete add-credential-libraries set-credential-libraries
    remove-credential-libraries add-credential-sources set-credential-sources
    remove-credential-sources authorize-session

  Host Sources:
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_AhdLqEubSL
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_AhdLqEubSL

  Attributes:
    Default Port:             5432
Notice the Worker Filter: line, and ensure it contains the correct filter expression. If an expression fails due to a key not being found within the input data, the worker will not be included in the final set.
For redis apply the following filter:
"us-west-1" in "/tags/region" or "redis" in "/tags/type"
$ boundary targets update tcp -id ttcp_pnC02ZAY7O -worker-filter='"us-west-1" in "/tags/region" or "redis" in "/tags/type"'

Target information:
  Created Time:               Fri, 01 Oct 2021 15:40:20 MDT
  Description:                Redis server
  ID:                         ttcp_pnC02ZAY7O
  Name:                       redis
  Session Connection Limit:   -1
  Session Max Seconds:        20
  Type:                       tcp
  Updated Time:               Mon, 04 Oct 2021 17:27:47 MDT
  Version:                    3
  Worker Filter:              "us-west-1" in "/tags/region" or "redis" in "/tags/type"

  Scope:
    ID:                       p_m03TsXTVtX
    Name:                     databases
    Parent Scope ID:          o_1UXK0ttelh
    Type:                     project

  Authorized Actions:
    no-op read update delete add-credential-libraries set-credential-libraries
    remove-credential-libraries add-credential-sources set-credential-sources
    remove-credential-sources authorize-session

  Host Sources:
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_r97dkNxrWu
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_r97dkNxrWu

  Attributes:
    Default Port:             6379
For redis this will return worker2, which has a region tag of us-west-1 and a type tag of redis.
And for mysql implement this filter:
"/name" == "worker1" or ("prod" in "/tags/type" and "database" in "/tags/type")
$ boundary targets update tcp -id ttcp_zBj1qQnAAH -worker-filter='"/name" == "worker1" or ("prod" in "/tags/type" and "database" in "/tags/type")'

Target information:
  Created Time:               Fri, 01 Oct 2021 15:40:19 MDT
  Description:                MySQL server
  ID:                         ttcp_zBj1qQnAAH
  Name:                       mysql
  Session Connection Limit:   -1
  Session Max Seconds:        100
  Type:                       tcp
  Updated Time:               Mon, 04 Oct 2021 17:29:30 MDT
  Version:                    3
  Worker Filter:              "/name" == "worker1" or ("prod" in "/tags/type" and "database" in "/tags/type")

  Scope:
    ID:                       p_m03TsXTVtX
    Name:                     databases
    Parent Scope ID:          o_1UXK0ttelh
    Type:                     project

  Authorized Actions:
    no-op read update delete add-credential-libraries set-credential-libraries
    remove-credential-libraries add-credential-sources set-credential-sources
    remove-credential-sources authorize-session

  Host Sources:
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_du9cw25nXV
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_du9cw25nXV

  Attributes:
    Default Port:             3306
For the mysql target only worker1 is allowed to handle requests, because the filter matches a worker named worker1, or a worker with type tags of prod and database.
Verify target availability
With the workers tagged and filters in place for the targets, read the target data to ensure the filters were applied correctly.
First, use recursive listing and a filter to find the target IDs for the postgres, redis, and mysql targets.
$ boundary targets list -recursive -filter '"/item/name" matches "postgres|redis|mysql"'

Target information:
  ID:                    ttcp_enGJe4fOlr
    Scope ID:            p_m03TsXTVtX
    Version:             3
    Type:                tcp
    Name:                postgres
    Description:         postgres server
    Authorized Actions:
      no-op read update delete add-credential-libraries set-credential-libraries
      remove-credential-libraries add-credential-sources set-credential-sources
      remove-credential-sources authorize-session

  ID:                    ttcp_pnC02ZAY7O
    Scope ID:            p_m03TsXTVtX
    Version:             3
    Type:                tcp
    Name:                redis
    Description:         Redis server
    Authorized Actions:
      no-op read update delete add-credential-libraries set-credential-libraries
      remove-credential-libraries add-credential-sources set-credential-sources
      remove-credential-sources authorize-session

  ID:                    ttcp_zBj1qQnAAH
    Scope ID:            p_m03TsXTVtX
    Version:             3
    Type:                tcp
    Name:                mysql
    Description:         MySQL server
    Authorized Actions:
      no-op read update delete add-credential-libraries set-credential-libraries
      remove-credential-libraries add-credential-sources set-credential-sources
      remove-credential-sources authorize-session
Copy the target ID for postgres and verify that the worker filter was set correctly by reading the target details.
$ boundary targets read -id ttcp_enGJe4fOlr

Target information:
  Created Time:               Fri, 01 Oct 2021 15:40:20 MDT
  Description:                postgres server
  ID:                         ttcp_enGJe4fOlr
  Name:                       postgres
  Session Connection Limit:   -1
  Session Max Seconds:        20
  Type:                       tcp
  Updated Time:               Mon, 04 Oct 2021 17:27:07 MDT
  Version:                    3
  Worker Filter:              "/name" == "worker1"

  Scope:
    ID:                       p_m03TsXTVtX
    Name:                     databases
    Parent Scope ID:          o_1UXK0ttelh
    Type:                     project

  Authorized Actions:
    no-op read update delete add-credential-libraries set-credential-libraries
    remove-credential-libraries add-credential-sources set-credential-sources
    remove-credential-sources authorize-session

  Host Sources:
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_AhdLqEubSL
    Host Catalog ID:          hcst_sIoiq5r6xB
    ID:                       hsst_AhdLqEubSL

  Attributes:
    Default Port:             5432
Look at the Worker Filter: line and verify that the filter query is correct. Repeat this process for the redis and mysql targets.
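For example, using the target IDs returned by the listing above (your IDs will differ), the redis and mysql targets can be read the same way:

$ boundary targets read -id ttcp_pnC02ZAY7O
$ boundary targets read -id ttcp_zBj1qQnAAH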
Establish sessions
Now the filters can be validated by establishing sessions using boundary connect.
Verify that a session can be reliably established to the postgres target, entering the password postgres when prompted.
$ boundary connect postgres -target-name postgres -target-scope-name databases -username postgres -- -l
Password for user postgres:
                                List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 test1     | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
(4 rows)
Verify that a session can be reliably established to the redis target.

Notice that the proxy information is displayed prior to the client response when using boundary connect -exec.
$ boundary connect -exec redis-cli -target-name redis -target-scope-name databases -- -h 127.0.0.1 -p {{boundary.port}} ping

Proxy listening information:
  Address:             127.0.0.1
  Connection Limit:    -1
  Expiration:          Mon, 04 Oct 2021 17:56:11 MDT
  Port:                52015
  Protocol:            tcp
  Session ID:          s_qYgkJk8KdB
PONG
Verify that a session can be reliably established to the mysql target.
$ boundary connect -exec mysql -target-name mysql -target-scope-name databases -- -h 127.0.0.1 -P {{boundary.port}} --protocol=tcp -uroot -p"my-secret-pw" --execute="SHOW DATABASES;"

Proxy listening information:
  Address:             127.0.0.1
  Connection Limit:    -1
  Expiration:          Mon, 04 Oct 2021 17:57:15 MDT
  Port:                51958
  Protocol:            tcp
  Session ID:          s_DdWBdvTp6Z
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
If you are not able to establish sessions to these targets, carefully check the filters applied in the previous section, and re-define them if any are set incorrectly. If the filters look correct, verify that the tags were properly applied, and that the workers were restarted to apply the new configuration.
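As a quick sanity check of the tags, you can re-read the tag stanzas from the local worker configuration files before restarting the containers again:

$ grep -A 4 "tags" compose/worker1.hcl compose/worker2.hcl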
Cleanup and teardown
The Boundary cluster containers and network resources can be cleaned up using the provided run script.
$ ./run cleanup
~/target-aware-workers/compose ~/target-aware-workers
Stopping boundary_worker1_1    ... done
Stopping boundary_worker2_1    ... done
Stopping boundary_controller_1 ... done
Stopping boundary_redis_1      ... done
Stopping boundary_db_1         ... done
Stopping boundary_mysql_1      ... done
Stopping boundary_postgres_1   ... done
Going to remove boundary_worker1_1, boundary_worker2_1, boundary_controller_1, boundary_db-init_1, boundary_redis_1, boundary_db_1, boundary_mysql_1, boundary_postgres_1
Removing boundary_worker1_1    ... done
Removing boundary_worker2_1    ... done
Removing boundary_controller_1 ... done
Removing boundary_db-init_1    ... done
Removing boundary_redis_1      ... done
Removing boundary_db_1         ... done
Removing boundary_mysql_1      ... done
Removing boundary_postgres_1   ... done
Check your work with a quick docker ps and ensure there are no more containers with the boundary_ prefix left over. If unexpected containers still exist, execute docker rm -f CONTAINER_NAME against each to remove them.