By now we should have more than a simple stack running, containing almost everything we may need. The downside is that there are a lot of moving parts, so we need some way to find out how (and whether) everything is running.

There are a lot of solutions for that kind of task, but we will use Prometheus.

The procedure is (almost) well known by now: create a docker volume, create a Prometheus configuration file, add a prometheus service declaration to compose.yml, and add a route to the Caddyfile:

Create the volume:

docker volume create --label reco-prometheus reco-prometheus

Create a file prometheus.yml in that volume, containing the following lines (file is here):

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    static_configs:
      - targets: ["prometheus:9090"]

For detailed information about Prometheus configuration, consult the Prometheus configuration documentation.
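As an example of extending scrape_configs, the snippet below is a hypothetical addition, not part of the stack yet: it would scrape RabbitMQ's own metrics endpoint, provided the rabbitmq_prometheus plugin is enabled inside the rabbit container (the plugin ships with RabbitMQ 3.8+ but must be enabled; it listens on port 15692 by default):

```yaml
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["prometheus:9090"]

  # Hypothetical extra job: RabbitMQ metrics via the rabbitmq_prometheus
  # plugin (run `rabbitmq-plugins enable rabbitmq_prometheus` in the
  # container first; 15692 is the plugin's default port).
  - job_name: "rabbitmq"
    static_configs:
      - targets: ["rabbit:15692"]
```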

Now it’s time to update the volumes part of compose.yml:

18    external: true
19  reco-prometheus:
20    external: true

and the service declaration:

105  rabbit:
106    image: rabbitmq:3-management
107    volumes:
108      - reco-rabbitmq:/var/lib/rabbitmq
109
110  prometheus:
111    image: prom/prometheus:latest
112    user: root
113    entrypoint:
114      - "/bin/prometheus"
115      - "--log.level=warn"
116      - "--config.file=/etc/prometheus/prometheus.yml"
117      - "--storage.tsdb.retention.size=2GB"
118      - "--storage.tsdb.path=/prometheus"
119      - "--web.console.libraries=/usr/share/prometheus/console_libraries"
120      - "--web.console.templates=/usr/share/prometheus/consoles"
121    volumes:
122      - reco-prometheus:/etc/prometheus:rw
123      - reco-prometheus:/prometheus:rw

As usual, the complete file is here.

Now the Caddyfile part:

28    handle @rabbit {
29        reverse_proxy rabbit:15672
30    }
31
32    @prometheus host prometheus.domain.com
33
34    handle @prometheus {
35        reverse_proxy prometheus:9090
36    }

The complete Caddyfile is here.
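One caveat: Prometheus itself ships with no authentication, so publishing it on a public hostname lets anyone browse your metrics. A possible hardening step (a sketch, not part of the files above) is Caddy's basic_auth directive (spelled basicauth before Caddy v2.8), with a password hash generated by caddy hash-password:

```
handle @prometheus {
    basic_auth {
        # Replace "admin" and the hash placeholder with your own user
        # and the output of `caddy hash-password`.
        admin <bcrypt-hash-here>
    }
    reverse_proxy prometheus:9090
}
```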

As usual, one docker compose up -d && docker compose restart caddy should bring everything to life, and https://prometheus.domain.com should be accessible.
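If you prefer checking from code rather than the browser, the sketch below asks the Prometheus HTTP API (GET /api/v1/query) which scrape targets are up. The base URL is the hypothetical domain used throughout this post; swap in your own:

```python
import json
import urllib.parse
import urllib.request


def query_url(base: str, promql: str) -> str:
    """Build a Prometheus instant-query URL (GET /api/v1/query?query=...)."""
    return f"{base.rstrip('/')}/api/v1/query?" + urllib.parse.urlencode(
        {"query": promql}
    )


def targets_up(base: str = "https://prometheus.domain.com") -> dict:
    """Return {instance: value} for the built-in `up` metric (1 = healthy)."""
    with urllib.request.urlopen(query_url(base, "up")) as resp:
        payload = json.load(resp)
    return {r["metric"]["instance"]: r["value"][1] for r in payload["data"]["result"]}
```

Right after the stack comes up, `targets_up()` should report `prometheus:9090` with value `1`.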

For a while we have been adding required (or preferred) services that can serve as back services for some of the modules we want to add later, and maybe that “later” has finally come.

We will, in due time, add more back services (grafana, maybe some more), but for now let’s add the allegro module. Open your compose.yml file and add a new service definition:

121    volumes:
122      - reco-prometheus:/etc/prometheus:rw
123      - reco-prometheus:/prometheus:rw
124
125  allegro:
126    image: registry.gitlab.com/tekelija/tekelija/tekelija-allegro:latest
127    environment:
128      - AUTHSERVER__ISSUER=https://authenticatomatic.domain.com
129      - SERILOG__MINIMUMLEVEL__DEFAULT=Debug
130      - SERILOG__WRITETO__SEQ__ARGS__SERVERURL=http://seq:5341
131      - SERILOG__MINIMUMLEVEL__OVERRIDE__OpenIddict=Information
132      - SERILOG__MINIMUMLEVEL__OVERRIDE__Microsoft=Information
133      - MESSAGEBUS__URL=rabbitmq://rabbit
134      - ALLEGRO__DATABASE=mssql
135      - ALLEGRO__CONNECTIONSTRING=server=mssql;initial catalog=recolj2;user id=sa;password=<YourStrong!Passw0rd>;encrypt=false;
136      - TZ=Europe/Ljubljana
137      - ASPNETCORE_FORWARDEDHEADERS_ENABLED=true
138    hostname: allegro
139    depends_on:
140      - authenticatomatic
141      - mssql
142      - seq

We are configuring the allegro module using environment variables (slightly more detail on the topic is available on the allegro module config page), and the configuration above implies that you have already set up the allegro database.
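Since allegro exposes a /health endpoint (checked below), you could also let compose itself watch it. The stanza below is a sketch under two assumptions: that the image ships curl, and that the app listens on port 80 inside the container (which the plain `reverse_proxy allegro` line in the Caddyfile implies, since Caddy defaults to port 80 for upstreams given without one):

```yaml
  allegro:
    # ...existing definition from above...
    healthcheck:
      # Assumes curl is present in the image and the app listens on port 80.
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```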

The Caddyfile changes are also pretty standard:

34    handle @prometheus {
35        reverse_proxy prometheus:9090
36    }
37
38    @allegro host allegro.domain.com
39
40    handle @allegro {
41        reverse_proxy allegro
42    }
43
44    handle {
45         respond "Hello world!"
46    }

And again, docker compose up -d && docker compose restart caddy should pull the latest allegro image from the registry, instantiate the container, and restart caddy (to reread its configuration).

After some time, if everything works as expected, https://allegro.domain.com/health should return a healthcheck response. If not, use the docker CLI or GUI tools to see what has not started.

If, by some chance, you get a “healthy” response, that means the allegro module is working behind the proxy and will use the OpenID server (authenticatomatic) to authorize requests. If you want to experiment in detail, get the tekelija-allegro Postman collection, import it, and set the variables to:

Variable name    Value
baseUrl          https://allegro.domain.com
authUrl          https://authenticatomatic.domain.com

Oh, yes: remember seq? We have configured the allegro module to log events at level Information and above, so the seq log should be pretty plentiful after a few Postman requests. (We have configured allegro to log all received HTTP requests with full details, a configuration intended either for short-term debugging or for showtime, as is the case for us.)