121 Commits

Author SHA1 Message Date
8e3bafdafe wip: xactmon: docs 2024-08-17 11:01:31 -05:00
7dffb5195a v-m: alertmanager: Group disk usage alerts
Some machines have the same volume mounted multiple times (e.g.
container hosts, BURP).  Alerts will fire for all of these
simultaneously when the filesystem usage passes the threshold.  To avoid
getting spammed with a bunch of messages about the same filesystem,
we'll group alerts from the same machine.
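
A rough sketch of the kind of Alertmanager route this describes (the
alert name and timings are illustrative, not the exact values used
here):

    route:
      routes:
        - matchers:
            - alertname="DiskUsageHigh"      # illustrative alert name
          group_by: [alertname, instance]    # one notification per machine
          group_interval: 5m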
2024-08-17 10:59:05 -05:00
02001f61db v-m/scrape: webistes: Stop scraping Matrix
I'm not using Matrix for anything anymore, and it seems to have gone
offline.  I haven't fully decommissioned it yet, but the Blackbox scrape
is failing, so I'll just disable that bit for now.
2024-08-17 10:57:22 -05:00
c7e4baa466 v-m: scrape: Remove nvr2.p.b Zincati scrape target
I've redeployed *nvr2.pyrocufflink.blue* as Fedora Linux, so it does not
run Zincati anymore.
2024-08-17 10:56:06 -05:00
1a631bf366 v-m: scrape: Remove serial1.p.b
This machine never worked correctly; the USB-RS232 adapters would stop
working randomly (and of course it would be whenever I needed to
actually use them).  I thought it was something wrong with the server
itself (a Raspberry Pi 3), but the same thing happened when I tried
using a Pi 4.

The new backup server has a plethora of on-board RS-232 ports, so I'm
going to use it as the serial console server, too.
2024-08-17 10:54:21 -05:00
6f7f09de85 v-m: scrape: Update Unifi server target
I've rebuilt the Unifi Network controller machine (again);
*unifi3.pyrocufflink.blue* has replaced *unifi2.p.b*.  The
`unifi_exporter` no longer works with the latest version of Unifi
Network, so it's not deployed on the new machine.
2024-08-17 10:52:51 -05:00
809676f691 v-m: alerts: Add Longhorn alerts 2024-08-17 10:51:13 -05:00
9977bb3de4 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-08-06 08:03:42 -05:00
dcd3f898c7 xactmon: Deploy Invoice Ninja importer for HLC
Bank notifications sent to Tabitha's mailbox are now processed by
`xactmon` and imported into Invoice Ninja as expenses for Hatch Learning
Center.
2024-08-03 13:39:17 -05:00
5b34547730 h-a: Config Zigbee2MQTT w/ env vars
Zigbee2MQTT commits the cardinal sin of storing state in its
configuration file.  This means the file has to be writable and thus
stored in persistent storage rather than in a ConfigMap.  As a
consequence, making changes to the configuration when the application is
not running is rather difficult.  Case in point: when I added the
internal alias for _mqtt.pyrocufflink.blue_ pointing to the in-cluster
service, Zigbee2MQTT became unable to connect to the broker because it
was using the node port instead of the internal port.  Since it could
not connect to the broker, it refused to start, and thus the container
would not stay running long enough to fix the configuration to point
to the correct port.

Fortunately, Zigbee2MQTT also allows configuring settings via
environment variables, which can be managed with a ConfigMap.  Better
still, the values read from environment variables override those from
the configuration file, so pointing to the correct broker port with an
environment variable was sufficient to allow the application to start.
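
As a sketch, the override can live in a ConfigMap like the following
(the broker URL is a placeholder; the variable name follows
Zigbee2MQTT's `ZIGBEE2MQTT_CONFIG_*` convention for overriding
`mqtt.server`):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: zigbee2mqtt-env
    data:
      # overrides mqtt.server from configuration.yaml at startup
      ZIGBEE2MQTT_CONFIG_MQTT_SERVER: mqtts://mqtt.pyrocufflink.blue:8883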
2024-08-01 09:27:52 -05:00
b366532c88 cert-manager, step-ca: Bypass cluster DNS
Having name overrides for in-cluster services breaks ACME challenges,
because the server tries to connect to the Service instead of the
Ingress.  To fix this, we need to configure both _cert-manager_ and
_step-ca_ to *only* resolve names using the network-wide DNS server.
2024-07-29 20:58:18 -05:00
a785fcec73 sshca: Allow Jenkins jobs to restart the Deployment
The Jenkins job for the SSHCA Server restarts the Deployment after
building a new container image.
2024-07-27 13:10:20 -05:00
a26857819a step-ca: Add Ingress resource
It turns out, `step ca renew` _can_ renew certificates without mTLS; it
has a `--mtls=false` command-line argument that configures it to use
a JWT signed with the certificate's private key, instead of using the certificate at
the transport layer.  This allows clients to renew their certificates
without needing another authentication mechanism, even with the
TLS-terminating proxy.
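
For example, a client can run something along the lines of
`step ca renew --mtls=false server.crt server.key` (paths illustrative),
and the renewal request is authenticated by the signed JWT rather than
by a client TLS handshake.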
2024-07-27 13:07:26 -05:00
079c3871b9 invoice-ninja: Fix document upload feature
Invoice Ninja allows attaching documents to invoices, payments,
expenses, etc.  Tabitha wants to use this feature to attach receipts for
her expenses, but the photos her phone takes of them are too large for
the default nginx client body limit.  We can raise this limit on the
ingress, but we also need to raise it on the "inner" nginx.
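
The ingress side of that is just an annotation; a sketch (the resource
name and the limit shown are arbitrary):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: invoice-ninja
      annotations:
        # raise the request body limit for uploaded receipt photos
        nginx.ingress.kubernetes.io/proxy-body-size: 64m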
2024-07-27 13:04:02 -05:00
e74a6b3142 invoice-ninja: Run in a mutable container
The Invoice Ninja container is not designed to be immutable at all; it
makes a bunch of changes to its own contents when it starts up.
Notably, it copies the contents of the `public` and `storage`
directories from the container image to the persistent volume _and then
deletes the source_.  Additionally, being a Laravel application, it
needs write access to its own code for caching, etc.  Previously, the
`init.sh` script copied the entire `app` directory to a temporary
directory, and then the runtime container mounted that volume over the
top of the original location.  This allowed the root filesystem of the
container to be read-only, while the `app` directory was still mutable.
Unfortunately, this makes the startup process incredibly slow, as it
takes a couple of minutes to copy the whole application.  It's also
pretty pointless, because the application runs as an unprivileged
process, so it wouldn't have write access to the rest of the filesystem
anyway.  As such, I've decided to remove the `readOnlyRootFilesystem`
restriction, and allow the container to run as upstream intends, albeit
begrudgingly.
2024-07-27 12:57:02 -05:00
78cd26c827 v-m: Scrape metrics from RabbitMQ 2024-07-26 20:59:00 -05:00
e56a38c034 cert-manager: Add dch-ca issuer
In-cluster services can now get certificates signed by the DCH CA via
`step-ca`.  This issuer uses ACME with the HTTP-01 challenge, so it
can only issue certificates for names in the _pyrocufflink.blue_ zone
that point to the ingress controllers.
2024-07-26 20:59:00 -05:00
54187176ba ingress: Proxy AMQP
Passing port 5671 through the ingress-nginx proxy to the `rabbitmq`
service will allow clients outside the cluster to connect to it.

While we're at it, we'll move the definition of the `tcp-services`
ConfigMap to its own file to make it easier to maintain.
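
The mapping itself is a single ConfigMap entry, roughly like this (the
namespace and Service name are assumptions):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      # external port 5671 -> rabbitmq Service, port 5671 (AMQPS)
      "5671": default/rabbitmq:5671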
2024-07-26 20:59:00 -05:00
1a1d8ff27d rabbitmq: Deploy RabbitMQ Server
RabbitMQ is an AMQP message broker.  It will be used by `xactmon` to
pass messages between the components.

Although RabbitMQ can be deployed in a high-availability cluster, we
don't really need that level of robustness for `xactmon`, so we will
just run a single instance.  Deploying a single-host RabbitMQ server
is pretty straightforward.

We're using mTLS authentication; clients need to have a certificate
issued by the *RabbitMQ CA* in order to connect to the message broker.
The `rabbitmq-ca` _cert-manager_ ClusterIssuer issues these certificates
for in-cluster services like `xactmon`.
2024-07-26 20:59:00 -05:00
a04a2b5334 xactmon: Deploy xactmon
`xactmon` is a new tool I developed to parse transaction notifications
from banks and automatically import them into my personal finance
tracker.  It is designed in a modular fashion, composed of three main
components:

* Receiver
* Processor
* Importer

Components communicate with one another using an AMQP exchange.
Hypothetically, there could be multiple implementations of the receiver
and importer components.  Right now, there is only a JMAP receiver,
which fetches email messages (from Fastmail), and a Firefly III
importer.  The processor is a singleton, handling notifications from the
receiver, parsing them into a normalized format, and passing them on to
the importer.  It uses a set of rules to decide how to parse the
messages, and supports using either a regular expression with named
capture groups or an Awk script to extract the relevant information.
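
The rule format isn't reproduced here, but purely as an illustration of
the named-capture-group approach, a hypothetical rule could look like:

    # hypothetical rule: pull fields out of a bank notification email
    rules:
      - match: "Your card was charged"
        regex: 'charged \$(?P<amount>[0-9.]+) at (?P<merchant>.+) on (?P<date>\d{4}-\d{2}-\d{2})'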
2024-07-26 20:53:19 -05:00
ccc46288c2 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-07-22 08:12:11 -05:00
f4d41c0ec7 invoice-ninja: Add Ingress for HLC client portal
Tabitha wants to use the Invoice Ninja Client Portal and Stripe
integration for customer payments.
2024-07-14 15:41:14 -05:00
989556d458 cert-manager: Update to v1.14.5 2024-07-14 15:14:44 -05:00
74fa9264df xactfetch: Configure secretsocket
The `xactfetch` script now uses a helper tool, `secretsocket`, to
handle looking up secrets.  This tool supports various secret source
types, including files, environment variables, and external commands.
Separating this functionality out of the main script makes it a lot
more flexible and pluggable.  Its main purpose, though, was actually
to allow `xactfetch` to run in a container while communicating with
`rbw` outside that container, specifically for development purposes.

The `secretsocket` tool reads its configuration from a TOML document.
This document defines the secrets the tool handles, and how to look
them up.

Note that the `xactfetch` container image no longer defines the
`XDG_CONFIG_HOME` environment variable, as it uses Chromium instead of
Firefox now, and the former does not work with a read-only config
directory.  As such, we have to mount the `rbw` configuration in the
default location.
2024-07-11 22:49:07 -05:00
71ca910ef7 home-assistant: Add Tabitha's HLC calendar 2024-07-11 22:15:56 -05:00
ee00412bf6 xactfetch: Use separate CronJobs per bank
Usually, `xactfetch` will only fail for one bank or the other.  Rarely
do we want to redownload the data from both banks just because one
failed.  The latest version of `xactfetch` supports specifying a bank
name as a CLI argument, so now we can define separate jobs for each
bank.  Then, when one Job fails, only that one will be retried later.

It's kind of a bummer that it's so repetitive to define two CronJobs
that differ by only a single command-line argument.  I suppose that's
a good argument for using one of the preprocessor tools like Jsonnet
or KCL.
2024-07-11 22:09:27 -05:00
c741d04d54 xactfetch: Skip wait for manual runs
When the `xactfetch` CronJob is triggered manually, it will now skip
the `sleep` step.  Presumably, whoever triggered it wants the script
to run _right now_, probably to diagnose a problem.
2024-07-11 22:07:54 -05:00
8cb292a4b2 v-m: alerts: Add alert for temperatures
After the incident this week with the CPU overheating on _vmhost1_, I
want to make sure I know as soon as possible when anything is starting
to get too hot.
2024-07-11 22:07:27 -05:00
8113e5a47f v-m: Fix syntax in AlertManager config
The `group_by` field takes a list of label names, rather than a single
string.
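
In other words (label names illustrative):

    route:
      # group_by: alertname          <- rejected: a single string
      group_by: [alertname, instance]   # correct: a list of label names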
2024-07-06 07:13:27 -05:00
952ab9f264 v-m: alertmanager: Group camera notifications
When Frigate is down, multiple alerts are generated for each camera, as
Home Assistant creates camera entities for each tracked object.  This is
extremely annoying, not to mention unnecessary.  To address this, we'll
configure AlertManager to send a single notification for alerts in the
group.
2024-07-05 07:30:30 -05:00
9b26753e73 v-m: alerts: Add durations to spammy alerts
Let's avoid sending alerts immediately when something is unavailable,
because the issue might be transient and will resolve itself shortly.
2024-07-05 07:23:38 -05:00
fa80b15a71 jenkins: Remove Argo CD sync hook
Since Jenkins no longer uses a Longhorn volume, this sync hook is not
useful.
2024-07-04 06:53:58 -05:00
248a9a5ae9 v-m: Scrape PostgreSQL exporter
The [postgres exporter][0] exposes metrics about the operation and
performance of a PostgreSQL server.  It's currently deployed on
_db0.pyrocufflink.blue_, the primary server of the main PostgreSQL
cluster.

[0]: https://github.com/prometheus-community/postgres_exporter
2024-07-02 18:16:05 -05:00
215b2c6975 home-assistant: Use external PostgreSQL server
Home Assistant uses PostgreSQL for recording the history of entity
states.  Since we had been using the in-cluster database server for
this, the data were migrated to the new external PostgreSQL server
automatically when the backup from the former was restored on the
latter.  It follows, then, that we can point Home Assistant to the
new server as well.

Home Assistant uses SQLAlchemy, which in turn uses _libpq_ via
_psycopg_, as a client for PostgreSQL.  It doesn't expose any
configuration parameters beyond the "database URL" directly, but we
can use the standard environment variables to specify the certificate
and private key for authentication.  In fact, the empty `postgresql://`
URL is sufficient, and indicates that _all_ of the connection parameters
should be taken from environment variables.  This makes specifying the
parameters for both the `wait-for-db` init container and the main
container take the exact same environment variables, so we can use
YAML anchors to share their definitions.
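
Roughly, the shared environment looks like this (values are
placeholders; libpq reads the same variables in both the `wait-for-db`
init container and the main container):

    env: &pg-env
      - name: PGHOST
        value: postgresql.pyrocufflink.blue
      - name: PGDATABASE
        value: homeassistant
      - name: PGUSER
        value: homeassistant
      - name: PGSSLMODE
        value: verify-full
      - name: PGSSLCERT
        value: /run/postgresql/tls.crt
      - name: PGSSLKEY
        value: /run/postgresql/tls.key
    # ...and in the wait-for-db init container:
    #   env: *pg-env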
2024-07-02 18:16:05 -05:00
a269f8a1ae firefly-iii: Connect to external PostgreSQL
Since the new database server outside the Kubernetes cluster, created
for Authelia, was seeded from a backup of the in-cluster server, it
already contained the data from Firefly-III as well.  Thus, we can
switch Firefly-III to using it, too.

The documentation for Firefly-III does not mention anything about how
to configure it to use certificate-based authentication for PostgreSQL,
as is required by the new server.  Fortunately, it ultimately uses
_libpq_, so the standard `PG...` environment variables work fine.  We
just need a certificate issued by the _postgresql-ca_ ClusterIssuer and
the _DCH Root CA_ certificate mounted in the Firefly-III container.
2024-07-02 18:16:05 -05:00
92497004be authelia: Point to external PostgreSQL server
If there is an issue with the in-cluster database server, accessing the
Kubernetes API becomes impossible by normal means.  This is because the
Kubernetes API uses Authelia for authentication and authorization, and
Authelia relies on the in-cluster database server.  To solve this
chicken-and-egg scenario, I've set up a dedicated PostgreSQL database
server on a virtual machine, totally external to the Kubernetes cluster.

With this commit, I have changed the Authelia configuration to point at
this new database server.  The contents of the new database server were
restored from a backup of the in-cluster server, so all of Authelia's
state was migrated automatically.  Thus, updating the configuration is
all that is necessary to switch to using it.

The new server uses certificate-based authentication.  In order for
Authelia to access it, it needs a certificate issued by the
_postgresql-ca_ ClusterIssuer, managed by _cert-manager_.  Although the
environment variables for pointing to the certificate and private key
are not listed explicitly in the Authelia documentation, their names
can be inferred from the configuration document schema and work as
expected.
2024-07-02 18:16:05 -05:00
a8ef4c7a80 v-m: Add component labels to configmaps
Adding a `component` label to each ConfigMap will make it possible to
target them specifically, e.g. with `kubectl apply -l`.
2024-07-02 18:16:05 -05:00
65e53ad16d v-m: Scrape Zincati metrics from K8s nodes
All the Kubernetes nodes (except *k8s-ctrl0*) are now running Fedora
CoreOS.  We can therefore use the Kubernetes API to discover scrape
targets for the Zincati job.
2024-07-02 18:16:05 -05:00
31345bee7b home-assistant: Add Pool Time WebDAV calendar
I've created a _Pool Time_ calendar in Nextcloud that we can use to
mark when people are expected to be in the pool.  Using this, we can
configure the "someone is in the pool" alert not to fire during times
when we know people will be in the pool.  This will make it much less
annoying on HLC pool days.
2024-07-02 18:16:05 -05:00
2d7fec1cdf v-m: vmstorage: Add pod anti-affinity
One of the reasons for moving to 4 `vmstorage` replicas was to ensure
that the load was spread evenly between the physical VM host machines.
To ensure that is the case as much as possible, we need to keep one
pod per Kubernetes node.
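
Whether the rule is required or merely preferred is a detail not shown
here; a required version would look roughly like this (the label
selector is assumed from the vmstorage pod labels):

    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname   # at most one vmstorage pod per node
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: vmstorage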
2024-06-26 18:29:49 -05:00
f7f408ca8c v-m: Redo vmstorage persistent volumes
Longhorn does not work well for very large volumes.  It takes ages to
synchronize/rebuild them when migrating between nodes, which happens
all too frequently.  This consumes a lot of resources, which impacts
the operation of the rest of the cluster, and can cause a cascading
failure in some circumstances.

Now that the cluster is set up to be able to mount storage directly from
the Synology, it makes sense to move the Victoria Metrics data there as
well.  Similar to how I did this with Jenkins, I created
PersistentVolume resources that map to iSCSI volumes, and patched the
PersistentVolumeClaims (or rather the template for them defined by the
StatefulSet) to use these.  Each `vmstorage` pod then gets an iSCSI
LUN, bypassing both Longhorn and QEMU to write directly to the NAS.

The migration process was relatively straightforward.  I started by
scaling down the `vminsert` Deployment so the `vmagent` pods would
queue the metrics they had collected while the storage layer was down.
Next, I created a [native][0] export of all the time series in the
database.  Then, I deleted the `vmstorage` StatefulSet and its
associated PVCs.  Finally, I applied the updated configuration,
including the new PVs and patched PVCs, and brought the `vminsert`
pods back online.  Once everything was up and running, I re-imported
the exported data.

[0]: https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-export-data-in-native-format
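
For reference, each of those PersistentVolumes has roughly this shape
(portal, IQN, size, and claim name are placeholders):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: vmstorage-0
    spec:
      capacity:
        storage: 200Gi
      accessModes: [ReadWriteOnce]
      iscsi:
        targetPortal: nas.storage.pyrocufflink.blue:3260   # placeholder portal
        iqn: iqn.2000-01.com.synology:nas.vmstorage0       # placeholder IQN
        lun: 1
        fsType: ext4
      claimRef:
        namespace: victoria-metrics
        name: vmstorage-db-vmstorage-0                     # placeholder PVC name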
2024-06-26 18:29:49 -05:00
0f24341e5c collectd: Add DaemonSet for collectd
Since all the nodes in the cluster run Fedora CoreOS now, we can
deploy collectd as a container, managed by a DaemonSet.

Note that while _collectd_ has to run as _root_ in order to collect
a lot of metrics, it should not run with all privileges.  It does need
to run as a "super-privileged container" (`spc_t` SELinux domain), but
it does _not_ need most kernel capabilities.
2024-06-26 18:29:49 -05:00
ab458df415 v-m/vmstorage: Start pods in parallel
By default, Kubernetes waits for each pod in a StatefulSet to become
"ready" before starting the next one.  If there is a problem starting
that pod, e.g. data corruption, then the others will never start.  This
sort of defeats the purpose of having multiple replicas.  Fortunately,
we can configure the pod management policy to start all the pods at
once, regardless of the status of any individual pod.  This way, if
there is a problem with the first pod, the others will still come up
and serve whatever data they have.
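
Concretely, it's a one-line setting on the StatefulSet (other fields
elided):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: vmstorage
    spec:
      podManagementPolicy: Parallel   # start all pods at once instead of one at a time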
2024-06-26 18:29:49 -05:00
14be633843 v-m: Scrape Restic exporter 2024-06-26 18:29:49 -05:00
5079599423 restic-exporter: Deploy Restic Prometheus exporter
The [restic-exporter][0] exposes metrics about Restic snapshots as
Prometheus metrics.  This allows us to get similar data as we have for
BURP backups.  Chief among these metrics are the last backup time
and size, which we can use to determine if backups are working
correctly.

[0]: https://github.com/ngosang/restic-exporter
2024-06-26 18:29:49 -05:00
ebcf9e3d42 authelia: Scale up to 2 replicas
Since Authelia is stateless, we can run a second instance to improve
availability.
2024-06-26 18:29:49 -05:00
21e8ad2afd home-assistant: Add commands to control photoframe
The digital photo frame in the kitchen is powered by a server service,
which exposes a minimal HTTP API.  Using this API, we can e.g. advance
or backtrack the displayed photo.  Exposing `rest_command` services
for these operations allows us to add buttons to dashboards to control
the frame.
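
A sketch of such a service definition in the Home Assistant
configuration (the URL and endpoints are illustrative, not the photo
frame's actual API):

    rest_command:
      photoframe_next:
        url: http://photoframe.pyrocufflink.blue:8080/next   # illustrative endpoint
        method: post
      photoframe_previous:
        url: http://photoframe.pyrocufflink.blue:8080/previous
        method: post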
2024-06-26 18:29:49 -05:00
1c4b32925e v-m: Use dynamic discovery for some collectd nodes
We don't need to explicitly specify every single host individually.
Domain controllers, for example, are registered in DNS with SRV records.
Kubernetes nodes, of course, can be discovered using the Kubernetes API.
Both of these classes of nodes change frequently, so discovering them
dynamically is convenient.
2024-06-26 18:29:49 -05:00
98651cf9d9 jenkins: Force iSCSI volume on specific nodes
Instead of routing iSCSI traffic from the Kubernetes network, through
the firewall, to the storage network, nodes now have a second network
adapter connected directly to the storage network.  The nodes with
such an adapter are labelled `network.du5t1n.me/storage`, so we can pin
the Jenkins PersistentVolume to them via a node affinity rule.
2024-06-26 18:29:49 -05:00
a2225e583e paperless-ngx: Use volume claim template for redis
Using a volume claim template to define the persistent volume claim for
the Redis pod has two advantages: first, it enables using clustered
Redis, if we decide that becomes necessary, and second, it makes
deleting and recreating the volume easier in the case of data
corruption.  Simply scale down the StatefulSet to 0, delete the PVC, and
scale the StatefulSet back up.
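
The pattern is just a claim template on the Redis StatefulSet; a sketch
(size is arbitrary, other StatefulSet fields omitted):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: redis
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 1Gi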
2024-06-26 18:29:49 -05:00
02c88700f7 firefly-iii: Use volume claim template for redis
Using a volume claim template to define the persistent volume claim for
the Redis pod has two advantages: first, it enables using clustered
Redis, if we decide that becomes necessary, and second, it makes
deleting and recreating the volume easier in the case of data
corruption.  Simply scale down the StatefulSet to 0, delete the PVC, and
scale the StatefulSet back up.
2024-06-26 18:29:49 -05:00
2ce1821667 step-ca: Allow longer validity for ACME certificates
By default, step-ca issues certificates that are valid for only one day.
This means that clients need to have multiple renew attempts scheduled
throughout the day, otherwise, missing one could mean having their
certificates expire.  This is unnecessary, and not even possible in all
cases, so let's make the default validity period longer and avoid the
issue.
2024-06-26 18:29:49 -05:00
858bad55ca grafana: Trust dch-root-ca for LDAP connections
The LDAP servers now use certificates signed by _DCH CA R2_, so the
_DCH Root CA R2_ CA needs to be trusted in order to communicate with
them.
2024-06-26 18:29:49 -05:00
e71156bcec authelia: Mount dch-root-ca
The LDAP servers now use certificates signed by _DCH CA R2_, so the
_DCH Root CA R2_ CA needs to be trusted in order to communicate with
them.
2024-06-26 18:29:49 -05:00
b8015c0bed v-m: blackbox: Force TCP probe to IPv4
Since I added an IPv6 ULA prefix to the "main" VLAN (to allow
communicating with the Synology directly), the domain controllers now
have AAAA records.  This causes the `sambadc` scrape job to fail because
Blackbox Exporter prefers IPv6 by default, but Kubernetes pods do not
have IPv6 addresses.
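
The fix is the protocol preference in the Blackbox Exporter module; a
sketch (the module name is just an example):

    modules:
      tcp_connect:
        prober: tcp
        tcp:
          preferred_ip_protocol: ip4   # ignore the AAAA records
          ip_protocol_fallback: false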
2024-06-26 18:29:49 -05:00
7f3287297b jenkins: Migrate to iSCSI persistent volume
Managing the Jenkins volume with Longhorn has become increasingly
problematic.  Because of its large size, whenever Longhorn needs to
rebuild/replicate it (which happens often for no apparent reason), it
can take several hours.  While the synchronization is happening, the
entire cluster suffers from degraded performance.

Instead of using Longhorn, I've decided to try storing the data directly
on the Synology NAS and expose it to Kubernetes via iSCSI.  The Synology
offers many of the same features as Longhorn, including
snapshots/rollbacks and backups.  Using the NAS allows the volume to be
available to any Kubernetes node, without keeping multiple copies of
the data.

In order to expose the iSCSI service on the NAS to the Kubernetes nodes,
I had to make the storage VLAN routable.  I kept it as IPv6-only,
though, as an extra precaution against unauthorized access.  The
firewall only allows nodes on the Kubernetes network to access the NAS
via iSCSI.

I originally tried proxying the iSCSI connection via the VM hosts,
however, this failed because of how iSCSI target discovery works.  The
provided "target host" is really only used to identify available LUNs;
follow-up communication is done with the IP address returned by the
discovery process.  Since the NAS would return its IP address, which
differed from the proxy address, the connection would fail.  Thus, I
resorted to reconfiguring the storage network and connecting directly
to the NAS.

To migrate the contents of the volume, I temporarily created a PVC with
a different name and bound it to the iSCSI PersistentVolume.  Using a
pod with both the original PVC and the new PVC mounted, I used `rsync`
to copy the data.  Once the copy completed, I deleted the Pod and both
PVCs, then created a new PVC with the original name (i.e. `jenkins`),
bound to the iSCSI PV.  While doing this, Longhorn, for some reason,
kept re-creating the PVC whenever I would delete it, no matter how I
requested the deletion.  Whether I deleted the PV, the PVC, or the
Volume, via either the Kubernetes API or the Longhorn UI, it would be
recreated almost immediately.  Fortunately, there was actually enough of
a delay after deleting it before Longhorn would recreate it that I was
able to create the new PVC manually.  Once I did that, Longhorn seemed
to give up.
2024-06-23 09:53:15 -05:00
c3c9c0c555 kitchen: Run as non-root user
The *kitchen* server service does not need to run as root or have any
access to writable storage.
2024-06-06 11:03:42 -05:00
b4d6dfeb07 kitchen: Re-enable graceful shutdown timeout
Version 0.5.1 fixes the issue with `uvicorn` hanging on shutdown because
of the WebSocket message queue.
2024-06-06 10:09:37 -05:00
7b8b11111e kitchen: Updates for v0.5
Kitchen v0.5 includes a few changes that affect the deployment:

* The Bored Board is now backed by MQTT
* The pool temperature is now displayed in the weather pane
* The container image is now based on Fedora and includes its own time
  zone database and root CA bundle
* The websocket server prevents the process from stopping correctly
  unless the graceful shutdown feature of `uvicorn` is disabled
2024-06-05 22:04:55 -05:00
48f20eac07 v-m: Scrape metrics from fleetlock 2024-05-31 15:18:55 -05:00
fc66058251 fleetlock: Deploy Zincati fleet lock manager
[fleetlock] is an implementation of the Zincati FleetLock reboot
coordination protocol.  It only works for machines that are Kubernetes
nodes, but it does enable safe rolling updates for those machines.
Specifically, when a node acquires a lock (backed by a Kubernetes
Lease), it cordons that node and evicts pods from it.  After the node
has rebooted into the new version of Fedora CoreOS, it uncordons the
node and releases the lock.

[fleetlock]: https://github.com/poseidon/fleetlock
2024-05-31 15:18:01 -05:00
365334cea7 xactfetch: Provide Vaultwarden password for sync
Vaultwarden has started prompting for the master password occasionally
when syncing the vault.  Thus, we need to make sure it is available in
the _sync_ container, by mounting the secret and providing the
`PINENTRY_PASSWORD_FILE` environment variable.
2024-05-29 09:36:30 -05:00
8939c1d02c v-m/scrape: Scrape unifi2.p.b
*unifi2.pyrocufflink.blue* is a Fedora CoreOS host, so it runs
*collectd*, *Promtail*, and *Zincati*.
2024-05-26 11:48:59 -05:00
61bfd8ff1a keyserv: Add age keys for unifi2
This key encrypts the password for *unifi_exporter* to connect to Unifi
Network.
2024-05-26 11:48:12 -05:00
3b74c3d508 v-m: Scrape metrics from Paperless-ngx Flower 2024-05-22 15:51:07 -05:00
f83783fd58 paperless-ngx: Enable Flower
Flower is the monitoring agent for Celery.  It has a web UI, but more
importantly, it exposes Celery performance metrics in Prometheus format.
2024-05-22 15:50:32 -05:00
d5bfdaca25 v-m/alertmanager-ntfy: Add labels to notifications
Just having the alert name and group name in the ntfy notification is
not enough to really indicate what the problem is, as some alerts can
generate notifications for many reasons.  In the email notifications
AlertManager sends by default, the values (but not the keys) of all
labels are included in the subject, so we will reproduce that here.
2024-05-22 15:20:27 -05:00
aedd4df9f6 sshca: Add machine ID for Toad 2024-05-22 15:20:09 -05:00
d74e26d527 victoria-metrics: Send alerts via ntfy
I don't like having alerts sent by e-mail.  Since I don't get e-mail
notifications on my watch, I often do not see alerts for quite some
time.  They are also much harder to read in an e-mail client (Fastmail
web and K-9 Mail both display them poorly).  I would much rather have
them delivered via _ntfy_, just like all the rest of the ephemeral
notifications I receive.

Fortunately, it is easy enough to integrate Alertmanager and _ntfy_
using the webhook notifier in Alertmanager.  Since _ntfy_ does not
natively support the Alertmanager webhook API, though, a bridge is
necessary to translate from one data format to the other.  There are a
few options for this bridge, but I chose
[alexbakker/alertmanager-ntfy][0] because it looked the most complete
while also having the simplest configuration format.  Sadly, it does not
expose any Prometheus metrics itself, and since it's deployed in the
_victoria-metrics_ namespace, it needs to be explicitly excluded from
the VMAgent scrape configuration.

[0]: https://github.com/alexbakker/alertmanager-ntfy
2024-05-10 10:32:52 -05:00
a4591950ba home-assistant: Add time-to-go timer to watch view
This way I can start the "time to go" timer from my watch as soon as
Brandon says he's leaving work.
2024-05-10 09:24:34 -05:00
ab916640cb home-assistant: Re-enable 17track sensor 2024-05-10 09:24:02 -05:00
7618bdcae6 firefly-iii: Replace importer access token
The access token the Firefly III Importer service uses to communicate
with Firefly III expired and needs to be replaced.
2024-05-10 09:23:04 -05:00
ebea31fe55 v-m: alerts: Add alert for camera offline 2024-04-23 09:42:04 -05:00
c2417b7960 authelia: Fix Jenkins OIDC client
Authelia 4.38 introduced a change that broke logging in to Jenkins with
OIDC.  This setting is required to fix it.
2024-04-10 21:26:00 -05:00
1581a620ef v-m/scrape: Scrape nvr2.p.b
*nvr2.pyrocufflink.blue* has replaced *nvr1.pyrocufflink.blue* as the
Frigate/recording server.
2024-04-10 21:25:26 -05:00
c2b595d3e2 keyserv: Add age key for nvr2/NUT monitor 2024-04-06 10:06:30 -05:00
31b0b081a3 keyserv: Add key for Frigate/nvr2 2024-04-05 14:12:08 -05:00
3ba83373f3 step-ca: Re-deploy (again) with DCH CA R2
Although most libraries support ED25519 signatures for X.509
certificates, Firefox does not.  This means that any certificate signed
by DCH CA R3 cannot be verified by the browser and thus will always
present a certificate error.

I want to migrate internal services that do not need certificates
that are trusted by default (i.e. they are only accessed programmatically
or only I use them in the browser) back to using an internal CA instead
of the public *pyrocufflink.net* wildcard certificate.  For applications
like Frigate and UniFi Network, these need to be signed by a CA that
the browser will trust, so the ED25519 certificate is inappropriate.
Thus, I've decided to migrate back to DCH CA R2, which uses an ECDSA
signature, and can therefore be trusted by Firefox, etc.
2024-04-05 13:03:34 -05:00
5c34fdb1c6 sshca: Add Machine UUID for nvr2.p.b 2024-04-05 12:26:51 -05:00
680709e670 authelia: Add auth rule for HLC forms submit
The *hlcforms* application handles form submissions for the Hatch
Learning Center website.  It has various features for Tabitha that are
only accessible internally, but the form submission handler itself of
course needs to be accessible anonymously.
2024-03-25 08:43:55 -05:00
c7223ff4fd authelia: Enable dark theme
A recent version of *Authelia* added a dark theme.  Setting the `theme`
option to `auto` enables it when the user agent has the "prefers dark
mode" hint enabled.
2024-02-27 06:51:14 -06:00
de72776e73 v-m: Scrape metrics from Authelia
Authelia exposes Prometheus metrics from a different server socket,
which is not enabled by default.
2024-02-27 06:41:52 -06:00
e0b2b3f5ae v-m: Scrape metrics from Patroni
Patroni, a component of the *postgres operator*, exports metrics about
the PostgreSQL database servers it manages.  Notably, it provides
information about the current transaction log location for each server.
This allows us to monitor and alert on the health of database replicas.
2024-02-24 08:33:52 -06:00
2442835edd autoscaler: Add SealedSecret for AWS key 2024-02-22 09:59:16 -06:00
83eeb46c93 v-m: Scrape Argo CD
*Argo CD* exposes metrics about itself and the applications it manages.
Notably, this can be useful for monitoring application health.
2024-02-22 07:10:01 -06:00
465f121e61 v-m: Scrape Promtail
The *promtail* job scrapes metrics from all the hosts running Promtail.
The static targets are Fedora CoreOS nodes that are not part of the
Kubernetes cluster.

The relabeling rules ensure that both the static targets and the
targets discovered via the Kubernetes Node API use the FQDN of the host
as the value of the *instance* label.
2024-02-22 07:10:01 -06:00
815eefdcf9 promtail: Deploy as DaemonSet
Running Promtail in a pod controlled by a DaemonSet allows it to access
the Kubernetes API via a ServiceAccount token.  Since it needs the API
in order to discover the Pods running on the current node in order to
find their log files, this makes the authentication process a lot
simpler.
2024-02-22 07:10:01 -06:00
5e4ab1d988 v-m: Update Loki scrape target
Now that Loki uses Caddy as a reverse proxy, we need to update the
scrape target to point to the correct port (443).
2024-02-22 07:10:01 -06:00
f468977d91 grafana: Enable send_user_header option
I discovered today that if anonymous Grafana users have Viewer
permission, they can use the Datasource API to make arbitrary queries
to any backend, even if they cannot access the Explore page directly.
This is documented ([issue #48313][0]) as expected behavior.

I don't really mind giving anonymous access to the Victoria Metrics
datasource, but I definitely don't want anonymous users to be able to
make Loki queries and view log data.  Since Grafana Datasource
Permissions is limited to Grafana Enterprise and not available in
the open source version of Grafana, the official recommendation from
upstream is to use a separate Organization for the Loki datasource.
Unfortunately, this would preclude having dashboards that have graphs
from both data sources.  Although I don't have any of those right now, I
like the idea and may build some eventually.

Fortunately, I discovered the `send_user_header` Grafana configuration
option.  With this enabled, Grafana will send an `X-Grafana-User` header
with the username of the user on whose behalf it is making a request to
the backend.  If the user is not logged in, it does not send the header.
Thus, we can detect the presence of this header on the backend and
refuse to serve query requests if it is missing.

[0]: https://github.com/grafana/grafana/issues/48313
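
In the Kubernetes deployment this can be flipped on with Grafana's
environment-variable convention (`GF_<SECTION>_<KEY>`); a sketch:

    containers:
      - name: grafana
        image: docker.io/grafana/grafana
        env:
          # equivalent to send_user_header = true in the [dataproxy] section
          - name: GF_DATAPROXY_SEND_USER_HEADER
            value: "true"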
2024-02-22 07:10:01 -06:00
35ff500812 grafana: Configure Loki datastore
Usually, Grafana datastores are configured using its web GUI.  When
setting up a datastore that requires TLS client authentication, the
client certificate and private key have to be pasted into the form.
For certificates that renew frequently, this method would require a
frequent manual effort.  Fortunately, Grafana supports defining
datastores via its "provisioning" mechanism, reading the configuration
from YAML files on the filesystem.
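
A sketch of the provisioned data source definition (the URL and file
paths are placeholders; `$__file{}` is one way to read the mounted
certificate files instead of pasting their contents):

    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        url: https://loki0.pyrocufflink.blue
        jsonData:
          tlsAuth: true
        secureJsonData:
          tlsClientCert: $__file{/run/grafana/certs/loki/tls.crt}
          tlsClientKey: $__file{/run/grafana/certs/loki/tls.key}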
2024-02-22 07:10:01 -06:00
d4efb735bf loki-ca: Add cert-manager issuer for Loki CA
The Loki CA is used to issue client certificates for Grafana Loki.  This
_cert-manager_ ClusterIssuer will allow applications running in
Kubernetes (e.g. Grafana) to request a Certificate that they can use to
access the Loki HTTP API.
2024-02-22 07:10:01 -06:00
d08cc6fb0f step-ca: Redeploy with DCH CA R3
I never ended up using _Step CA_ for anything, since I was initially
focused on the SSH CA feature and I was unhappy with how it worked
(which led me to write _SSHCA_).  I didn't think about it much until I
was working on deploying Grafana Loki.  For that project, I wanted to
use a certificate signed by a private CA instead of the wildcard
certificate for _pyrocufflink.blue_.  So, I created *DCH CA R3* for that
purpose.  Then, for some reason, I used the exact same procedure to
fetch the certificate from Kubernetes as I had set up for the
_pyrocufflink.blue_ wildcard certificate, as used by Frigate.  This of
course defeated the purpose, since I could have just as easily used
the wildcard certificate in that case.

When I discovered that Grafana Loki expects to be deployed behind a
reverse proxy in order to implement access control, I took the
opportunity to reevaluate the certificate issuance process.  Since a
reverse proxy is required to implement the access control I want (anyone
can push logs but only authenticated users can query them), it made
sense to choose one with native support for requesting certificates via
ACME.  This would eliminate the need for `fetchcert` and the
corresponding Kubernetes API token.  Thus, I ended up deciding to
redeploy _Step CA_ with the new _DCH CA R3_ for this purpose.
2024-02-22 07:10:01 -06:00
4c238a69aa v-m: Scrape Grafana Loki
Grafana Loki is hosted on a VM named *loki0.pyrocufflink.blue*.  It runs
Fedora CoreOS, so in addition to scraping Loki itself, we need to scrape
_collectd_ and _Zincati_ as well.
2024-02-21 09:16:26 -06:00
1777262c15 dch-root-ca: Update to DCH Root CA R3
Since I shut down _step-ca_, nothing uses _DCH Root CA R2_ anymore.
I've created a new CA using ED25519 key pairs, named _DCH Root CA R3_.
2024-02-21 09:16:26 -06:00
1d2b5260bb keyserv: Add age key for loki0
This key is used to encrypt the Kubernetes access token for `fetchcert`,
which downloads the certificate for Grafana Loki HTTPS.
2024-02-21 09:16:26 -06:00
96928a2611 kitchen: Fix weather metrics API URI
Apparently, I never bothered to check that the Kitchen HUD server was
actually fetching data from Victoria Metrics when I updated it before; I
only verified that the Unauthorized errors in the `vmselect` log
went away.  They did, but only because now the Kitchen server was
failing to contact `vmselect` at all.
2024-02-21 08:01:35 -06:00
2acefd9a72 v-m: Add alert for sensor battery levels
I did not realize the batteries on the garage door tilt sensors had
died.  Adding alerts for various sensor batteries should help keep me
better informed.
2024-02-16 20:56:38 -06:00
9784b90743 cert-manager: Remove unused secrets
These secrets were used by previous issuers/solvers and are no longer
needed.
2024-02-16 20:56:08 -06:00
0ad63e0613 authelia: Allow anonymous access to AlertManager
Sometimes, I want to be able to look at active alerts without logging
in.  This rule allows read-only access to the AlertManager UI and API.
Unfortunately, the user experience when attempting to create a new
Silence using the UI without first logging in is suboptimal, but I think
that's worth the trade-off.
2024-02-16 20:41:47 -06:00
2f6c358860 invoice-ninja: Update PVC for restored backup
The Longhorn volume for the *invoice-ninja* PVC got into a strange state
following an unexpected shutdown this morning.  One of its replicas
seemed to have disappeared, and it also thought that the size had
changed.  As such, it got stuck in "expanding" state, but it was not
actually being expanded.  This issue is described in detail in the
Longhorn documentation: [Troubleshooting: Unexpected expansion leads to
degradation or attach failure][0].  Unfortunately, there is no way to
recover a volume from that state, and it must be deleted and recreated
from backup.  This changes some of the properties of the PVC, so they
need to be updated in the manifest.

[0]: https://longhorn.io/kb/troubleshooting-unexpected-expansion-leads-to-degradation-or-attach-failure/
2024-02-15 09:45:57 -06:00
80df160ceb device-plugins: Allow FUSE plugin on Jenkins nodes
Jenkins jobs that build container images need access to `/dev/fuse`.
Thus, we have to allow Pods managed by the *fuse-device-plugin*
DaemonSet to be scheduled on nodes that are tainted for use exclusively
by Jenkins jobs.
2024-02-13 07:56:35 -06:00
33fa951c68 Merge remote-tracking branch 'refs/remotes/origin/master' 2024-02-03 09:52:39 -06:00
a395d176bc sshca: Set group principals for Server Admins
Members of the *Server Admins* group need to be able to log in to
machines using their respective privileged accounts for e.g.
provisioning or emergencies.
2024-02-02 21:02:40 -06:00
1f28a623ae v-m: Do not scrape/alert on Graylog
Graylog is down because Elasticsearch corrupted itself again, and this
time, I'm just not going to bother fixing it.  I practically never use
it anymore anyway, and I want to migrate to Grafana Loki, so now seems
like a good time to just get rid of it.
2024-02-01 21:45:43 -06:00
380af211ec authelia: Reduce log level 2024-02-01 21:36:27 -06:00
94300ac502 kitchen: Use SealedSecret template for config
The configuration file for the kitchen HUD server has credentials
embedded in it.  Until I get around to refactoring it to read these from
separate locations, we'll make use of the template feature of
SealedSecrets.  With this feature, fields can refer to the (decrypted)
value of other fields using Go template syntax.  This makes it possible
to have most of the `config.yaml` document unencrypted and easily
modifiable, while still protecting the secrets.
2024-02-01 21:18:46 -06:00
baab02217e authelia: Remove rule for Paperless-ngx API
I don't like the [Paperless Mobile][0] app well enough to remove the MFA
restriction for the Paperless-ngx API.

[0]: https://github.com/astubenbord/paperless-mobile
2024-02-01 21:17:46 -06:00
2cd4a8b097 sshca: Configure user CA
SSHCA now supports issuing user certificates.  It uses OpenID Connect to
authenticate requests, and issues certificates based on the user's ID
token.
2024-02-01 09:02:11 -06:00
834d0f804f v-m: Scrape Grafana
Grafana exports Prometheus metrics about its own performance.
2024-02-01 09:02:01 -06:00
3439ce1f13 grafana: Deploy Grafana
Now that Victoria Metrics is hosted in Kubernetes, it only makes sense
to host Grafana there as well.  I chose to use a single-instance
deployment for simplicity; I don't really need high availability for
Grafana.  Its configuration does not change enough to worry about the
downtime associated with restarting it.  Migrating the existing data
from SQLite to PostgreSQL, while possible, is just not worth the hassle.
2024-01-27 22:01:08 -06:00
4e15a9d71d invoice-ninja: Deploy Invoice Ninja
Invoice Ninja is a small business management tool.  Tabitha wants to
use it for HLC.

I am a bit concerned about the code quality of this application, and
definitely alarmed at the data it sends upstream, so I have tried to be
extra careful with it.  All privileges are revoked, including access to
the Internet.
2024-01-27 21:11:26 -06:00
a5d186b461 sshca: Add update-machine-ids script
The `update-machine-ids.sh` shell script helps update the `sshca-data`
SealedSecret with the current contents of the `machine-ids.json` file
(stored locally, not tracked in Git).
2024-01-25 20:42:47 -06:00
8ae8bad112 v-m: Scrape serial1.p.b 2024-01-25 20:42:07 -06:00
7eae328a2c sshca: Add machine ID for serial1.p.b 2024-01-25 20:41:54 -06:00
9fff21aae1 h-a: Remove roomba_is_downstairs template sensor
This sensor is now provided by a [Threshold][0] helper.

[0]: https://www.home-assistant.io/integrations/threshold/
2024-01-25 17:31:36 -06:00
8bb8ed4402 xactfetch: Additional mounts for rbw sync
In order to sync the Bitwarden vault, `rbw` needs its configuration file
in `/etc/rbw` and access to writable ephemeral storage at `/tmp`.
2024-01-24 12:00:13 -06:00
ad37948fe2 v-m: Scrape all metrics components
We are now getting metrics from *vmstorage*, *vminsert*, *vmselect*,
*vmalert*, *alertmanager*, and *blackbox-exporter*, in addition to
*vmagent*.
2024-01-23 11:51:50 -06:00
bcb588407d v-m: Correct vmalert remote read/write URLs
*vmalert* has been generating alerts and triggering notifications, but
not writing any `ALERTS`/`ALERTS_FOR_STATE` metrics.  It turns out this
is because I had not correctly configured the remote read/write
URLs.
2024-01-23 10:45:40 -06:00
9a76a548ec argocd/app: jenkins: Enable auto sync
We're going to try out automatically synchronizing the Jenkins resources
when changes are pushed to Git.
2024-01-22 18:50:41 -06:00
119a8a74ae v-m: alerts: Enhance Frigate unavailable alert
If Frigate is running but not connected to the MQTT broker, the
`sensor.frigate_status` entity will be available, but the
`update.frigate_server` entity will not.
2024-01-22 18:27:30 -06:00
20ef2a287b jenkins: Update to 2.426.2 2024-01-22 18:01:03 -06:00
150 changed files with 5249 additions and 5886 deletions

View File

@@ -0,0 +1,13 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana
namespace: argocd
spec:
destination:
server: https://kubernetes.default.svc
project: default
source:
path: grafana
repoURL: https://git.pyrocufflink.blue/infra/kubernetes.git
targetRevision: master

View File

@@ -0,0 +1,13 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: invoice-ninja
namespace: argocd
spec:
destination:
server: https://kubernetes.default.svc
project: default
source:
path: invoice-ninja
repoURL: https://git.pyrocufflink.blue/infra/kubernetes.git
targetRevision: master

View File

@@ -11,3 +11,7 @@ spec:
path: jenkins
repoURL: https://git.pyrocufflink.blue/infra/kubernetes.git
targetRevision: master
syncPolicy:
automated:
prune: true
selfHeal: true

View File

@@ -0,0 +1,13 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: step-ca
namespace: argocd
spec:
destination:
server: https://kubernetes.default.svc
project: default
source:
path: step-ca
repoURL: https://git.pyrocufflink.blue/infra/kubernetes.git
targetRevision: master

View File

@@ -66,6 +66,13 @@ spec:
value: /run/authelia/secrets/oidc.hmac_secret
- name: AUTHELIA_IDENTITY_PROVIDERS_OIDC_ISSUER_PRIVATE_KEY_FILE
value: /run/authelia/secrets/oidc.issuer_private_key
ports:
- containerPort: 9091
name: http
protocol: TCP
- containerPort: 9959
name: metrics
protocol: TCP
startupProbe:
httpGet:
port: 9091

View File

@@ -6,10 +6,6 @@ access_control:
- 172.30.0.0/26
- 172.31.1.0/24
rules:
- domain: paperless.pyrocufflink.blue
resources:
- '^/api/'
policy: bypass
- domain: paperless.pyrocufflink.blue
policy: two_factor
subject:
@@ -40,6 +36,20 @@ access_control:
networks:
- internal
policy: bypass
- domain: metrics.pyrocufflink.blue
networks:
- internal
resources:
- '^/alertmanager([/?].*)?$'
methods:
- GET
- HEAD
- OPTIONS
policy: bypass
- domain: hlcforms.pyrocufflink.blue
resources:
- '^/submit/.*'
policy: bypass
authentication_backend:
ldap:
@@ -69,6 +79,7 @@ identity_providers:
- offline_access
authorization_policy: one_factor
pre_configured_consent_duration: 8h
token_endpoint_auth_method: client_secret_post
- id: kubernetes
description: Kubernetes
public: true
@@ -110,9 +121,20 @@ identity_providers:
- email
- groups
- offline_access
- id: sshca
description: SSHCA
public: true
pre_configured_consent_duration: 4h
redirect_uris:
- http://127.0.0.1
scopes:
- openid
- profile
- email
- groups
log:
level: trace
level: info
notifier:
smtp:
@@ -135,8 +157,15 @@ server:
storage:
postgres:
host: default.postgresql
host: postgresql.pyrocufflink.blue
database: authelia
username: authelia.authelia
username: authelia
password: unused
tls:
skip_verify: false
telemetry:
metrics:
enabled: true
theme: auto

View File

@@ -1,25 +1,29 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: authelia
labels:
- pairs:
app.kubernetes.io/instance: authelia
resources:
- ../dch-root-ca
- secrets.yaml
- redis.yaml
- authelia.yaml
- oidc-cluster-admin.yaml
- postgres-cert.yaml
replicas:
- name: authelia
count: 2
configMapGenerator:
- name: authelia
namespace: authelia
files:
- configuration.yml
- name: postgresql-ca
namespace: authelia
files:
- postgresql-ca.crt
patches:
- patch: |-
@@ -34,17 +38,20 @@ patches:
containers:
- name: authelia
env:
- name: AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE
value: /run/authelia/secrets/postgresql/password
- name: AUTHELIA_STORAGE_POSTGRES_TLS_CERTIFICATE_CHAIN_FILE
value: /run/authelia/certs/postgresql/tls.crt
- name: AUTHELIA_STORAGE_POSTGRES_TLS_PRIVATE_KEY_FILE
value: /run/authelia/certs/postgresql/tls.key
volumeMounts:
- mountPath: /run/authelia/certs
name: postgresql-ca
- mountPath: /run/authelia/secrets/postgresql
name: postgresql-auth
- mountPath: /run/authelia/certs/dch-root-ca.crt
name: dch-root-ca
subPath: dch-root-ca.crt
- mountPath: /run/authelia/certs/postgresql
name: postgresql-cert
volumes:
- name: postgresql-auth
- name: postgresql-cert
secret:
secretName: authelia.authelia.default.credentials.postgresql.acid.zalan.do
- name: postgresql-ca
secretName: postgres-client-cert
- name: dch-root-ca
configMap:
name: postgresql-ca
name: dch-root-ca

View File

@@ -0,0 +1,12 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: postgres-client-cert
spec:
commonName: authelia
privateKey:
algorithm: ECDSA
secretName: postgres-client-cert
issuerRef:
name: postgresql-ca
kind: ClusterIssuer

View File

@@ -3,6 +3,7 @@ kind: Kustomization
resources:
- https://github.com/kubernetes/autoscaler/raw/cluster-autoscaler-release-1.26/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
- secrets.yaml
images:
- name: k8s.gcr.io/autoscaling/cluster-autoscaler

autoscaler/secrets.yaml (new file, 16 lines)
View File

@@ -0,0 +1,16 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: autoscaler-aws-keys
namespace: kube-system
spec:
encryptedData:
access_key_id: AgA8WLIutrqtizbW/gRNjUaTV9AebXviLX0ffhvRTKVQnPbmjWYUOEyqI6inXmNmxIE342+U0oDtYs878+yHYIxfRAcUi6FRIKPpbUtICzgaudHnjYZaT8gpp20M/ovijkbHTSZMO7snxB72CAa0uzfs+NIY3ky5kDsBDEQKhUix7kQ5Zn75mIBEZhV/W1n5tPQ80k0rykcGt174VPTOtKWV9pIqVJxkw3xZE4vrxB6Cb5S7FfSft5te2vWld8oKE6wbCgEbRVROQ+Q3NfvY8I1jTzZT1eSIHuYM9OmS8NL1S9DOLl/6Pin4jLoBJEaIHOT5abOQLtvQUSuuOvbdbePHaoABBUG+TSXNZnM9GlF+an461ARxHZi51TtjcAHp9063ClTICiEuNT5VoGyfH6Z67MGVtox5DmOo28mgPE1OXALmC3Z3QV/uSIyulTrkV/VDTvp7au21m1nEd54/pzLXUtn78hwv8rnJ9gJGsgq9ovM42Yyo964zN5oBvZkzkIGLPqjAJUEYtXwvOSUprrmyWnJ7bdFZMvx2yGT0S3Hdwt2o6svuPMhXKVI9Ykd122hA14n1/UpimnBq7nAy3EQmCPTAQOh5ufCjqUG3z722aY1KDPDZA+cL8XfrI7JRae+gH0zrCxjKMCyibdz8MHd0ca2n/t42NVbPO0AptY1OKoDK2byUwuAXZl+e9aE302y5Y4ZNiJu+yhaAHZ1gtiDp07eLKA==
secret_access_key: AgAkFztvEEVWpioxcnNJ7b077AzyJ5IMtgKn0nVa+tMzEYWzuWe45G2MuPwajARj5Ji8WH4gwzcBwJOBfuDMmBz7GeodoZJ2tVcbcNg/5dZp5LA9IU3WqUMGIf0lMMnlOaxIxm1Zy+stJM7lbNabA9Nh+NXq4BpcGj+fUevYodhJpLyP7gqKSLZlvsfXVxX8O9XxADUMb1NrAYBx+0J19lh8WkJe2s9oQzpJND6pj3dUlb8UbBdg6uD4CSlORcSW1WdqQz9WW/clt0eBO1hlgVC6me7GlWtAqm88+1+sBlmT7SrCzbP0Ky7w2xz9L6Y2I9k65c2yCwkPrfh6CiIXltjPZEtvL+gzIIvXNIO1XUX4FlcSu+AartVPyDkAuA0TsMEuaORo0C9HnxSYm4fHRaDe2HZWwXCLXXyW1xZxfy0le1pr9zUNcx5HFjR7XJ6E3seirIyk8B9CnqDY/Ff29PQzDjv2k50UiSXHLIpwbZ5G2nqYzkOG2MRhjggiYKh7VPpKTwQUebVyFsdiLaAFcWr8BrLwXXcbOeEpHRnsZlCCqXM1uN4H3Am0RuRc12V2pYWHP/q53sSfYYBDsXFHOXr6e3iZ/c95GI/ndjaBqk1EtV7go4wn5sZaZvDmQktYalNKYk4EZLzAsgj7PdOeS5SDa2ZnQud4Om7a2MRoayntg8pyCeLfvV6G5CwuUh/kFZVn+2v2OTabC+6HMde4Yq1MMrFD+qOKGywHMG8HvZieHCzi4ZnnT3Wt
template:
metadata:
creationTimestamp: null
name: autoscaler-aws-keys
namespace: kube-system

File diff suppressed because it is too large

View File

@@ -0,0 +1,17 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: dch-ca
spec:
acme:
server: https://ca.pyrocufflink.blue:32599/acme/acme/directory
email: cert-manager@pyrocufflink.net
privateKeySecretRef:
name: dch-ca-acme
caBundle:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJ4RENDQVdxZ0F3SUJBZ0lVYkh6MnRzc2EwOXpzSGsrRWRHRDNRS3ByTUtRd0NnWUlLb1pJemowRUF3UXcKUURFTE1Ba0dBMVVFQmhNQ1ZWTXhHREFXQmdOVkJBb01EMFIxYzNScGJpQkRMaUJJWVhSamFERVhNQlVHQTFVRQpBd3dPUkVOSUlGSnZiM1FnUTBFZ1VqSXdIaGNOTWpNd09USTBNakExTXpBNVdoY05ORE13T1RFNU1qQTFNekE1CldqQkFNUXN3Q1FZRFZRUUdFd0pWVXpFWU1CWUdBMVVFQ2d3UFJIVnpkR2x1SUVNdUlFaGhkR05vTVJjd0ZRWUQKVlFRRERBNUVRMGdnVW05dmRDQkRRU0JTTWpCWk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQkUyRApOSkhSY2p1QTE5Wm9wckJLYXhJZlV4QWJ6NkxpZ003ZGd0TzYraXNhTWx4UkFWSm1zSVRBRElFLzIyUnJVRGdECk9ma3QyaVpUVWpNcnozQXhYaFdqUWpCQU1CMEdBMVVkRGdRV0JCVE0rZDhrYjFrb0dtS1J0SnM0Z045ellhKzYKb1RBU0JnTlZIUk1CQWY4RUNEQUdBUUgvQWdFQk1Bc0dBMVVkRHdRRUF3SUJCakFLQmdncWhrak9QUVFEQkFOSQpBREJGQWlFQTJLYThtTWlBRkxtckZXdDBkQW1sMjQ3cmUyK2k0VVBoeUhjT0JmTksrZ29DSUh2K3ZFdzdDSFpRCmlySWE2OTduZmU0S2lYSU13SGxBTVMxKzFRWm9oRkRDCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
solvers:
- http01:
ingress:
ingressClassName: nginx

View File

@@ -2,19 +2,13 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cert-manager.yaml
- https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml
- cluster-issuer.yaml
- certificates.yaml
- cert-exporter.yaml
- dch-ca-issuer.yaml
secretGenerator:
- name: cert-manager-tsig
namespace: cert-manager
files:
- cert-manager.key
options:
disableNameSuffixHash: true
- name: zerossl-eab
namespace: cert-manager
envs:
@@ -28,16 +22,24 @@ secretGenerator:
- cert-exporter.pem
- ssh_known_hosts
- name: acme-dns
namespace: cert-manager
files:
- acme-dns.json
options:
disableNameSuffixHash: true
- name: cloudflare
namespace: cert-manager
files:
- cloudflare.api-token
options:
disableNameSuffixHash: true
patches:
- patch: |
apiVersion: apps/v1
kind: Deployment
metadata:
name: cert-manager
namespace: cert-manager
spec:
template:
spec:
dnsConfig:
nameservers:
- 172.30.0.1
dnsPolicy: None

View File

@@ -0,0 +1,10 @@
LoadPlugin df
<Plugin df>
ReportByDevice true
FSType autofs
FSType overlay
FSType efivarfs
IgnoreSelected true
</Plugin>

View File

@@ -0,0 +1,8 @@
LoadPlugin logfile
<Plugin logfile>
LogLevel info
File stderr
Timestamp false
PrintSeverity true
</Plugin>

View File

@@ -0,0 +1,9 @@
LoadPlugin chrony
LoadPlugin cpufreq
LoadPlugin disk
LoadPlugin entropy
LoadPlugin processes
LoadPlugin swap
LoadPlugin tcpconns
LoadPlugin thermal
LoadPlugin uptime

View File

@@ -0,0 +1,5 @@
LoadPlugin write_prometheus
<Plugin write_prometheus>
Port 9103
</Plugin>

collectd/collectd.yaml (new file, 74 lines)
View File

@@ -0,0 +1,74 @@
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: collectd
labels:
app.kubernetes.io/name: collectd
app.kubernetes.io/component: collectd
spec:
selector:
matchLabels:
app.kubernetes.io/name: collectd
app.kubernetes.io/component: collectd
template:
metadata:
labels:
app.kubernetes.io/name: collectd
app.kubernetes.io/component: collectd
spec:
containers:
- name: collectd
image: git.pyrocufflink.net/containerimages/collectd
ports:
- containerPort: 9103
name: http
readinessProbe: &probe
httpGet:
port: http
path: /metrics
periodSeconds: 60
startupProbe:
<<: *probe
periodSeconds: 1
successThreshold: 1
failureThreshold: 30
timeoutSeconds: 1
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
drop:
- ALL
seLinuxOptions:
type: spc_t
readOnlyRootFilesystem: true
volumeMounts:
- mountPath: /etc/collectd.d
name: config
readOnly: true
- mountPath: /host
name: host
- mountPath: /run
name: host
subPath: run
- mountPath: /tmp
name: tmp
hostNetwork: true
hostPID: true
hostIPC: true
tolerations:
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists
volumes:
- name: config
configMap:
name: collectd
- name: host
hostPath:
path: /
- name: tmp
emptyDir:
medium: Memory

View File

@@ -0,0 +1,34 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: collectd
labels:
- pairs:
app.kubernetes.io/instance: collectd
app.kubernetes.io/part-of: collectd
includeSelectors: false
resources:
- namespace.yaml
- collectd.yaml
configMapGenerator:
- name: collectd
files:
- collectd.d/df.conf
- collectd.d/log.conf
- collectd.d/plugins.conf
- collectd.d/prometheus.conf
patches:
- patch: |-
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: collectd
spec:
template:
spec:
nodeSelector:
du5t1n.me/collectd: 'true'

collectd/namespace.yaml (new file, 6 lines)
View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: collectd
labels:
app.kubernetes.io/name: collectd

View File

@@ -42,7 +42,7 @@ spec:
spec:
containers:
- name: dch-webhooks
image: git.pyrocufflink.net/infra/dch-webhooks
image: git.pyrocufflink.net/containerimages/dch-webhooks
env:
- name: UVICORN_HOST
value: 0.0.0.0
@@ -76,6 +76,8 @@ spec:
name: firefly-token
- mountPath: /run/secrets/du5t1n.me/paperless
name: paperless-token
- mountPath: /run/secrets/du5t1n.me/step-ca
name: step-ca-password
- mountPath: /tmp
name: tmp
subPath: tmp
@@ -93,6 +95,10 @@ spec:
- name: root-ca
configMap:
name: dch-root-ca
- name: step-ca-password
secret:
secretName: step-ca-password
optional: true
- name: tmp
emptyDir:
medium: Memory

View File

@@ -5,9 +5,21 @@ resources:
- ../dch-root-ca
- dch-webhooks.yaml
- ingress.yaml
- secrets.yaml
configMapGenerator:
- name: dch-webhooks
envs:
- dch-webhooks.env
secretGenerator:
- name: firefly-token
files:
- firefly.token
- name: paperless-token
files:
- paperless.token
- name: step-ca-password
files:
- provisioner.password


@@ -1,28 +0,0 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: firefly-token
namespace: default
spec:
encryptedData:
firefly.token: AgB8sD5GEwGMiCUQEioisw+ni5DqJM3L7AEwMnrdlrK7gq2TL/Htqo2TzuUIMxovf9KPkUWD4Y7KpT52z88yysbHlcEQkjB6djJYYecMyNVvPVYBEbKp+zLlJGqPTVoCrcvxaNAOTgOEnUx0zhWy90qWmmWQ4JPEfu1pm7++ZWJhp9c8p2HYsiQ81WV23IIp3Ua+7FK45+QxL1D1W1OXjqYs8BhXDgMS3bc5YGNTEx+e+fKN/i+KlKuO0h6ONt9vp8+DS3sQuiW+pkB+1Ra4bZ3RWZoVyqaoPdQZ95DCOTT6r8dDZObS6rxO3J1Bmppm1xB6zpN5HVFJnE0NpX0ljLSSk59BDZ3LLVawm9QAgwLKVLuyJ9hdlUDJD+EH2gj8ioxiiUAWHqThP3xBjoKD7nZWgyGM0zdyZuX+mADAflyAwcOydRwIXA6s5Tjw5RM0RWQ5JrOJGDAO41LOHoRu6qlFMmwc4novu4tzvlRvakqND4BxCEhDk6v4P1pDQh2xP2fvD3HOU5rqlq3z1X5lPnKHkRJUi2v0pryGBhoLrt8htCYA+w0LjMTx3c5dVC7Fz41U/7vVWhoQcPLNGx9I4Npb9z1qs8ozQCJUCdbPOaEnoKJtMVr2ByoR3WLr4hMT4b1Z2gFfU1rwhz3TO2xTCbY956GXaj1QACk67uHxvzLFaW0Td6vNHJGxQcgbtPvdtzS6yuNlF6W6Y3SlmADjvq5YeIF8n6UviIqqihYf4+os1r3ea14T7Jsi5VCNO1GyXLAke9oQruXe2sXAkTg6xbu+bgUqxDUqjkF677fVRlBTWcpf0gFsqGpHSkVK782I4Z+OUQMceE3+yJzkOD3DzZkGihQUuj82pctY7Ik2nDfc90JE4WMF9gmK1MxnS67yqcvHjKm7yuGNoP8JvdClmQcipJ3LsVdXm4s4dpmWCHDoqkMbN8+KDnh2TjTDwndcsCkCe5cWrKZfwMn9zQyPHH+/BWT77KwNviO6MjLDUzP1CWWAXmizxZozzIYSq67IUggAifv0n2oaBMyN7CsRZaL5KKMdLBhmE4EcEZvxDnRaxII2oWw5B2uyfyD2YWy35K0WRxHQH+TUmZEvPG3e2EWQcj8NR6L6J2wB4h4y2EJXRhrdbgZBppMQ0j22QZsw+0Yiol3uR9oLYlVXXgpkxuDTPUkm9k3GPItJmE75RmInI6xBKUMkueZQBYfSmoYLhsfwliMQ0rPnuY7kEIDJTS5NPrJqdkSZY5chkl9pSDeearJQLaFyB6PXZtBWz+6CqqKZFTO/YOPgU/9zqeUEWoyPUZ1FAVJ3aMrirscHz37jqTgQbn6hMW+WCw2WiApoMG9D+xU5ZfQRIph+Y3S1QjGdfxH7oE7XIuLdy6bQ7Cw93MZVn9kaIvLzcZTHtptOxy0XpygfalC/SDcubBpGCCKDw2I5xpkEC165ZKKM/NN3mxXZKeSY+8nePWl+pZeU8etiulGtQ/ZIqOmGaU7W4T5gJCij1bpGKS28lNhv25lUxMFBvyheLn6qUIMIRdCsvr6OdF+YOQQLLzwDquDD9ankHGzKMI73OWDCFrhXl85QqPg62bVCdghEMlohY4Wniy5k2CRLM0pQm9OZsp+RlPvv/Ea9JxyjQGtv4EcCXG2nv5ewWysv8mIYd8qWH/ZSh2o44pixC0levmrOXMb6cC5T3TC9OI6w1DglF84k3+bVGn+wNAHbo5ALxAhdGmB/GTdSRS1x368o4k2E0MThsezymlfjo8K8wyFWMpHpat/eF7WQy2ibl/lt6sCk8iBFEb4LPVbb68w4gZhPHHqcfZ//9+0rC28eagpYfx6LQi8gDUC/Uw06iPygJBOKABdppLNHqNa6zzOET2yECRlAtuxCmO9WYEpi0cOzg8fvdSzP57f47MJu3vnztFo1AIx5YU0Ohlv9KtjNxTaXziAJY47Y8o9jEnT7ajb81pwLf1/iMHT73TZtu8tVWKKZ7VReEQUzeKfERoboZO/P8TrVOvkK3ZLw5+tA
template:
metadata:
name: firefly-token
namespace: default
type: Opaque
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: paperless-token
namespace: default
spec:
encryptedData:
paperless.token: AgBV2pcAAh+4bbMiNbKqRgFCNOpuZ19H6+YDSbM56pqyV3NRUZTiaRdPReG9T0WxRcYzjHluX7fIrr1kq8uIaCBDhnckdEjITwYqYRDtl6GDXhEXgfXt2HIKkBpBX+rK6j/qwlXpqwtx2oIq30G20WSll7wPsexQ/c0ZvquZysQ1/r7NvynbJB1/1lNk6wnEkRnByCWJI//0nQjYAeUeliftqU2e+PkpETGRyrUlaPlRt4NBaFbJDgZSBh1qlHqE4t0Lf23VdE0VVeFDaDR9EjVO4DPiwC7BEAfDi65+DM6iXUygAfyYL9KsCQGbUdxdb7SAp/ROCVUuu+dLGh1upk42J3XGa1rufN2XtXLCRL8MQ1j4JeV7Jm3yewNt4WYP6tD8UYKBxhRUK5pYU12jBR8yWZ1BBWNvWRN1w++pklMF72N95R61qJQhlftbq8F4yHj4Vh/9n+usJ1zw8LaZg179ZucIV9byA3NrDnbvDWLCvs/sVycrXbcnPec6+oMrgJL6lp96ofjBxqWDCDp/SUBUkDC34jiiaxjzrY5q9hUyf5gdqbzKN5Jda2lHvj6UgJj6Qo8AdQmMF6MNH4X2A1Ni2mR/WTwNzXDGfHibLeaTuBSyvALFoIbuuR78Wkjz76ZC6SQT8HwwCeylPskd7KPJURpJfXfdB/UwyV02LveZpsASgQo/m22znCZVwVhrOlC8SvNht4WO5HgHBf+21cSHfwZ03NM/81fedfxyySvMoMjpGy+89hwfnA==
template:
metadata:
name: paperless-token
namespace: default
type: Opaque


@@ -27,6 +27,7 @@ spec:
tolerations:
- key: du5t1n.me/machine
value: raspberrypi
- key: du5t1n.me/jenkins
volumes:
- name: device-plugin
hostPath:


@@ -7,10 +7,13 @@ TZ=America/Chicago
TRUSTED_PROXIES=172.30.0.160/28
DB_CONNECTION=pgsql
DB_HOST=default.postgresql
DB_HOST=postgresql.pyrocufflink.blue
DB_PORT=5432
DB_USERNAME=firefly-iii.firefly
DB_USERNAME=firefly
DB_DATABASE=firefly
PGSSLROOTCERT=/run/dch-ca/dch-root-ca.crt
PGSSLCERT=/run/secrets/firefly/postgresql/tls.crt
PGSSLKEY=/run/secrets/firefly/postgresql/tls.key
CACHE_DRIVER=redis
SESSION_DRIVER=redis


@@ -73,8 +73,6 @@ spec:
env:
- name: APP_KEY_FILE
value: /run/secrets/firefly-iii/app.key
- name: DB_PASSWORD_FILE
value: /run/secrets/firefly-iii/db.password
- name: STATIC_CRON_TOKEN_FILE
value: /run/secrets/firefly-iii/cron.token
ports:


@@ -9,11 +9,13 @@ namespace: firefly-iii
resources:
- secrets.yaml
- postgres-cert.yaml
- redis.yaml
- firefly-iii.yaml
- ingress.yaml
- importer.yaml
- importer-ingress.yaml
- ../dch-root-ca
configMapGenerator:
- name: firefly-iii
@@ -26,9 +28,6 @@ configMapGenerator:
- firefly-iii-importer.env
patches:
# This patch changes the source secret for the PostgreSQL database
# password from the default (`db.password` inside `firefly-iii`) to
# a secret managed by the postgres operator.
- patch: |-
apiVersion: apps/v1
kind: Deployment
@@ -39,15 +38,18 @@ patches:
spec:
containers:
- name: firefly-iii
env:
- name: DB_PASSWORD_FILE
value: /run/secrets/postgresql/password
volumeMounts:
- name: db-secret
mountPath: /run/secrets/postgresql
- mountPath: /run/dch-ca
name: dch-root-ca
readOnly: true
- mountPath: /run/secrets/firefly/postgresql
name: postgresql-cert
readOnly: true
volumes:
- name: db-secret
- name: dch-root-ca
configMap:
name: dch-root-ca
- name: postgresql-cert
secret:
secretName: firefly-iii.firefly.default.credentials.postgresql.acid.zalan.do
defaultMode: 0440
secretName: postgres-client-cert
defaultMode: 0640


@@ -0,0 +1,13 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: postgres-client-cert
spec:
commonName: firefly
privateKey:
algorithm: ECDSA
secretName: postgres-client-cert
issuerRef:
name: postgresql-ca
kind: ClusterIssuer


@@ -1,22 +1,3 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis
namespace: firefly-iii
labels:
app.kubernetes.io/name: redis
app.kubernetes.io/component: redis
app.kubernetes.io/instance: firefly-iii
app.kubernetes.io/part-of: firefly-iii
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
@@ -75,7 +56,7 @@ spec:
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: redisdata
- name: data
mountPath: /data
subPath: data
- name: tmp
@@ -83,9 +64,21 @@ spec:
securityContext:
fsGroup: 1000
volumes:
- name: redisdata
persistentVolumeClaim:
claimName: redis
- name: tmp
emptyDir:
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
labels:
app.kubernetes.io/name: redis
app.kubernetes.io/component: redis
app.kubernetes.io/part-of: firefly-iii
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2G


@@ -21,7 +21,7 @@ metadata:
namespace: firefly-iii
spec:
encryptedData:
dustin.access-token: AgBqtl9wO0Xb2fbyBm7SJanNvCy1bpJyE83nZQpNIpOoNLkBmi3lkBHYRiEpF71lhcd24cdv2f8BWfjoxXe31smzzAoHHGR7vfPyjI2ufXHs5R5lHu/bmC/8Xbp6XaKHV7KhqdsIuPkbZmZGdRccoQAwUWQzjMqVgu7s9pDDKl+XV0bBgFs+LejF0e+PEEyXCSaF8nWy34MWKGW3SgsXlk4QPqJ426DA1TRwsEVsIWBGeqPAAXorDPk4FDmmpELg/jHbrISHSjiFneL3E9bogoPgPBX51XUjU6dupq2XJ1pK70SFMT/AnqgUtGYRyDpJCLe6yEp/IPAXHBgwkWNt+qT+LagY1/3Y+2lvct47N/+jWuqw0aPbpciZjswiO8Q7zGJsGTYKrf1NWNwuruYb4kyNbRPJclnQN+QsQEfVYHugtDClDxbOAj1zJM9kG6t9H5mwAr9lsCrs1Oqc6xFLMMmzjWnOaauwAepVVseJCTz1fkS/VKMDW6WRu1H6DUbmBqaHpA6mgL+CDg2xFeZrqdkYKPKWPjo+y1KDfHDiwxqJ63NDdqQvBFrJg0UrRAetAbCeNlCgZJwWmgTh149MJrxGGb4pgxC7rd+AC0qLs9druzyLbHTJkn0JIySy9NuRNGJmrr3WBOUteOT8el+yEg2X37k6Eif7ABBrnibtdUXd+feaVp9pkMIxBM8fyrneNAyX6cpjQ9cwKNEq85VWfu6569x6ZhJAr1lOXUWGc12mdg7ELWoTBkrt0dCjlLzOO+NvP4wOn3Nk0nszs0lP+xpD2etjfVLpIIhg2p/4nutxCU/ZV+JMIqzDOyFH/gJH3k1QW0VgbseLSmE2tQE33ImFCDc2/7NgkHltMl2FYSglVWr9R5s0nlz3u1/wrGHoF2tok5v/aE1ZYPZh4Gcr9KBzxx5uGdy/aUFTntYXLTJ4i2rMRzwKS7QXMycnsD9huHU2nwNDGWW1Hz66Aj0vysCRIZ4vSYPpMZ+Wu/Zxmkd8KoLE8yJ2Ii/0P6B/VvqFcLBokvG59iPjyPH/RVrDwn4CXelpYT1ojA8MFer0t9Gz5htZsgVVgcDQT4FLccjkFPbiyUou0O2cz3xUIUJrIC4YO6Iu57F1F8AzxxMrsS20VJbD8PkgATuMZos755Ze3k8J7nAXQKlBF50EQ65TYwnvyk+GK6yUtbdCn6Y/1aLYWj3CAROg60yokqiOPVT1gn113FmUvmPCWsKVpAjBvc1vJ8BQChCSYXJQaib75z+/zxN4+Celqxls4zLGJDUMNaXjI1Vf3J9vcGLwUUN1ZjofwJzbx3f3l7VqN3HSPw76jq6XNJbWIdxD0Q+KRjwyZf/uAoWDZULuFOZctOvCxIXCvbUX/6IdJNjIvENuvFY6mE9uyVaDWQGLkDIxGk40Cjyyjvwer96LDod70kg6Rh9vlWTl06UFFm1S6QxWbHB6tsU1SAooihiEeSp1QGyRI2YVRDJvNXoNd0Fbnw4xPI2tQHW++GJpdzeoBuHoDo9a6sDN+WBorQQdNukAJkVlhvprYH5qeLN1ealaDehPv0baECHGKp92kSRpgT9lfoztkOsICruT+b6iDpNU8HejkRH8iB+OZJEADdCDdxX17HKxXi4Sd9c1F5/s9VtSSC3lH11V9mSlnSlgEu6omgnXs1VsmSy4+nvSUSECMFdYK4rgDlyqilyRFKmt6n/g3VchjvFmuWkHTzV1itrAL/51OHwcK79prQVeVD8r3M6U5ap2+hKEdo3blayP9wm/4eeJn2O2S/E0uVKqKWCWpYlQw4TYjO7owAVWuAtaDRn48ZrBqnnvGjn1unlb6OUDTjRmxM9PCWUGSK/T0ouEzErPg9vjYhrVPf3eaJRQ5OrhKZ2YMfYvSUXBGo7fKbegzTzqdCXWQ/a0WiHCxmC4ua5g+h03mtNFU9bu8anSa3p04a1cqZbXZ1s4dMpQStGaLc6p3n3ZtEuleJG7oYhdn9Ys8Ukw1ScQTZ14bjzTm5rZLEMJvdZRPQ==
dustin.access-token: AgAEv1RHTUGZBoxDa4nMOZ+gU9sW/SjTdaQ5NAqoFuVOYwlrXMLKubonXduiLXp2YSduuRCsF/X8GH8xLjsegf+zcDZcWPUjUq6Hm7q2KDPmy+Ekjv5Z3IOmBOtQLcPZlJGOeJenHhNu+UyA1G9prBEiXj9PnfMh/RrT6nGU4pCxw3406p4YCvhwh00DhNYYQu8VaFejxkWB9RRQ/sQ54708VxCd9myxKfS5oSbi0+3z20cTfk5mGZs6bM+dbvL994cAUIGViNpnqiT1HFvWwvI1ItRFxhp6/CjLfZh9CRKsz6JnaA1JV8+mU6903yNAU8HjTIJlJNL3+vW9lRwUSCnd1Bghfz+iRpyuV+jaCZD76FrOKTlOr4Eo3M6U+HgSx+1ivamnwDAp0K/EpK3BjW2P476NqCDc10uxmN/gdxsSHDtL2XP91t94ApXQ9xq5/3a6lAOldqYJodg2/EKvwpEjsFlfU1/JgUPyZ6qryDQQpY8o2d0f9GOqVINEjH0Lw7zW4GxutWipw3zKbmN+6OoJyhF4FDRNXDCkI8Q4TVEN05nzSipmWzVmgyeSPLwRW6IJ/uzTGDHVYWGMIXfag9zfDP9X6t4j+81n2MRcJoLPjHgkbsJvo9+yEPnHkwp7WbkBMlEwsDVVSkRDv3bo7BSzOxNqVR7MWlfadbAHkX7HAb7Evj6i1Aq/qLtIp6ubdeYlTgQ/Xjs2k0WjfIIXQAsU4WvelRKqoVJKhTkRDo3SFuRqVYRCQQkPVIZhmmcdzXhemUBpFiRjqLV8IaXOXSXR0jOTp+DDHj7vonwygnMaTRkTwUH5yZw1X74vrZf01Yl6vC+ih6iZk1bwQiPKSfZS2XUZhO9df/TDleBHB1rucLo5dWm9GUIg/GqOc5hcbEmE+0zEA9tdXI5eYTPsKfPLBJic+ej/9A+Qx6aIpylFWVwcYS56Ks/RejHCnA5vq7pE4N8SsOLbcxkvETSEHn3xi1p5YMDF9IeMw2gqGzVT8WZzdhD5MxV4jRvk1LnlRli8SN+G6JEifc219c030YVDuGIU4wO3cmjUoD6QXAK8SIUrjsUbci1T5TEbNjcJtaDxwBHKUvFNaDKvDdKTOYbvRjgQaAmFx0TBu15SPLugrHdD7nYsGwKMUusIRT8K9RxTMuvqwzS0vvn0GBmlrJsny5LlaDuknh2+3KpPUe/P+ZNmnsCG0l48Bw87jkxHeSWzGPMDiFqwpuYA8aDkxW2GFehQEIXefzmz6JOBdlvWsh/BxcYsO1Fch9M0jO1EVS3wDJkbseUs9uIzl6Xs1wbvgrIzDe1qKWdLTt5hLexcsYAcsNDygV4IOpJX+D+yqsRY1BKKbKyBUhEfe7dtbyljM5skfEVjDRpmcPyjoer2/rTVf/Z+DLXgL7kYi0hjrAjeVMaeHx3HJcEYmuVuDsilmjcXeArNB/mvL9wbq8FHWiiGpjNKFlHXUaQFfejGJlIwDT5Zb4GuEpLLYJNt0fUi3zBHtq3/YRk560r0Rw4NjjsBfiUddoY2HbRR7miQub6FQ6NJqTZdezvHn2AX3ggb58OpQZw+qPuL4+/QBCDmIV5p7W1FbdaGnb5+5rEva5qidAErvWfWqaJgtCqbHSCgtF3zbEJFppaPS/ukluEjaXfx24d4NkxVilFWlyaMcTdP6OwLrfnZhf1unmv3QqeNHvcp/bNbVwQqQGLDffCMK5j1X7k4m3mchm09C6C7ZUr7p851y7nouNbWxlEI1DCJ0tPARj8KPvYs/j8nr7Hj4KZO4aCQRM8xbWaGO9hiZNm8IAF5L20T24Icv1kWyDAQC2qretr9rzXdNnQtdbj7UJ+U4MlDffUBpPG9m/plRlyeRK3zR91yaJVxU8RpGrE2pn+h2zszMCbhqSMQuD0hFR7W5LYD4bJniVNaU9WempvfMJHicW7lpX0z38I/zA7eYf1ouOmSNDvS/2hPUAEGZGPuRlDQgc1XIVhFT2N2BvWMbA8pMazpWPXzMvjCwLrSmmfuUlApxA==
tabitha.access-token: AgAvnbZFQl98pnAdjAQMRBrUl54L4hE8meGr4lOP0Ah/O3/xyYi9gHJTOmCibxZH/OGo90KFcOjHplosAAVIvaU5Byp7EkxkWySG+XWu5eEvijxsoEXkmuD5ET9BK5Z5rPzCLG+Dodp7VfwuKETk9te++1UGcfG6rAy5wyqnPSC9mns2xhlb0GLvq1QQdMfrQbEFiOtX5jRcN2Rq57nERlDrpyXkkmpQHh8Qn68qH/Cn2zy6GP6wAxIMEOI4TqZ0Ct0UB+p4Vm0ZYOq4A4ruZTSc61PUfD6BfMH7MswO7dArkfKr0b4s8/rPx1cuJcNVE5ZK1JoiYtAY9+36L5aqYjdNWEWj6b5fmG2QjoAEZ+nynLaYyipFlkkPAjcBMifXe5hK3r7urdPYtBGv/rpHC20dTnNQQqonGdJHkYpXN3rqPImc7XBZWjDUzP2wptzV3PigFfuQdcM+JNUAPLHXK6H1CTNGLNd4pyxXkZc33nvCtUICANtDzbNDqBrzAdrMmnBiySlhQuig/iVgql6/2HFKlo5Uf77Kwhu/V4opkVVfbKpfrLQeZaY+UaQi9N0IyhC0VMgzQ3Lr8P7nEYYc4zfrQlyZGlqW9qLt86Jtj359yZk3L0eGzkq/zKVgw4sOSTt+wmR4ZTBo4OVJelolx9ctPC6MWbW7HCBQhViGNBDb0Sh7OzWy13D6xy8+5t+85XiaW6fGstque62Bteo1nywds7WnPXutyPyteCvQx5d5XGKdurjSzvm37ho3ianbDwpyC6zOVnba7mXbwYtdogevTO8TPjyj+Dm30I9ac4MzStLkziC0ZqKViwadhQZ+rNXwiwMdhbVUmAVOs+XsodTpfLTOKT3wJK4hZ5lHIX8GFxTsmChr6N7+lE4O6/BRczEdFOVKqeErGDVSj/pPnx9DVBUnLLnsXL4jPFEMZJmUht19wAFuH15VQTTSYDb/GL7Bq/ECwniqwkD+jd/fyMTLQSaxrs403b+bHpxAja687632Tvj9Ob2jsolSIWR7gYhqGh3PDqhS1yHU0DiA12t04AieW/NENd2KRnHIRI3eaZow6wzZRx28yeCO0ZqCaEFCZbtKjtvw9D7weist+UnX9MQFC+gbS0yu3wjrW61WpY04Ujsxwh4nKlbCVyhxMvXdx2xrcPkzgLi3ZumAIp028JteDHZBiVcGL4riVlM9VYp5JyL70G5ueUR1H18namVolyALkrM+dsanKdV7LRXc1fK0OODl0nMAGTV00koYFbkeIgVkObgmg5RNnxiE65f73SntI4PjJOem5E4VyBhIb5PFM7Ixxp/BOHI0dr1zITjNC8DyvQ37SYcjYwqCKS6rufBhQQUAq+xlwsX8zXAdPsu8W39+ei4EoFAdV9QpLH4zFvUdD9noimW+s9H3y+JQcJ070LzzvE6snHJdHCHvONuuQ0XFRjEf7Xf2ISZA6dt7i6J/040VTOcrf3JVpcxYdjPRhZZsM6Loti9tNVHWx1UzNZq6NrhnuFrNiYrWyf0wKaaMALwYT6e1KDOhgg0wWR5l18ia8GmtIZ78GQHRojlBWV+blpAM/cS5NHtgL3cRm+9Ep9/KGT2izxJ0gTyXH/DOIbA+NMM4wJT8SWweVbELvyey8br34oIbpv/gOX7C8Qh1h8IOuMPowsqt3IPPjPXyWp9bNLvtXlnFh95VptKW9cm5IR90ATFpzVE8CB04NMu2CYkxtbAuRLPZZWHwN39IeUluRQIEPqJEVhjWthyApJovfuagjcWMRVPbMJddRx+ubYwV1ikjwl8dH2ZT98bcJDN/6mbh3AimpIR2CKI43kNCHuVqLc6PGgwYG+d8w5CWfXk/2eFrCGhC9rWLjvEUiyb6DOM/R1kJt2eunlFr1EyxlvfJ33cdN3K6uQBpXZ6f73YnWXdkEQ2G20TFvizY2payccxo8GuxkSRSiWTlEM+zOZPm8ayF1Z8DKWKiRxNdZHxO0O8eNXR7+QfNMSerCpFb9abcfC/kP6Du9CgB4Q==
autoimport.secret: AgAUiScErUsHx0VMhOPaN+onfVz9cm1l00x06713HK4UT/h6Ih/4UcATvXayOsKSVTEzzucNkIaGIgrSG/7RWpo1ZMgqkyjmQI9URUE07yVnckZWWt+JqGTmCS7qp2KLD3eC+VAHuz1/3O3xv5fSW0G1zVJ4pJzaOjyAtWYK59qjL0Mjmcx86Vx6FamNgtcibX5kxO06G2ENeHkYLODeNbdCOwc1p7Uoet9E7zZao958/griN7sx7EmruTu1TLv8UbyJP4/gPlKingX8U6B6QRWeI0L4FkTamrtD3AiTTJnbZ5Gl+o3zbrGc7yxA1gPWqVfi12qwjESQprUQxMVpp6GGtBtCjXNX5Ne0f4y79wP+YRpT2jUdUxi6qdKcw4v018CrEvobSLigBkEYLCVMAmvL0wiZlFosp3MfOd33KBtCQrhoyhJCbJmcS0mEqW5KO66T0Ajqtsc71hGS9LqS5X9mKZHvMLHAM28B4E2MfNnJxABOCBC3Vu+j6nku3qtYkCZl1uk2wF2V5srl8wTuX7a86vDsVJGjBwMT8wXquoIvln+ywkxqAGR0smRYp5xcOZaJ2UfXpodY6+97Quuv9lv4lEwkqzTvieoH3Blw2rV6/Eqjj+1DV+eZX7O3VakDMDV1IWadvRmJjaUmD6z4EChNgNTcOXfAgOpmBa+5uEUH113vZDEM9QWrnz6fDl0kMf6AWDg4jpv9J7qurG927e3iZPXZszYS4CY9ZbMuFNHXsA==
template:

fleetlock/fleetlock.yaml (new file, 78 lines)

@@ -0,0 +1,78 @@
apiVersion: v1
kind: Service
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
ports:
- name: http
port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
template:
metadata:
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
spec:
serviceAccountName: fleetlock
containers:
- name: fleetlock
image: quay.io/poseidon/fleetlock:v0.4.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 8080
readinessProbe: &probe
httpGet:
port: 8080
path: /-/healthy
periodSeconds: 60
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
startupProbe:
<<: *probe
periodSeconds: 1
timeoutSeconds: 1
failureThreshold: 30
resources:
requests:
cpu: 30m
memory: 30Mi
limits:
cpu: 50m
memory: 50Mi
securityContext:
readOnlyRootFilesystem: true
securityContext:
runAsUser: 842
runAsGroup: 842
runAsNonRoot: true


@@ -0,0 +1,21 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: fleetlock
labels:
- pairs:
app.kubernetes.io/instance: fleetlock
resources:
- rbac.yaml
- fleetlock.yaml
patches:
- patch: |
apiVersion: v1
kind: Service
metadata:
name: fleetlock
spec:
clusterIP: 10.96.1.15
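
Pinning the Service to clusterIP 10.96.1.15 gives the hosts a stable, DNS-independent address for the lock server. A minimal sketch of the matching Zincati drop-in on a Fedora CoreOS node, assuming this is how the nodes consume FleetLock; the file name and URL scheme are illustrative:

  # /etc/zincati/config.d/55-updates-strategy.toml
  [updates]
  strategy = "fleet_lock"

  [updates.fleet_lock]
  base_url = "http://10.96.1.15/"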

fleetlock/namespace.yaml (new file, 7 lines)

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Namespace
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock

fleetlock/rbac.yaml (new file, 92 lines)

@@ -0,0 +1,92 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fleetlock
subjects:
- kind: ServiceAccount
name: fleetlock
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
rules:
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: fleetlock
labels:
app.kubernetes.io/name: fleetlock
app.kubernetes.io/component: fleetlock
app.kubernetes.io/part-of: fleetlock
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: fleetlock
subjects:
- kind: ServiceAccount
name: fleetlock

grafana/.gitignore (new file, 1 line)

@@ -0,0 +1 @@
ldap.password

grafana/README.md (new file, 6 lines)

@@ -0,0 +1,6 @@
# Grafana
[Grafana][0] dashboards. This is a straightforward, single-instance deployment
backed by a SQLite database (and thus a StatefulSet with a PersistentVolumeClaim).
[0]: https://grafana.com/


@@ -0,0 +1,14 @@
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: https://loki.pyrocufflink.blue
jsonData:
tlsAuth: true
tlsAuthWithCACert: true
secureJsonData:
tlsCACert: $__file{/run/dch-ca/dch-root-ca.crt}
tlsClientCert: $__file{/run/secrets/du5t1n.me/loki/tls.crt}
tlsClientKey: $__file{/run/secrets/du5t1n.me/loki/tls.key}

grafana/grafana.ini (new file, 860 lines)

@@ -0,0 +1,860 @@
##################### Grafana Configuration Defaults #####################
#
# Do not modify this file in grafana installs
#
# possible values : production, development
app_mode = production
# instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty
instance_name = ${HOSTNAME}
#################################### Paths ###############################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
data = /var/lib/grafana
# Temporary files in `data` directory older than given duration will be removed
temp_data_lifetime = 24h
# Directory where grafana can store logs
logs = /var/log/grafana
# Directory where grafana will automatically scan and look for plugins
plugins = /var/lib/grafana/plugins
# folder that contains provisioning config files that grafana will apply on startup and while running.
provisioning = /etc/grafana/provisioning
#################################### Server ##############################
[server]
# Protocol (http, https, h2, socket)
protocol = http
# The ip address to bind to, empty will bind to all interfaces
http_addr =
# The http port to use
http_port = 3000
# The public facing domain name used to access grafana from a browser
domain = grafana.pyrocufflink.blue
# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
enforce_domain = false
# The full public facing url
root_url = %(protocol)s://%(domain)s:%(http_port)s/
# Serve Grafana from subpath specified in `root_url` setting. By default it is set to `false` for compatibility reasons.
serve_from_sub_path = false
# Log web requests
router_logging = false
# the path relative working path
static_root_path = public
# enable gzip
enable_gzip = false
# https certs & key file
cert_file =
cert_key =
# Unix socket path
socket = /tmp/grafana.sock
#################################### Database ############################
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as one string using the url property.
# Either "mysql", "postgres" or "sqlite3", it's your choice
type = sqlite3
host = 127.0.0.1:3306
name = grafana
user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
url =
# Max idle conn setting default is 2
max_idle_conn = 2
# Max conn setting default is 0 (mean not set)
max_open_conn =
# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours)
conn_max_lifetime = 14400
# Set to true to log the sql calls and execution times.
log_queries =
# For "postgres", use either "disable", "require" or "verify-full"
# For "mysql", use either "true", "false", or "skip-verify".
ssl_mode = disable
ca_cert_path =
client_key_path =
client_cert_path =
server_cert_name =
# For "sqlite3" only, path relative to data_path setting
path = grafana.db
# For "sqlite3" only. cache mode setting used for connecting to the database
cache_mode = private
#################################### Cache server #############################
[remote_cache]
# Either "redis", "memcached" or "database" default is "database"
type = database
# cache connection string options
# database: will use Grafana primary database.
# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=0,ssl=false`. Only addr is required. ssl may be 'true', 'false', or 'insecure'.
# memcache: 127.0.0.1:11211
connstr =
#################################### Data proxy ###########################
[dataproxy]
# This enables data proxy logging, default is false
logging = false
# How long the data proxy waits before timing out, default is 30 seconds.
# This setting also applies to core backend HTTP data sources where query requests use an HTTP client with timeout set.
timeout = 30
# How many seconds the data proxy waits before sending a keepalive request.
keep_alive_seconds = 30
# How many seconds the data proxy waits for a successful TLS Handshake before timing out.
tls_handshake_timeout_seconds = 10
# How many seconds the data proxy will wait for a server's first response headers after
# fully writing the request headers if the request has an "Expect: 100-continue"
# header. A value of 0 will result in the body being sent immediately, without
# waiting for the server to approve.
expect_continue_timeout_seconds = 1
# The maximum number of idle connections that Grafana will keep alive.
max_idle_connections = 100
# How many seconds the data proxy keeps an idle connection open before timing out.
idle_conn_timeout_seconds = 90
# If enabled and user is not anonymous, data proxy will add X-Grafana-User header with username into the request.
send_user_header = true
#################################### Analytics ###########################
[analytics]
# Server reporting, sends usage counters to stats.grafana.org every 24 hours.
# No ip addresses are being tracked, only simple counters to track
# running instances, dashboard and error counts. It is very helpful to us.
# Change this option to false to disable reporting.
reporting_enabled = false
# Set to false to disable all checks to https://grafana.com
# for new versions (grafana itself and plugins), check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to https://grafana.com to get latest versions
check_for_updates = false
# Google Analytics universal tracking code, only enabled if you specify an id here
google_analytics_ua_id =
# Google Tag Manager ID, only enabled if you specify an id here
google_tag_manager_id =
#################################### Security ############################
[security]
# disable creation of admin user on first start of grafana
disable_initial_admin_creation = false
# default admin user, created on startup
admin_user = admin
# default admin password, can be changed before first start of grafana, or in profile settings
admin_password = admin
# used for signing
secret_key = SW2YcwTIb9zpOOhoPsMm
# disable gravatar profile images
disable_gravatar = false
# data source proxy whitelist (ip_or_domain:port separated by spaces)
data_source_proxy_whitelist =
# disable protection against brute force login attempts
disable_brute_force_login_protection = false
# set to true if you host Grafana behind HTTPS. default is false.
cookie_secure = false
# set cookie SameSite attribute. defaults to `lax`. can be set to "lax", "strict", "none" and "disabled"
cookie_samesite = lax
# set to true if you want to allow browsers to render Grafana in a <frame>, <iframe>, <embed> or <object>. default is false.
allow_embedding = false
# Set to true if you want to enable http strict transport security (HSTS) response header.
# This is only sent when HTTPS is enabled in this configuration.
# HSTS tells browsers that the site should only be accessed using HTTPS.
strict_transport_security = false
# Sets how long a browser should cache HSTS. Only applied if strict_transport_security is enabled.
strict_transport_security_max_age_seconds = 86400
# Set to true if to enable HSTS preloading option. Only applied if strict_transport_security is enabled.
strict_transport_security_preload = false
# Set to true if to enable the HSTS includeSubDomains option. Only applied if strict_transport_security is enabled.
strict_transport_security_subdomains = false
# Set to true to enable the X-Content-Type-Options response header.
# The X-Content-Type-Options response HTTP header is a marker used by the server to indicate that the MIME types advertised
# in the Content-Type headers should not be changed and be followed.
x_content_type_options = true
# Set to true to enable the X-XSS-Protection header, which tells browsers to stop pages from loading
# when they detect reflected cross-site scripting (XSS) attacks.
x_xss_protection = true
#################################### Snapshots ###########################
[snapshots]
# snapshot sharing options
external_enabled = false
external_snapshot_url = https://snapshots-origin.raintank.io
external_snapshot_name = Publish to snapshot.raintank.io
# Set to true to enable this Grafana instance act as an external snapshot server and allow unauthenticated requests for
# creating and deleting snapshots.
public_mode = false
# remove expired snapshot
snapshot_remove_expired = true
#################################### Dashboards ##################
[dashboards]
# Number of dashboard versions to keep (per dashboard). Default: 20, Minimum: 1
versions_to_keep = 20
# Minimum dashboard refresh interval. When set, this restricts users from setting a dashboard's refresh interval lower than the given interval. By default this is 5 seconds.
# The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
min_refresh_interval = 1s
# Path to the default home dashboard. If this value is empty, then Grafana uses StaticRootPath + "dashboards/home.json"
default_home_dashboard_path =
#################################### Users ###############################
[users]
# disable user signup / registration
allow_sign_up = false
# Allow non admin users to create organizations
allow_org_create = false
# Set to true to automatically assign new users to the default organization (id 1)
auto_assign_org = true
# Set this value to automatically add new users to the provided organization (if auto_assign_org above is set to true)
auto_assign_org_id = 1
# Default role new users will be automatically assigned (if auto_assign_org above is set to true)
auto_assign_org_role = Viewer
# Require email validation before sign up completes
verify_email_enabled = false
# Background text for the user field on the login page
login_hint = email or username
password_hint = password
# Default UI theme ("dark" or "light")
default_theme = dark
# External user management
external_manage_link_url =
external_manage_link_name =
external_manage_info =
# Viewers can edit/inspect dashboard settings in the browser. But not save the dashboard.
viewers_can_edit = false
# Editors can administrate dashboard, folders and teams they create
editors_can_admin = false
# The duration in time a user invitation remains valid before expiring. This setting should be expressed as a duration. Examples: 6h (hours), 2d (days), 1w (week). Default is 24h (24 hours). The minimum supported duration is 15m (15 minutes).
user_invite_max_lifetime_duration = 24h
[auth]
# Login cookie name
login_cookie_name = grafana_session
# The maximum lifetime (duration) an authenticated user can be inactive before being required to login at next visit. Default is 7 days (7d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month). The lifetime resets at each successful token rotation (token_rotation_interval_minutes).
login_maximum_inactive_lifetime_duration =
# The maximum lifetime (duration) an authenticated user can be logged in since login time before being required to login. Default is 30 days (30d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month).
login_maximum_lifetime_duration =
# How often should auth tokens be rotated for authenticated users when being active. The default is each 10 minutes.
token_rotation_interval_minutes = 10
# Set to true to disable (hide) the login form, useful if you use OAuth
disable_login_form = false
# Set to true to disable the signout link in the side menu. useful if you use auth.proxy
disable_signout_menu = false
# URL to redirect the user to after sign out
signout_redirect_url =
# Set to true to attempt login with OAuth automatically, skipping the login screen.
# This setting is ignored if multiple OAuth providers are configured.
oauth_auto_login = false
# OAuth state max age cookie duration in seconds. Defaults to 600 seconds.
oauth_state_cookie_max_age = 600
# limit of api_key seconds to live before expiration
api_key_max_seconds_to_live = -1
# Set to true to enable SigV4 authentication option for HTTP-based datasources
sigv4_auth_enabled = false
#################################### Anonymous Auth ######################
[auth.anonymous]
# enable anonymous access
enabled = true
# specify organization name that should be used for unauthenticated users
org_name = Main Org.
# specify role for unauthenticated users
org_role = Viewer
# mask the Grafana version number for unauthenticated users
hide_version = false
#################################### GitHub Auth #########################
[auth.github]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret =
scopes = user:email,read:org
auth_url = https://github.com/login/oauth/authorize
token_url = https://github.com/login/oauth/access_token
api_url = https://api.github.com/user
allowed_domains =
team_ids =
allowed_organizations =
#################################### GitLab Auth #########################
[auth.gitlab]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret =
scopes = api
auth_url = https://gitlab.com/oauth/authorize
token_url = https://gitlab.com/oauth/token
api_url = https://gitlab.com/api/v4
allowed_domains =
allowed_groups =
#################################### Google Auth #########################
[auth.google]
enabled = false
allow_sign_up = true
client_id = some_client_id
client_secret =
scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
auth_url = https://accounts.google.com/o/oauth2/auth
token_url = https://accounts.google.com/o/oauth2/token
api_url = https://www.googleapis.com/oauth2/v1/userinfo
allowed_domains =
hosted_domain =
#################################### Grafana.com Auth ####################
# legacy key names (so they work in env variables)
[auth.grafananet]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret =
scopes = user:email
allowed_organizations =
[auth.grafana_com]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret =
scopes = user:email
allowed_organizations =
#################################### Azure AD OAuth #######################
[auth.azuread]
name = Azure AD
enabled = false
allow_sign_up = true
client_id = some_client_id
client_secret =
scopes = openid email profile
auth_url = https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize
token_url = https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
allowed_domains =
allowed_groups =
#################################### Okta OAuth #######################
[auth.okta]
name = Okta
enabled = false
allow_sign_up = true
client_id = some_id
client_secret =
scopes = openid profile email groups
auth_url = https://<tenant-id>.okta.com/oauth2/v1/authorize
token_url = https://<tenant-id>.okta.com/oauth2/v1/token
api_url = https://<tenant-id>.okta.com/oauth2/v1/userinfo
allowed_domains =
allowed_groups =
role_attribute_path =
#################################### Generic OAuth #######################
[auth.generic_oauth]
name = OAuth
enabled = false
allow_sign_up = true
client_id = some_id
client_secret =
scopes = user:email
email_attribute_name = email:primary
email_attribute_path =
login_attribute_path =
role_attribute_path =
id_token_attribute_name =
auth_url =
token_url =
api_url =
allowed_domains =
team_ids =
allowed_organizations =
tls_skip_verify_insecure = false
tls_client_cert =
tls_client_key =
tls_client_ca =
#################################### Basic Auth ##########################
[auth.basic]
enabled = true
#################################### Auth Proxy ##########################
[auth.proxy]
enabled = false
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
# Deprecated, use sync_ttl instead
ldap_sync_ttl = 60
sync_ttl = 60
whitelist =
headers =
enable_login_token = false
#################################### Auth LDAP ###########################
[auth.ldap]
enabled = true
config_file = /etc/grafana/ldap.toml
allow_sign_up = false
# LDAP background sync (Enterprise only)
# At 1 am every day
sync_cron = "0 0 1 * * *"
active_sync_enabled = false
#################################### SMTP / Emailing #####################
[smtp]
enabled = false
host = localhost:25
user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
cert_file =
key_file =
skip_verify = false
from_address = admin@grafana.localhost
from_name = Grafana
ehlo_identity =
startTLS_policy =
[emails]
welcome_email_on_sign_up = false
templates_pattern = emails/*.html
#################################### Logging ##########################
[log]
# Either "console", "file", "syslog". Default is console and file
# Use space to separate multiple modes, e.g. "console file"
mode = console
# Either "debug", "info", "warn", "error", "critical", default is "info"
level = info
# optional settings to set different levels for specific loggers. Ex filters = sqlstore:debug
filters =
# For "console" mode only
[log.console]
level =
# log line format, valid options are text, console and json
format = console
# For "file" mode only
[log.file]
level =
# log line format, valid options are text, console and json
format = text
# This enables automated log rotate(switch of following options), default is true
log_rotate = true
# Max line number of single file, default is 1000000
max_lines = 1000000
# Max size shift of single file, default is 28 means 1 << 28, 256MB
max_size_shift = 28
# Segment log daily, default is true
daily_rotate = true
# Expired days of log file(delete after max days), default is 7
max_days = 7
[log.syslog]
level =
# log line format, valid options are text, console and json
format = text
# Syslog network type and address. This can be udp, tcp, or unix. If left blank, the default unix endpoints will be used.
network =
address =
# Syslog facility. user, daemon and local0 through local7 are valid.
facility =
# Syslog tag. By default, the process' argv[0] is used.
tag =
#################################### Usage Quotas ########################
[quota]
enabled = false
#### set quotas to -1 to make unlimited. ####
# limit number of users per Org.
org_user = 10
# limit number of dashboards per Org.
org_dashboard = 100
# limit number of data_sources per Org.
org_data_source = 10
# limit number of api_keys per Org.
org_api_key = 10
# limit number of orgs a user can create.
user_org = 10
# Global limit of users.
global_user = -1
# global limit of orgs.
global_org = -1
# global limit of dashboards
global_dashboard = -1
# global limit of api_keys
global_api_key = -1
# global limit on number of logged in users.
global_session = -1
#################################### Alerting ############################
[alerting]
# Disable alerting engine & UI features
enabled = true
# Makes it possible to turn off alert rule execution but alerting UI is visible
execute_alerts = true
# Default setting for new alert rules. Defaults to categorize error and timeouts as alerting. (alerting, keep_state)
error_or_timeout = alerting
# Default setting for how Grafana handles nodata or null values in alerting. (alerting, no_data, keep_state, ok)
nodata_or_nullvalues = no_data
# Alert notifications can include images, but rendering many images at the same time can overload the server
# This limit will protect the server from render overloading and make sure notifications are sent out quickly
concurrent_render_limit = 5
# Default setting for alert calculation timeout. Default value is 30
evaluation_timeout_seconds = 30
# Default setting for alert notification timeout. Default value is 30
notification_timeout_seconds = 30
# Default setting for max attempts to sending alert notifications. Default value is 3
max_attempts = 3
# Makes it possible to enforce a minimal interval between evaluations, to reduce load on the backend
min_interval_seconds = 1
# Configures for how long alert annotations are stored. Default is 0, which keeps them forever.
# This setting should be expressed as a duration. Ex 6h (hours), 10d (days), 2w (weeks), 1M (month).
max_annotation_age =
# Configures max number of alert annotations that Grafana stores. Default value is 0, which keeps all alert annotations.
max_annotations_to_keep =
#################################### Annotations #########################
[annotations.dashboard]
# Dashboard annotations means that annotations are associated with the dashboard they are created on.
# Configures how long dashboard annotations are stored. Default is 0, which keeps them forever.
# This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).
max_age =
# Configures max number of dashboard annotations that Grafana stores. Default value is 0, which keeps all dashboard annotations.
max_annotations_to_keep =
[annotations.api]
# API annotations means that the annotations have been created using the API without any
# association with a dashboard.
# Configures how long Grafana stores API annotations. Default is 0, which keeps them forever.
# This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).
max_age =
# Configures max number of API annotations that Grafana keeps. Default value is 0, which keeps all API annotations.
max_annotations_to_keep =
#################################### Explore #############################
[explore]
# Enable the Explore section
enabled = true
#################################### Internal Grafana Metrics ############
# Metrics available at HTTP API Url /metrics
[metrics]
enabled = true
interval_seconds = 10
# Disable total stats (stat_totals_*) metrics to be generated
disable_total_stats = false
#If both are set, basic auth will be required for the metrics endpoint.
basic_auth_username =
basic_auth_password =
# Metrics environment info adds dimensions to the `grafana_environment_info` metric, which
# can expose more information about the Grafana instance.
[metrics.environment_info]
#exampleLabel1 = exampleValue1
#exampleLabel2 = exampleValue2
# Send internal Grafana metrics to graphite
[metrics.graphite]
# Enable by setting the address setting (ex localhost:2003)
address =
prefix = prod.grafana.%(instance_name)s.
#################################### Grafana.com integration ##########################
[grafana_net]
url = https://grafana.com
[grafana_com]
url = https://grafana.com
#################################### Distributed tracing ############
[tracing.jaeger]
# jaeger destination (ex localhost:6831)
address =
# tag that will always be included in when creating new spans. ex (tag1:value1,tag2:value2)
always_included_tag =
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
sampler_type = const
# jaeger samplerconfig param
# for "const" sampler, 0 or 1 for always false/true respectively
# for "probabilistic" sampler, a probability between 0 and 1
# for "rateLimiting" sampler, the number of spans per second
# for "remote" sampler, param is the same as for "probabilistic"
# and indicates the initial sampling rate before the actual one
# is received from the mothership
sampler_param = 1
# sampling_server_url is the URL of a sampling manager providing a sampling strategy.
sampling_server_url =
# Whether or not to use Zipkin span propagation (x-b3- HTTP headers).
zipkin_propagation = false
# Setting this to true disables shared RPC spans.
# Not disabling is the most common setting when using Zipkin elsewhere in your infrastructure.
disable_shared_zipkin_spans = false
#################################### External Image Storage ##############
[external_image_storage]
# Used for uploading images to public servers so they can be included in slack/email messages.
# You can choose between (s3, webdav, gcs, azure_blob, local)
provider =
[external_image_storage.s3]
endpoint =
path_style_access =
bucket_url =
bucket =
region =
path =
access_key =
secret_key =
[external_image_storage.webdav]
url =
username =
password =
public_url =
[external_image_storage.gcs]
key_file =
bucket =
path =
enable_signed_urls = false
signed_url_expiration =
[external_image_storage.azure_blob]
account_name =
account_key =
container_name =
[external_image_storage.local]
# does not require any configuration
[rendering]
# Options to configure a remote HTTP image rendering service, e.g. using https://github.com/grafana/grafana-image-renderer.
# URL to a remote HTTP image renderer service, e.g. http://localhost:8081/render, will enable Grafana to render panels and dashboards to PNG-images using HTTP requests to an external service.
server_url =
# If the remote HTTP image renderer service runs on a different server than the Grafana server you may have to configure this to a URL where Grafana is reachable, e.g. http://grafana.domain/.
callback_url =
# Concurrent render request limit affects when the /render HTTP endpoint is used. Rendering many images at the same time can overload the server,
# which this setting can help protect against by only allowing a certain amount of concurrent requests.
concurrent_render_request_limit = 30
[panels]
# here to support old env variables; can remove after a few months
enable_alpha = false
disable_sanitize_html = false
[plugins]
enable_alpha = false
app_tls_skip_verify_insecure = false
# Enter a comma-separated list of plugin identifiers to identify plugins that are allowed to be loaded even if they lack a valid signature.
allow_loading_unsigned_plugins = pcp-redis-datasource
marketplace_url = https://grafana.com/grafana/plugins/
#################################### Grafana Image Renderer Plugin ##########################
[plugin.grafana-image-renderer]
# Instruct headless browser instance to use a default timezone when not provided by Grafana, e.g. when rendering panel image of alert.
# See ICUs metaZones.txt (https://cs.chromium.org/chromium/src/third_party/icu/source/data/misc/metaZones.txt) for a list of supported
# timezone IDs. Fallbacks to TZ environment variable if not set.
rendering_timezone =
# Instruct headless browser instance to use a default language when not provided by Grafana, e.g. when rendering panel image of alert.
# Please refer to the HTTP header Accept-Language to understand how to format this value, e.g. 'fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5'.
rendering_language =
# Instruct headless browser instance to use a default device scale factor when not provided by Grafana, e.g. when rendering panel image of alert.
# Default is 1. Using a higher value will produce more detailed images (higher DPI), but will require more disk space to store an image.
rendering_viewport_device_scale_factor =
# Instruct headless browser instance whether to ignore HTTPS errors during navigation. Per default HTTPS errors are not ignored. Due to
# the security risk it's not recommended to ignore HTTPS errors.
rendering_ignore_https_errors =
# Instruct headless browser instance whether to capture and log verbose information when rendering an image. Default is false and will
# only capture and log error messages. When enabled, debug messages are captured and logged as well.
# For the verbose information to be included in the Grafana server log you have to adjust the rendering log level to debug, configure
# [log].filter = rendering:debug.
rendering_verbose_logging =
# Instruct headless browser instance whether to output its debug and error messages into running process of remote rendering service.
# Default is false. This can be useful to enable (true) when troubleshooting.
rendering_dumpio =
# Additional arguments to pass to the headless browser instance. Default is --no-sandbox. The list of Chromium flags can be found
# here (https://peter.sh/experiments/chromium-command-line-switches/). Multiple arguments are separated with a comma.
rendering_args =
# You can configure the plugin to use a different browser binary instead of the pre-packaged version of Chromium.
# Please note that this is not recommended, since you may encounter problems if the installed version of Chrome/Chromium is not
# compatible with the plugin.
rendering_chrome_bin =
# Instruct how headless browser instances are created. Default is 'default' and will create a new browser instance on each request.
# Mode 'clustered' will make sure that only a maximum number of browsers/incognito pages can execute concurrently.
# Mode 'reusable' will have one browser instance and will create a new incognito page on each request.
rendering_mode =
# When rendering_mode = clustered you can instruct how many browsers or incognito pages can execute concurrently. Default is 'browser'
# and will cluster using browser instances.
# Mode 'context' will cluster using incognito pages.
rendering_clustering_mode =
# When rendering_mode = clustered you can define the maximum number of browser instances/incognito pages that can execute concurrently.
rendering_clustering_max_concurrency =
# Limit the maximum viewport width, height and device scale factor that can be requested.
rendering_viewport_max_width =
rendering_viewport_max_height =
rendering_viewport_max_device_scale_factor =
# Change the listening host and port of the gRPC server. Default host is 127.0.0.1 and default port is 0 and will automatically assign
# a port not in use.
grpc_host =
grpc_port =
[enterprise]
license_path =
[feature_toggles]
# enable features, separated by spaces
enable =
[date_formats]
# For information on what formatting patterns that are supported https://momentjs.com/docs/#/displaying/
# Default system date format used in time range picker and other places where full time is displayed
full_date = YYYY-MM-DD HH:mm:ss
# Used by graph and other places where we only show small intervals
interval_second = HH:mm:ss
interval_minute = HH:mm
interval_hour = MM/DD HH:mm
interval_day = MM/DD
interval_month = YYYY-MM
interval_year = YYYY
# Experimental feature
use_browser_locale = false
# Default timezone for user preferences. Options are 'browser' for the browser local timezone or a timezone name from IANA Time Zone database, e.g. 'UTC' or 'Europe/Amsterdam' etc.
default_timezone = browser

grafana/grafana.yaml (new file, 101 lines)

@@ -0,0 +1,101 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: grafana
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
spec:
ports:
- port: 3000
name: grafana
selector:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: grafana
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
spec:
serviceName: grafana
selector:
matchLabels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
template:
metadata:
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
spec:
containers:
- name: grafana
image: docker.io/grafana/grafana:10.2.3
ports:
- containerPort: 3000
name: http
readinessProbe: &probe
httpGet:
port: http
path: /api/health
periodSeconds: 60
startupProbe:
<<: *probe
periodSeconds: 1
successThreshold: 1
failureThreshold: 30
timeoutSeconds: 1
securityContext:
runAsNonRoot: true
readOnlyRootFilesystem: true
volumeMounts:
- mountPath: /etc/grafana
name: config
readOnly: true
- mountPath: /etc/grafana/provisioning/datasources
name: datasources
readOnly: true
- mountPath: /run/secrets/grafana
name: secrets
readOnly: true
- mountPath: /var/lib/grafana
name: grafana
subPath: data
securityContext:
fsGroup: 472
runAsNonRoot: true
volumes:
- name: config
configMap:
name: grafana
- name: datasources
configMap:
name: datasources
optional: true
- name: grafana
persistentVolumeClaim:
claimName: grafana
- name: secrets
secret:
secretName: grafana

grafana/ingress.yaml (new file, 19 lines)

@@ -0,0 +1,19 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
spec:
rules:
- host: grafana.pyrocufflink.blue
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: grafana
port:
name: grafana


@@ -0,0 +1,56 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: grafana
labels:
- pairs:
app.kubernetes.io/instance: grafana
includeSelectors: true
- pairs:
app.kubernetes.io/part-of: grafana
includeSelectors: false
resources:
- namespace.yaml
- grafana.yaml
- ingress.yaml
- secrets.yaml
- loki-cert.yaml
- ../dch-root-ca
configMapGenerator:
- name: grafana
files:
- grafana.ini
- ldap.toml
- name: datasources
files:
- datasources/loki.yml
patches:
- patch: |-
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: grafana
spec:
template:
spec:
containers:
- name: grafana
volumeMounts:
- mountPath: /run/dch-ca
name: dch-ca
readOnly: true
- mountPath: /run/secrets/du5t1n.me/loki
name: loki-client-cert
readOnly: true
volumes:
- name: dch-ca
configMap:
name: dch-root-ca
- name: loki-client-cert
secret:
secretName: loki-client-cert

grafana/ldap.toml (new file, 55 lines)

@@ -0,0 +1,55 @@
# To troubleshoot and get more log info enable ldap debug logging in grafana.ini
# [log]
# filters = ldap:debug
[[servers]]
# Ldap server host (specify multiple hosts space separated)
host = "pyrocufflink.blue"
# Default port is 389 or 636 if use_ssl = true
port = 389
# Set to true if ldap server supports TLS
use_ssl = true
# Set to true to connect to the LDAP server using the STARTTLS pattern (create the connection insecurely, then upgrade it to TLS)
start_tls = true
# set to true if you want to skip ssl cert validation
ssl_skip_verify = false
# set to the path to your root CA certificate or leave unset to use system defaults
root_ca_cert = "/run/dch-ca/dch-root-ca.crt"
# Authentication against LDAP servers requiring client certificates
# client_cert = "/path/to/client.crt"
# client_key = "/path/to/client.key"
# Search user bind dn
bind_dn = "CN=svc.grafana,CN=Users,DC=pyrocufflink,DC=blue"
# Search user bind password
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
bind_password = '$__file{/run/secrets/grafana/ldap.password}'
# User search filter, for example "(cn=%s)" or "(sAMAccountName=%s)" or "(uid=%s)"
search_filter = "(sAMAccountName=%s)"
# An array of base dns to search through
search_base_dns = ["DC=pyrocufflink,DC=blue"]
## For POSIX or LDAP setups that do not support the member_of attribute you can define the settings below
## Please check grafana LDAP docs for examples
# group_search_filter = "(&(objectClass=posixGroup)(memberUid=%s))"
# group_search_base_dns = ["ou=groups,dc=grafana,dc=org"]
# group_search_filter_user_attribute = "uid"
# Specify names of the ldap attributes your ldap uses
[servers.attributes]
name = "givenName"
surname = "sn"
username = "sAMAccountName"
member_of = "memberOf"
email = "mail"
# Map ldap groups to grafana org roles
[[servers.group_mappings]]
group_dn = "CN=Grafana Admins,CN=Users,DC=pyrocufflink,DC=blue"
org_role = "Admin"
grafana_admin = true
[[servers.group_mappings]]
group_dn = "*"
org_role = "Viewer"

grafana/loki-cert.yaml (new file, 12 lines)

@@ -0,0 +1,12 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: loki-client-cert
spec:
commonName: grafana
privateKey:
algorithm: Ed25519
secretName: loki-client-cert
issuerRef:
name: loki-ca
kind: ClusterIssuer

grafana/namespace.yaml (new file, 6 lines)

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: grafana
labels:
app.kubernetes.io/name: grafana

grafana/secrets.yaml (new file, 18 lines)

@@ -0,0 +1,18 @@
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: grafana
namespace: grafana
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana
spec:
encryptedData:
ldap.password: AgAPlAsgFK6eKUGeHJymXHgSXbXaKm8uoFwmSgGV0RnXdzhQwKgd67XlXjuN3UHQL6LL6kAz44qiN7C4E21ZZ+WgFfs7CrLGAh1mQ713OH5W9j9CdpH5QdxDdqgmOq2ZBleijKb/XWZdMW6kZZLK03SzjZk80FHR7YaWKnJOsvVj6QQS5ZAamp0sv4wnQnhK86uZvn6cbtIgjXU/bhGiibCh1Gj/c/2rr0aPHAD3NZy6HuLH43SJN2LAvVwL6BQYoxLJ7Af680+tdWnqoYNNHexKphUWvQlfEXYnvS1JXPBynHsFHxBD2xlU/sFnvqJPN6YCu9LXbzGGacZreJX5giKYt5mAuotpvsF/59QzVEW5U6l0f0felOPSeaBvfl8mHkq2ude5SuLedtgaZKMTaI4QM6DcPmjlUoZU5sizDP9fdsw2iuhyEV5tJprmm2p9tuzFXsV/NY7L79m6iJMh0VTfpbglkec8R7f1im04sbBDR6v/Quw3U3R1erndzymZIGzjG7qKwnGzAvNhTTW+9FOLjCBVhE1Jk6PfX0efAkL9UiB3Vo5qpJw535rZa4KP9KfkGiJE8RhsdIqRMnMo2wPzRkiJF+eb+P3v0eIkAeBpPdioK3MI02d+KS+vgGu9cuVeq1jCONkfIfp3EYnxGsu3ZPWyZRzbTwS6m4XV6Ov4NMq5YpjjpnCnMHq3nFg+H14WLqs68TIgNeWwhRJyqhnvpWA+/A6GC6PHzeELpL3xAg==
template:
metadata:
name: grafana
namespace: grafana
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/component: grafana


@@ -33,7 +33,7 @@ http:
use_x_forwarded_for: true
recorder:
db_url: !env_var RECORDER_DB_URL
db_url: postgresql://
db_max_retries: 100
purge_keep_days: 366
commit_interval: 0
@@ -54,6 +54,7 @@ automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
shell_command: !include /run/config/shell-command.yaml
rest_command: !include /run/config/rest-command.yaml
lovelace:
mode: storage
@@ -120,6 +121,10 @@ sensor:
max_age:
hours: 24
- platform: seventeentrack
username: gyrfalcon@ebonfire.com
password: !secret seventeentrack_password
template:
- sensor:
- name: 'Thermostat Temperature'
@@ -269,21 +274,14 @@ switch:
mac: e0:d5:5e:6e:ad:ac
broadcast_address: 172.30.0.63
binary_sensor:
- platform: template
sensors:
roomba_is_downstairs:
friendly_name: Roomba is Downstairs
value_template: >-
{% if is_state('binary_sensor.roomba_ibeacon_ble_presence', 'on') and
states('sensor.roomba_ibeacon_ble_rssi') | float > -70 %}
on
{% else %}
off
{% endif %}
prometheus:
filter:
exclude_entity_globs:
- binary_sensor.node_14*
- binary_sensor.node_15*
calendar:
- platform: caldav
url: https://nextcloud.pyrocufflink.net/remote.php/dav/public-calendars/pSJDP6RYazMYPQxB?export
- platform: caldav
url: https://nextcloud.pyrocufflink.net/remote.php/dav/public-calendars/BZtERJTLi7rK27of?export


@@ -12,4 +12,5 @@ watch_view:
- light.back_porch_light
- light.back_porch_flood_light
- light.garage_lights
- script.start_time_to_go_timer
name: Watch View


@@ -10,6 +10,7 @@ labels:
resources:
- namespace.yaml
- secrets.yaml
- postgres-cert.yaml
- home-assistant.yaml
- mosquitto-cert.yaml
- mosquitto.yaml
@@ -18,6 +19,7 @@ resources:
- piper.yaml
- whisper.yaml
- ingress.yaml
- ../dch-root-ca
configMapGenerator:
- name: home-assistant
@@ -27,6 +29,7 @@ configMapGenerator:
- groups.yaml
- restart-diddy-mopidy.sh
- shell-command.yaml
- rest-command.yaml
options:
disableNameSuffixHash: true
labels:
@@ -38,6 +41,10 @@ configMapGenerator:
files:
- mosquitto.conf
- name: zigbee2mqtt
envs:
- zigbee2mqtt.env
patches:
- patch: |-
apiVersion: apps/v1
@@ -54,43 +61,42 @@ patches:
- sh
- -c
- until pg_isready; do sleep 1; done
env:
env: &pgsqlenv
- name: PGHOST
value: default.postgresql
value: postgresql.pyrocufflink.blue
- name: PGDATABASE
value: homeassistant
- name: PGUSER
valueFrom:
secretKeyRef:
name: home-assistant.homeassistant.default.credentials.postgresql.acid.zalan.do
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: home-assistant.homeassistant.default.credentials.postgresql.acid.zalan.do
key: password
value: homeassistant
- name: PGSSLMODE
value: verify-full
- name: PGSSLROOTCERT
value: /run/dch-ca/dch-root-ca.crt
- name: PGSSLCERT
value: /run/secrets/home-assistant/postgresql/tls.crt
- name: PGSSLKEY
value: /run/secrets/home-assistant/postgresql/tls.key
volumeMounts:
- mountPath: /run/dch-ca/
name: dch-root-ca
readOnly: true
- mountPath: /run/secrets/home-assistant/postgresql
name: postgresql-cert
containers:
- name: home-assistant
env:
- name: RECORDER_DB_PASSWORD
valueFrom:
secretKeyRef:
name: home-assistant.homeassistant.default.credentials.postgresql.acid.zalan.do
key: password
- name: RECORDER_DB_USERNAME
valueFrom:
secretKeyRef:
name: home-assistant.homeassistant.default.credentials.postgresql.acid.zalan.do
key: username
- name: RECORDER_DB_URL
value: postgresql://$(RECORDER_DB_USERNAME):$(RECORDER_DB_PASSWORD)@default.postgresql/homeassistant
env: *pgsqlenv
volumeMounts:
- mountPath: /run/config
name: home-assistant-config
readOnly: true
- mountPath: /run/dch-ca/
name: dch-root-ca
readOnly: true
- mountPath: /run/secrets/home-assistant
name: home-assistant-secrets
readOnly: true
- mountPath: /run/secrets/home-assistant/postgresql
name: postgresql-cert
volumes:
- name: home-assistant-config
configMap:
@@ -100,3 +106,10 @@ patches:
secret:
secretName: home-assistant
defaultMode: 0640
- name: postgresql-cert
secret:
secretName: postgres-client-cert
defaultMode: 0640
- name: dch-root-ca
configMap:
name: dch-root-ca

View File

@@ -0,0 +1,13 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: postgres-client-cert
spec:
commonName: homeassistant
privateKey:
algorithm: ECDSA
secretName: postgres-client-cert
issuerRef:
name: postgresql-ca
kind: ClusterIssuer

View File

@@ -0,0 +1,7 @@
photoframe_next:
url: https://photos.pyrocufflink.blue/next
method: post
photoframe_prev:
url: https://photos.pyrocufflink.blue/prev
method: post

View File

@@ -0,0 +1 @@
ZIGBEE2MQTT_CONFIG_MQTT_SERVER=mqtts://mqtt.pyrocufflink.blue:8883

View File

@@ -61,6 +61,10 @@ spec:
containers:
- name: zigbee2mqtt
image: docker.io/koenkk/zigbee2mqtt:1.33.1
envFrom:
- configMapRef:
name: zigbee2mqtt
optional: true
ports:
- containerPort: 8080
name: http

View File

@@ -31,15 +31,6 @@ metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
8883: home-assistant/mosquitto:8883
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:

View File

@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
resources:
- ingress-nginx.yaml
- tcp-services.yaml

View File

@@ -0,0 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
data:
'8883': home-assistant/mosquitto:8883
'5671': rabbitmq/rabbitmq:5671

72
invoice-ninja/README.md Normal file
View File

@@ -0,0 +1,72 @@
# Invoice Ninja
[Invoice Ninja][0] is a free invoice and customer management system. Tabitha
uses it to manage her tutoring and learning center billing and payments.
[0]: https://www.invoiceninja.org/
## Components
*Invoice Ninja* is a web-based application, written in PHP. The official
container image only includes the application itself and PHP-FPM, but no HTTP
server, so a separate *nginx* container is necessary. The image is also of
dubious quality, doing weird things like copying "backup" files to persistent
storage at startup, then deleting them from the container filesystem. To
work around this, an init container is necessary to copy the application into
writable ephemeral storage.
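For illustration, a minimal sketch of such an init container follows; the container name, source path, and bare `cp` are assumptions for clarity (the real Deployment drives this from the `init.sh` script generated into the `invoice-ninja-init` ConfigMap):

```yaml
# Sketch only: a pod-spec fragment showing the general idea.
initContainers:
  - name: copy-app
    image: docker.io/invoiceninja/invoiceninja:5.8.16
    command:
      - cp
      - -a
      - /var/www/app/.   # application files baked into the image
      - /mnt/app/        # writable emptyDir shared with the main container
    volumeMounts:
      - mountPath: /mnt/app
        name: app
```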
Persistent storage is handled in a somewhat ad-hoc way. There are three paths
that are expected to be persistent:
* `/var/www/app/public`
* `/var/www/app/storage`
* `/var/www/app/public/storage`
The distinction between them is not really clear.  Both "public" directories
also have to be served by the web server.
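In the Deployment below, all three paths are mounted from a single
PersistentVolumeClaim using subpaths:

```yaml
volumeMounts:
  - mountPath: /var/www/app/public
    name: data
    subPath: public
  - mountPath: /var/www/app/public/storage
    name: data
    subPath: storage-public
  - mountPath: /var/www/app/storage
    name: data
    subPath: storage
```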
In addition to the main process, a "cron" process is required; apparently it
has to run `php artisan schedule:run` every minute.
*Invoice Ninja* also requires a MySQL or MariaDB database. Supposedly,
PostgreSQL can be used as well, but it is not supported by upstream and
apparently requires patching some PHP code.
## Phone Home
Although *Invoice Ninja* can be self-hosted, it still relies on cloud services
for certain features.  Notably, generating a PDF invoice makes a few connections
to external services:
* *fonts.googleapis.com*: Fetches CSS resources
* *invoicing.co*: Fetches the *Invoice Ninja* logo printed at the bottom of each invoice
Both of these remote resources are hard-coded into the HTML document template
that is used to render the PDF. The former is probably innocent, but I suspect
the latter is some kind of "phone home," informing upstream of field deployments.
Additionally, when certain actions are performed in the web UI, the backend
makes requests to *www.google-analytics.com*, obviously for telemetry.
Further, the *Invoice Ninja* documentation lists some "terms of service" for
self-hosting, which include sending personally identifiable information
(company name, contact information, email addresses, etc.) to the *Invoice
Ninja* developers.
The point of self-hosting applications is not to avoid paying for them (in
fact, I pay for some cloud services offered by open source developers, even
though I self-host their software), but to avoid dependencies on cloud
services. For *Invoice Ninja*, that means we should be able to make invoices
any time, even if upstream ceases offering their cloud service. Including a
"phone home" in the invoice generation that can prevent the feature from
working, even if it is by accident, is unacceptable.
To that end, I have neutered *Invoice Ninja*'s phone-home capabilities. First,
a script runs before the main container starts that replaces the hard-coded
URL of the *Invoice Ninja* logo with the URL to the same logo in the local
installation. Next, I have blocked all outbound communication from *Invoice
Ninja* pods using a NetworkPolicy, except for Kubernetes services and the
forward proxy on the firewall. Finally, I have configured the forward proxy
(Squid) on the firewall to *only* allow access to *fonts.googleapis.com*, so
that invoices render correctly, blocking all telemetry and other phone-home
communication.
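The cluster-side half of this restriction is the NetworkPolicy (shown in full
in `network-policy.yaml` below); the rule that forces all remaining outbound
traffic through the firewall's proxy boils down to:

```yaml
egress:
  - to:
      - ipBlock:
          cidr: 172.30.0.1/32   # Squid forward proxy on the firewall
    ports:
      - port: 3128
```

The matching Squid ACL, which permits only *fonts.googleapis.com*, is
configured on the firewall itself and is not managed in this repository.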

View File

@@ -0,0 +1,48 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: invoice-ninja
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 40m
spec:
rules:
- host: invoiceninja.pyrocufflink.blue
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: invoice-ninja
port:
name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hlc-client-portal
labels:
app.kubernetes.io/name: hlc-client-portal
app.kubernetes.io/component: invoice-ninja
annotations:
cert-manager.io/cluster-issuer: zerossl
spec:
tls:
- hosts:
- billing.hatchlearningcenter.org
secretName: hlc-client-portal-cert
rules:
- host: billing.hatchlearningcenter.org
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: invoice-ninja
port:
name: http

View File

@@ -0,0 +1,16 @@
APP_LOGO=https://invoiceninja.pyrocufflink.blue/images/logo.png
APP_URL=https://invoiceninja.pyrocufflink.blue
TRUSTED_PROXIES=172.30.0.171,172.30.0.172,172.30.0.173
MAIL_MAILER=smtp
MAIL_HOST=mail.pyrocufflink.blue
MAIL_PORT=25
MAIL_ENCRYPTION=null
MAIL_FROM_ADDRESS=invoice-ninja@pyrocufflink.net
MAIL_FROM_NAME='Invoice Ninja'
EXPANDED_LOGGING=true
http_proxy=http://172.30.0.1:3128
https_proxy=http://172.30.0.1:3128
NO_PROXY=local,pyrocufflink.blue,localhost

View File

@@ -0,0 +1,201 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: invoice-ninja
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
app.kubernetes.io/part-of: invoice-ninja
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 3816Mi
storageClassName: longhorn-static
---
apiVersion: v1
kind: Service
metadata:
name: invoice-ninja
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
app.kubernetes.io/part-of: invoice-ninja
spec:
ports:
- port: 8000
targetPort: http
selector:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: invoice-ninja
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
app.kubernetes.io/part-of: invoice-ninja
spec:
selector:
matchLabels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
template:
metadata:
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
app.kubernetes.io/part-of: invoice-ninja
spec:
containers:
- name: invoice-ninja
image: &image docker.io/invoiceninja/invoiceninja:5.8.16
command:
- /start.sh
env: &env
- name: DB_HOST
value: invoice-ninja-db
- name: DB_DATABASE
value: ninja
- name: DB_USERNAME
value: ninja
- name: DB_PASSWORD_FILE
value: /run/secrets/invoiceninja/db.password
- name: APP_KEY_FILE
value: /run/secrets/invoiceninja/app.key
- name: APP_CIPHER
value: AES-256-GCM
- name: TRUSTED_PROXIES
value: '*'
envFrom: &envFrom
- configMapRef:
name: invoice-ninja
readinessProbe: &probe
tcpSocket:
port: 9000
periodSeconds: 60
startupProbe:
<<: *probe
periodSeconds: 1
failureThreshold: 60
volumeMounts: &mounts
- mountPath: /run/secrets/invoiceninja
name: secrets
readOnly: true
- mountPath: /start.sh
name: init
subPath: start.sh
- mountPath: /tmp
name: tmp
subPath: tmp
- mountPath: /var/www/app/public
name: data
subPath: public
- mountPath: /var/www/app/public/storage
name: data
subPath: storage-public
- mountPath: /var/www/app/storage
name: data
subPath: storage
- mountPath: /var/www/app/storage/logs
name: tmp
subPath: logs
- name: nginx
image: docker.io/library/nginx:1
ports:
- containerPort: 8000
name: http
readinessProbe: &probe
httpGet:
port: 8000
path: /health
periodSeconds: 60
startupProbe:
<<: *probe
periodSeconds: 1
failureThreshold: 30
securityContext:
readOnlyRootFilesystem: true
runAsUser: 101
runAsGroup: 101
volumeMounts:
- mountPath: /etc/nginx/nginx.conf
name: nginx-conf
subPath: nginx.conf
readOnly: true
- mountPath: /run/nginx
name: run
subPath: nginx
- mountPath: /var/cache/nginx
name: nginx-cache
- mountPath: /var/www/app/public
name: data
subPath: public
readOnly: true
- mountPath: /var/www/app/public/storage
name: data
subPath: storage-public
readOnly: true
- name: cron
image: *image
command:
- sh
- -c
- |
cleanup() { kill -TERM $!; exit; }
trap cleanup TERM
while sleep 60; do php artisan schedule:run; done
env: *env
envFrom: *envFrom
securityContext:
readOnlyRootFilesystem: true
volumeMounts: *mounts
enableServiceLinks: false
affinity:
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- invoice-ninja-db
securityContext:
runAsNonRoot: true
fsGroup: 1500
fsGroupChangePolicy: OnRootMismatch
seccompProfile:
type: RuntimeDefault
volumes:
- name: app
emptyDir: {}
- name: data
persistentVolumeClaim:
claimName: invoice-ninja
- name: init
configMap:
name: invoice-ninja-init
defaultMode: 0755
- name: nginx-cache
emptyDir: {}
- name: nginx-conf
configMap:
name: nginx
- name: run
emptyDir:
medium: Memory
- name: secrets
secret:
secretName: invoice-ninja
- name: tmp
emptyDir: {}

View File

@@ -0,0 +1,31 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: invoice-ninja
labels:
- pairs:
app.kubernetes.io/instance: invoice-ninja
includeSelectors: false
resources:
- namespace.yaml
- secrets.yaml
- network-policy.yaml
- mariadb.yaml
- invoice-ninja.yaml
- ingress.yaml
configMapGenerator:
- name: invoice-ninja-init
files:
- init.sh
- start.sh
- name: invoice-ninja
envs:
- invoice-ninja.env
- name: nginx
files:
- nginx.conf

111
invoice-ninja/mariadb.yaml Normal file
View File

@@ -0,0 +1,111 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: invoice-ninja-db
labels:
app.kubernetes.io/name: invoice-ninja-db
app.kubernetes.io/component: mysql
app.kubernetes.io/part-of: invoice-ninja
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: invoice-ninja-db
labels:
app.kubernetes.io/name: invoice-ninja-db
app.kubernetes.io/component: mysql
app.kubernetes.io/part-of: invoice-ninja
spec:
ports:
- port: 3306
targetPort: mysql
selector:
app.kubernetes.io/name: invoice-ninja-db
app.kubernetes.io/component: mysql
type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: invoice-ninja-db
labels:
app.kubernetes.io/name: invoice-ninja-db
app.kubernetes.io/component: mysql
app.kubernetes.io/part-of: invoice-ninja
spec:
serviceName: invoice-ninja-db
selector:
matchLabels:
app.kubernetes.io/name: invoice-ninja-db
app.kubernetes.io/component: mysql
template:
metadata:
labels:
app.kubernetes.io/name: invoice-ninja-db
app.kubernetes.io/component: mysql
app.kubernetes.io/part-of: invoice-ninja
spec:
containers:
- name: mariadb
image: docker.io/library/mariadb:10.11.6
env:
- name: MARIADB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-root
key: password
- name: MARIADB_DATABASE
value: ninja
- name: MARIADB_USER
value: ninja
- name: MARIADB_PASSWORD
valueFrom:
secretKeyRef:
name: invoice-ninja
key: db.password
ports:
- containerPort: 3306
name: mysql
readinessProbe: &probe
tcpSocket:
port: mysql
periodSeconds: 60
startupProbe:
<<: *probe
periodSeconds: 1
failureThreshold: 60
securityContext:
readOnlyRootFilesystem: true
volumeMounts:
- mountPath: /run/mysqld
name: run
subPath: mysqld
- mountPath: /tmp
name: tmp
subPath: tmp
- mountPath: /var/lib/mysql
name: data
subPath: mysql
enableServiceLinks: false
securityContext:
runAsNonRoot: true
runAsUser: 3306
runAsGroup: 3306
fsGroup: 3306
volumes:
- name: data
persistentVolumeClaim:
claimName: invoice-ninja-db
- name: run
emptyDir:
medium: Memory
- name: tmp
emptyDir: {}

View File

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Namespace
metadata:
name: invoice-ninja
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja

View File

@@ -0,0 +1,46 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: invoice-ninja
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
spec:
egress:
- to:
- podSelector:
matchLabels:
app.kubernetes.io/part-of: invoice-ninja
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
- to:
- ipBlock:
cidr: 172.30.0.12/32
ports:
- port: 25
- to:
- ipBlock:
cidr: 172.30.0.160/28
ports:
- port: 80
- port: 443
- to:
- ipBlock:
cidr: 172.30.0.1/32
ports:
- port: 3128
podSelector:
matchLabels:
app.kubernetes.io/component: invoice-ninja
policyTypes:
- Egress

70
invoice-ninja/nginx.conf Normal file
View File

@@ -0,0 +1,70 @@
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /run/nginx/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
gzip on;
keepalive_timeout 65;
upstream backend {
server 127.0.0.1:9000;
}
server {
listen 8000 default;
server_name _;
root /var/www/app/public;
index index.php;
charset utf-8;
client_max_body_size 0;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location /health {
return 200 'UP';
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass backend;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_intercept_errors on;
fastcgi_buffer_size 16k;
fastcgi_buffers 4 16k;
}
location ~ /\.ht {
deny all;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
}

View File

@@ -0,0 +1,32 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: mysql-root
namespace: invoice-ninja
labels:
app.kubernetes.io/name: mysql-root
app.kubernetes.io/component: mysql
app.kubernetes.io/part-of: invoice-ninja
spec:
encryptedData:
password: AgCWJhpMd/GmSzYZv+lofE9vQrTBewpeUO7rPnZGy5n9lvwwSin3DSzqeUCh37byCQ086VjIA1AqcJAXkur8dcZWXRAXY3H26rDoEMjGIyfrUEByCLhSNhL3sK7AcE14QWOuoxtUSbGk5RmYc+qvIw8b4l/dNpEnatLCRUeF9CefMgnTk2phVMlzkasvXjxAvxcBIvDg7DLcBOsenGg1xNG8j8wQ8flGsX6bWHmlt1+EBhyp+8PS+GyOT1BmjnVyQeo2mKwXm+FY9WHlEswypKTVQAsV6F0fUh9gIFoAdklOMwxbaW8321xLfQQvB4Qkbx8N0YJYy1jFNMF6plwcZhE7KwxXoNjW3GQhyGqTq/iFDi/oLJmAjxH9Vz8RPGT5IyOLRIkrQjCDhWrIHAEh1TUVF2BorrV8gIQOLV2xP2Lxa20KIjVZdosntWPc8bp8Br4RiP0JIK/ktRIMt+cCOwwrux8FhJe8WklujnaiZ1HX7G8dgidtjmUXYBxyNOZ9FMs2+c7D3bgqNQsTQ/NMlyP02l5oXUNzQpIVNbY4t+AT0ISn8NP9xDmLVwFw0Y3lJbx5rDtqaSFivkMOsp20l/JVUkeyig3Trm6OLh9FzI6Qr4Qo6fPBSrqKu1ieQPF76C80phrTWwtiK67i2LSmtb2zAvm3Hwj4X4Ag7HIi8F7zF7HjgOcmmS+6fIgyaIufE6IeQtwFwekbWGTHWDFddias9qHBuM1QcnQP/SJZkZrR/A==
template:
metadata:
name: mysql-root
namespace: invoice-ninja
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: invoice-ninja
namespace: invoice-ninja
labels:
app.kubernetes.io/name: invoice-ninja
app.kubernetes.io/component: invoice-ninja
app.kubernetes.io/part-of: invoice-ninja
spec:
encryptedData:
app.key: AgA1p4ay2avNqkFYzLIB5VXxIjFBoZ8YZksf89Q81vCXrUs9mxd2/PuaRFeVu9WZwYXElT/esF2TV8P0JfXI3OJrRqDTSDnPwphTTxDYokFTvK01kCMwjaGWnINX02aFL+N5rCpETBw2Ptojq3SRltbIG5xe9e1sfe0ltWU2YiwQ7Zp9ekQKV2A5TZjZ55rtjO6C3HqBYWz/AnxiizWGgBAIxHuM4D0mPHOK4u/7tnfC3CIaD8UqINE1Dz3qfeDDgf0rzgVq+b6pbaoEquCkvvbng2rFfY/MVq3tb3gFGR6G1qWA+XKPJwBm9ODWcHxAliqzMsvja26izwtK8ci3VBmaL6mWcyuaWcfA4Wbjo2sb7srcS9COjUvuZf6NiTqehBpplUHkqoyFq9+QO2zfVUZ0PMltEEuTmoG6K0PSnzROUjim5FavnVCzKlvLv8ChG/GE3sYMWxBFfJCpnXhfPkNyghok+WGiTqc4pBIy3KnqbCs1xxZhfJ3UVvd1Rkq1xEW3VnEJSg76EDerj5J526Q6V6i4dbXHnxeeoHLUfIGapIa4Pv63DwxSaH72gJXrtrT4X6XgQjEuZofLvm2q8QToTxSezT4Fc76ojVOJF3ssPJMwC7mx8hcYDjy1cWkw5pNvLgj5QDAspAgKSMuoe0YQU2ES7oOtTHZ7peSAGz+8zoJNoy1gQCGH95uztt/tkCyYPl3JaYF0A9j8oOWxAfJJMVBcxszt9EqD+j246JofJM5Gnn5SpgrIyWngpiK3G2xiM68=
db.password: AgCtaz+QGcnKsKonIsAmT9oiZHd9cmj4ntKZ2vhH4cwDzCw0mHu3s1NKGTFxqVkrxyn0S2PbM+6gSXFyz6FtxI+nhb1gP6+QbSLmbJk35+sdC90WYj51r+k1tjugGaw8RpdAACxHSe7Vf8S0fPS5JFqrLR5HzmthoqNwzChcXjALCkArSXEG2kuQj0Dx72NTStYOQCPth0pPFytako3gHlSHFgGrjQ/g/hOnrP/booIFl4GMAZnJ5CgwI4XQKP4VvyK8msF93T278pyFr7fFFVSLrYrzpqFYfpKrHdiwirooed52Xlwpy6tfFsD64kZ0hDd5xbzXStNxDBkPOOgEu+KSbqUuGu5s/TmqOhxD394RU3AcwiFiQ5nASldeTmzVqC1et5Wx2IuD1b0hVcqGTNh/6uaZRSSchM7enja1v++nd9W7eYkCLdzxUjMC5+GDC/MwNYrrPoIOZAjOLii2UTH0WmvvTu8R79wRmgqCzLykS2VQWaBcMlVQsbyj/IjBbAhTwZ1bu0HKwDQbWckCFQTixR1k612U3gK8P/TsqspSkip9WtlaR3eSwrjqImTe4fhdLI8B6oEYm/D6h4ciXthkl2uYtyd3gwMf3TsHlrev+aOV0K98oaAPkV4EkbDTSQfZDEvAlFwoLScPHBIUahWKPADES7O37cFwkYPo8JOC2yLjYGOlWh997EUsnB/rk2cIdlbVaS8HIWwO1QdPWHelOgBYo1lcWesxRswB1SM7bQ==

11
invoice-ninja/start.sh Normal file
View File

@@ -0,0 +1,11 @@
#!/bin/sh
set -e
# The Invoice Ninja logo on PDF invoices is always loaded from upstream's
# server, despite the APP_URL setting.
sed -i \
-e 's@invoicing.co/images/new_logo.png@invoiceninja.pyrocufflink.blue/images/logo.png@' \
/var/www/app/app/Utils/HtmlEngine.php
exec /usr/local/bin/docker-entrypoint supervisord

1
jenkins/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
iscsi-chap.yaml

View File

@@ -29,3 +29,11 @@ Clouds*:
[0]: https://plugins.jenkins.io/kubernetes/
## iSCSI Persistent Volume
Because of the large size of the Jenkins volume, it does not work well when
managed by Longhorn.  Instead, we use a pre-provisioned iSCSI volume on the
Synology NAS.  This improves performance and avoids keeping multiple replicas
of the Jenkins data, while still benefiting from snapshots, etc.
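Concretely, the claim is pinned to the pre-provisioned volume by clearing the
storage class and naming the PersistentVolume explicitly; this sketch combines
the patch in `kustomization.yaml` with the PersistentVolume defined in
`iscsi.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  # An empty storage class disables dynamic (Longhorn) provisioning;
  # volumeName binds the claim to the pre-provisioned iSCSI volume.
  storageClassName: ''
  volumeName: jenkins
  resources:
    requests:
      storage: 40G
```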

View File

@@ -1,25 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: jenkins-snapshot-hook
namespace: jenkins
annotations:
argocd.argoproj.io/hook: PreSync
argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
template:
metadata:
labels:
app.kubernetes.io/name: jenkins-snapshot-hook
spec:
containers:
- name: jenkins-snapshot
image: docker.io/curlimages/curl
command:
- curl
- http://longhorn-frontend.longhorn-system/v1/volumes/pvc-4d42f4d3-2f9d-4edd-b82c-b51a385a3276?action=snapshotCreate
- -H
- 'Content-Type: application/json'
- -d
- '{}'
restartPolicy: Never

View File

@@ -0,0 +1,51 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins2
namespace: jenkins
spec:
accessModes:
- ReadWriteOnce
storageClassName: ''
volumeName: jenkins
resources:
requests:
storage: 40G
---
apiVersion: v1
kind: Pod
metadata:
name: migrate
namespace: jenkins
spec:
nodeSelector:
kubernetes.io/arch: amd64
containers:
- image: git.pyrocufflink.net/containerimages/dch-debug
name: migrate
command:
- rsync
args:
- -aiHAXS
- /mnt/jenkins/
- /mnt/jenkins2/
- --exclude=lost+found
securityContext:
runAsUser: 0
runAsGroup: 0
seLinuxOptions:
level: s0:c525,c600
volumeMounts:
- mountPath: /mnt/jenkins
name: jenkins
- mountPath: /mnt/jenkins2
name: jenkins2
restartPolicy: Never
volumes:
- name: jenkins
persistentVolumeClaim:
claimName: jenkins
- name: jenkins2
persistentVolumeClaim:
claimName: jenkins2

57
jenkins/iscsi.yaml Normal file
View File

@@ -0,0 +1,57 @@
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: iscsi-chap
namespace: jenkins
annotations:
app.kubernetes.io/name: iscsi-chap
app.kubernetes.io/component: jenkins
app.kubernetes.io/part-of: jenkins
spec:
encryptedData:
node.session.auth.password: AgAR1jsfJ0/jzQBwBXhbes8xI30qGjCtI20Zny1cf4vh39xdS28PGok2B9VEMFaZwit8PKCVecPo+Xfc/KBQCx57kfkRjfOEbSr32sYsT/rdtldYQwLYuDZ9hT9tto4cXFcMSKWQwPMdCuqF0vn4M2mhCcs0KyMNpemGqkPux0maAa6wgKNNGgNitg/EymDVhZBYflQxA8E+JXVrdvlj6wmRr5WW/3Xx/yWUlGQfiZeihORm/Ab+CL2p99LpGZVLitiL3tsMly19/ibt0OU6pkaKnL9Rb7EBxcdpYdRRVDbKyuyGRPyX1vsTM4u5IpX2HmXW4jRJQxpwnzQ2dcthQyKIh7IkezeiFOeHh+AOfo3lmF2nHOMFZmb+848G02+3qYDnGBzMTaZ/gWjjtR9ronlCSCH1drUQ7YIOWsW3anqKJwZs+oqbZddA9hW8ya6y4cRxcKqloFQteXI4EIBuJii2BRCsvg6zHExARZhHZMf6B3SEW9UjRDDJHVOiFg8tJP2UAsLm6yOsYUDE1Ld8JeLz7NvyPA/M4UtuGyI8nNDlv83nPZOyYq/h9gRHp4TG7Qo4YZDFRMdV1soz51WI1wUOzXRZD8Tia5CleDxN9fiyLpVnC8Z38AhIo4yVByjjTIV471a67ta2U0zoHQ/gqxrq8G+bkrP55ygXCiDybVOJrcS1jPO5UUtRa5H5GBhbQFQ5Q5X9eRQ+Qmqm12ScRYD4
node.session.auth.password_in: AgA+2CglEAZH7NQvhzZWWFs+IvZsVNCy58BX/5PLTIgyaFLlXUW7tyoA49CcnTAtYqxcTLSvqcRpstjGtl5Eq+y08mzC0VSAytiRWMJ9fZwq+h4eQEfabPFtMNVZOVMm+0c8NADWD8PLkIWb6yp6QbGv4uN+Abo+uWXgTPHCw+8TOMcYxs5RcSPjkg3jvCJuCZi8IuTmUMzCiCrpZMSTNZaGh4jD0tJjrBWFwvRnkFlgy2skMNY0LoPZ4ZmYp+KGE1IK/Soom22xOwG7NdKDMHYD4hZrflAqLBKcEb0AiM12j3v4UoQSUfZ4+KcTZvtgSf127HaivS34w8payZANh9izzzxZlwA2wc3GacCFzQrpDIsRI9QgDrxGDNFSwBLzFjmWMD/eWLsY6xgZnS5Q5wHiCW10t+13KHvhl59ovf6UEjtCwH14dfMn/qMu+Sd8tqrZGV5dSHLyFD5PufGkuMw3G6YTsTRU5APdShbnhs8StLccEI6drYjotn7g7xDyzlHEuswc8kD//W8PDyXTBenSgLHeu61Ud1O843KjVcPxdlIFaRGX1EsQUj+zXm8v1A4Ixfm07JRn03FfurJy/NKhKsqvrrafXGNXTh3CLlHObdk3Uqj9IxbDUqzcuZ8+sL9Ia/NW9gN7CRUa1ARfMbrZ/uzNSEulo8vqS3M3DKk+xNm+ivqt4ZEVDOYrHwBGM/HOxDr5TOvck/SSVax8dv3y
node.session.auth.username: AgAFfDEVU9BJF085N3V64AP6ZU3ImY7gIsuqqfEbrTOekz9jyvQkrsA1mTNUmfnvD6oJJGv2XYGjxzPsiw//YZSUwmXLgYMBzHYWRl4yFvDmNUo3W3au9rUUha1WxBQ/s7V8Qz0ucoaYhpaMEXZ39OguLzpwYpiCAiBLn10JlGAWFyPfRRYl5Ybt/WpdHV/b0pog6HKO64xWhIu+jxa4KwszfYbYQ0HNnHHsGZpGukMwFLs/RHb+G/ob/5jjhW7wqf1qfuotNJ6scgIHTv+N8Zd057Yfg+syNv4GBECTjH7xXAKKDUJ31pORRI7mDyuq7s24IqJHYwZEMxN2+Pa9l24f73hIa1ZJLS2FDTK4C6mWRvTFwlYm4KSYp3OJgHW/+78wkAIjKOokn5BUKaArEZqUYiftjFu0js1WVZ1AF2ZasH0PdCqm8GHC9nAG39+ZKNgoVF2SC1In9a7/RtxV3VEG5D9o5E7FoZoLF6RXWfn3k1xeFYJ+RZwcLKw9hIe8lrEOZzA1ikGLijUShKZrT1wH4Hh+lMMDqld0kga406BtEJaBCIoNGdvtDLs1IxO8znE+EDIjd8bQwbEBzzigeIrL5B9+F7jeDqD+4wxCs76cJzvrj2AKRR3RtCs8t0nDrfOmE2X7wOGZXjypBmAYGeNww0Y8Y5CeqIfdHW/x1Pe8zR9PMzoE3OFHstepHj8451SpVAKsQj+9
node.session.auth.username_in: AgBREdELfPLFznIgcaB8kaOTef3MSDz7FsuHBZ+PJNR8mOmbFeo6K+D6Kx+tPHXd9sLect9q5gPYHqRtJLHcvEL/YU566nUodn+DDudlLrjJB5x7kf8dXkYXzUpEznq/nF6OLNZjEntjb5FqtPRJQY9JV+zTF9hSGJiC++7rDHrBTeBWXu/I+czoOu6Zx1N5r6f9DMTKkr9YjRBawPVfo4ySR5Zq2nZAwI0/HwAaBaOgVdpxpLSlZ29W761Gd43kxz11ngctRWR8BjPFhVI8JOv7zaTex65gpDB7YPAd1tZobVxtKUijrRa45N6j6VqFYq3+t1vVFjFPoGM/DUKqd9XClSUkdENlGZB+71UVoR6qLiNRQejA6LNN/ZycGQlTDRYr1wljTYeUu//x/ZouPKWSD0jAh91Z9qL0wh+L5gghOSS2vkqU+Vsbtl3E9PZ+yHK3jE9+fuYBmrHdUWovNpQMkk+9Rdhhv6oET2zzbl9BwQgnuF1LCO+GLmEhYxfUlwR3Ki9i1+PuvBx2NJyr2ukRf1uiH5WoQDs2eJEEmM1LtzYf+mzWXR752zpM8GaMAA6CcWbIw5XpuCaAEfOalgHnwIPJE2pK2AZv4n8hYGqWhYqQizVHHLm1dyllzYiB2kaJwUOlTlfWzQ9mXrME7KCbMhuMVvGTG6bn87ER1C9w+bCmk93J1xG6EFwhG6fU3n8zpNZ1AzNEAA==
template:
metadata:
name: iscsi-chap
namespace: jenkins
annotations:
app.kubernetes.io/name: iscsi-chap
app.kubernetes.io/component: jenkins
app.kubernetes.io/part-of: jenkins
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins
namespace: jenkins
spec:
accessModes:
- ReadWriteOnce
storageClassName: ''
capacity:
storage: 40G
iscsi:
# Has to be an IP address, even though the documentation says it can be a
# hostname. Otherwise, error: "Could not get SCSI host number for
# portal"
targetPortal: '[fd68:c2d2:500e:3ea3:8d42:e33e:264b:7c30]:3260'
iqn: iqn.2000-01.com.synology:storage0.jenkins.8181625090
lun: 1
# Synology does not require CHAP for discovery/send_targets
chapAuthDiscovery: false
chapAuthSession: true
fsType: ext4
secretRef:
name: iscsi-chap
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: network.du5t1n.me/storage
operator: In
values:
- 'true'

View File

@@ -162,7 +162,7 @@ spec:
spec:
containers:
- name: jenkins
image: docker.io/jenkins/jenkins:2.414.3-lts
image: docker.io/jenkins/jenkins:2.426.2-lts
imagePullPolicy: IfNotPresent
ports:
- name: http

View File

@@ -7,8 +7,8 @@ labels:
resources:
- jenkins.yaml
- argocd-sync-hook.yaml
- secrets.yaml
- iscsi.yaml
configMapGenerator:
- name: ssh-known-hosts
@@ -17,3 +17,14 @@ configMapGenerator:
- ssh_known_hosts
options:
disableNameSuffixHash: true
patches:
- patch: |
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins
namespace: jenkins
spec:
volumeName: jenkins
storageClassName: ''

View File

@@ -0,0 +1,8 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBhbVc0QnVhOG94QmRNZGpN
TEpQZnVlZ0ZhbzFqMUpMOU0wdnNzbE05c2dVCnliS0I1WmhKY2EwWFVabEtOYi9G
MVJGNkVFTDdBc0RyUXJGTmpqcXZBb00KLS0tIHN4ejFzQWlka2d4QzdieW5FSzgr
RTBQdzVjVUVwVHlGU3RqaWtuZ2VyQ2cKofjXsYyJO80H4QK54Sjlpde03n6mpmKU
3TzgMzdGPFGwmvDLjxrnAAu068zbeIop3Fh419VR07U0h2qzSZDUzJv2F3fAgB6B
WjkNYDgZ9xAjIKsh2SN7h/M7GOsKaD+cW1kR3ZFGQnTSyYQ=
-----END AGE ENCRYPTED FILE-----

View File

@@ -0,0 +1,8 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB4YkNQM05YWXlnUVphenN1
eElvdGlpWDFQUjRKYkFrTngvNTgzZHhUTlJnClo5MkpEZW1GZkI4d3paM2tZSlZU
cXBDT2hFZThaenBhYktDbkIzZGZJUEUKLS0tIDA4TDJ3VWI0cC92NTZZemZpZUM2
cklIb01jM05wZlBTczg2MGhESUtTTVEKO7mBlUZ7CIDvyXlr89R779AEhCn7i/XJ
aarzlaxKNdCecEgcvcVtmpNcmh3J+C9WjwqFCFjJ9LPkj6x6Aqm/RyGSThBeyNDt
YAlMtV24Vewqa1jBFwkVV9VPl0QjfjcQ4niYdJ11Qrd1SqU=
-----END AGE ENCRYPTED FILE-----

View File

@@ -0,0 +1,8 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB6ZXZDWnpzcXgvcFdtNnRE
Z3RqRXFMY013ekNyUi9nS2lTbUZyd2NIVHpzCnFGMVk4MXpiVlhyTHBWcWZqeWI0
VnVzc2ZzTXVOWmttdnNPblR3YzRna1UKLS0tIFlJK2c0dEV3UVpRRnFtZm9CdFMv
V0xOU0FNd2ZwemMzamZLM1VJbFJGdHcKCSvZFqk9Kya6hTM3n8cZ5DzL2+PH04ZP
ieVpAgT/K7vW4iFlIj2m6FBOIpfxr2IEgogUD7Kznzji5G+WpiScnnuOGus9DKhG
yplTob4ADxM1UZuGVMEsfCQSs1YXVw/R+ewrVJ9vGr/1CGc=
-----END AGE ENCRYPTED FILE-----

View File

@@ -19,6 +19,9 @@ burp1.pyrocufflink.blue:
gw1.pyrocufflink.blue:
- age1dcyvkqde4j43gz6pzk6u8g3ph85tj3qr0tucr9lkcy4sgyqshe8qzq7d20
loki0.pyrocufflink.blue:
- age15pgrrmnkvyustmtlhj4v9u5h86mltmjxdtelpzhffyj3qyeg73rqpt9z2d
nut0.pyrocufflink.blue:
- age1c6swn9tm0502jd3e0yszfd4qd7lgx2nd9uk0hruuckhx7zpn3utqhau7mz
- age1fc96yyd7a7l3uc4jr8sk06h8al607gjxd89q435jlp6nsmrhqflq5dkhtq
@@ -29,10 +32,18 @@ nut0.pyrocufflink.blue:
- age1kfqgu0ug40uwrsqx94azeflg58wp4ckx3xsm5l2y6zvw95zqygfsy8x69t
- age1xfmmwhutwr4cml4dlj6rq6r9mgjs3fake0q4wuly5z9r9mqgk4nsk53d5j
- age1y5cdw7xct9f50yurw7h5flck8jycv0t4m4qj72frep3z09344pus9x4nkc
- age1skhy92fp4kw7zzz63uunk9mhlvld2rf7s7nzecl0326drcdzjdjq7rcfze
nvr1.pyrocufflink.blue:
- age1668cmw7jeyfawpdp7c6c79hdqdmvzjrkuszz4c96sfugkyjsr39qv4vsg7
nvr2.pyrocufflink.blue:
- age15dkzhzhu5lh9va8u60fevuuc5q3tu9n7clz092m4gmvytkwnsf9qhcuked
- age1skhy92fp4kw7zzz63uunk9mhlvld2rf7s7nzecl0326drcdzjdjq7rcfze
unifi2.pyrocufflink.blue:
- age1lu2z3flgg77f39mkklqrpacjk5qsdwf9fyqmhn5ljc2sdef0vg2qvqp7ef
vmhost0.pyrocufflink.blue:
- age1y3hea7a4rpeyjhcrcg29lsfzg9guwqeqx6m6q6szt5wuc8guy3hsl6t33e

View File

@@ -44,6 +44,10 @@ secretGenerator:
- age-keys/age1kfqgu0ug40uwrsqx94azeflg58wp4ckx3xsm5l2y6zvw95zqygfsy8x69t
- age-keys/age1xfmmwhutwr4cml4dlj6rq6r9mgjs3fake0q4wuly5z9r9mqgk4nsk53d5j
- age-keys/age1y5cdw7xct9f50yurw7h5flck8jycv0t4m4qj72frep3z09344pus9x4nkc
- age-keys/age15pgrrmnkvyustmtlhj4v9u5h86mltmjxdtelpzhffyj3qyeg73rqpt9z2d
- age-keys/age15dkzhzhu5lh9va8u60fevuuc5q3tu9n7clz092m4gmvytkwnsf9qhcuked
- age-keys/age1skhy92fp4kw7zzz63uunk9mhlvld2rf7s7nzecl0326drcdzjdjq7rcfze
- age-keys/age1lu2z3flgg77f39mkklqrpacjk5qsdwf9fyqmhn5ljc2sdef0vg2qvqp7ef
options:
disableNameSuffixHash: true
labels:

View File

@@ -38,24 +38,25 @@ spec:
env:
- name: TZ
value: America/Chicago
- name: SSL_CERT_FILE
value: /usr/lib/python3.10/site-packages/certifi/cacert.pem
imagePullPolicy: Always
ports:
- containerPort: 8000
name: http
securityContext:
readOnlyRootFilesystem: true
volumeMounts:
- name: config
mountPath: /kitchen.yaml
subPath: config.yaml
readOnly: true
- name: tzinfo
mountPath: /usr/share/zoneinfo
readOnly: true
securityContext:
runAsNonRoot: true
runAsUser: 17402
runAsGroup: 17402
volumes:
- name: config
configMap:
name: kitchen
secret:
secretName: kitchen
optional: true
- name: tzinfo
hostPath:

View File

@@ -12,3 +12,76 @@ spec:
name: imagepull-gitea
namespace: kitchen
type: kubernetes.io/dockerconfigjson
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: kitchen
namespace: kitchen
spec:
encryptedData:
homeassistant.token: AgBkwchVoL92wCgulLOyoGhafLj7Vz4Ix5dSYBFIDCunHK30qtsoXqS/EL4k8zjU5+eYjJxju+ysj56Ayz/wvT1g72dm+09ijKs2yXWmUiJLAnKCtOZ1s0q5404Pm/D4/aRklh38kqPDFsAsbVExxMsFrkcA5g2EcZPg6yz2jGiavTIyGRX9uahFCAuF6a/BMifNUeTFBwrP1s9fonBwe4BUV4eYzUPE85FewPgCCnT0Gztyxbt+1/Y/HKhSMHxifpWLBWO8OOKo27urBe7Jir/Dblemg5rPOrj2pIggUg0Rym3ZRHSZyZQVvF4Zq30b7PU+zeYqqcJZsg2ZOPna7PeFriMqP57BMZ39gLCnUU22MQM6CeIKuCYl5tzE5Ygd5SK4lf1avsQUj3LuFMUk3OJaSKAdX+4y4pQbyDV+ppukL+ziaQYKIIWPUDESFyNEswoPKqjk4jjJh7peZuGUs7t599fHzYgZPTeSqwedY+0Eal23Q11Q5TxvHU0hgJEECEC9RZRuJMHAKHVkgXYlco/n4Ka8dYXbgYAjflRhH995n9EY05hRBDW/lYwTxaM6nTK2VjzvCnkFVWq4HdJ/mfwqeRY5mxdTrKr0Xgg7lVP/xTsMy5/aHTCQR3hpAZoUK4W4nqTYlQv2RACETMZwQP/tF/lTcC+59beETlUO6pOvqp3IOq1Ah6NbUzM73hszKn3o3lAlVQG0FO2WT0OHrrbPPsiZX/8Y5mzpROwLKKrOMcEr3Bq5rQIsI7kqi1fS1mFIb+W5P7sbKjQlzit0xNo9HYs7dc/0yCdDRZK5KXe+mFC4CttWFiO+UoLsidBnRI/+wFXT1Y8aO1ad1QZC63hCCKao3Oty4q0B5EBBT4g2mygJixfz+L4Pw3eeV+nLp1x2vBahfevuDK3KfJji5g5pOFoxPmp/qKGJO7pq2juntgSGshe35qA=
nextcloud.password: AgAofzy2WrVN7ekNwxOCZT1599OIsO2hBJtBqVRPG2v9gpqxkj/UKPF4rKs73rIp5la4yeb7fRKJdL40m4synOIwQAmVTn6LW9emG4r+6GWHfqvhXhcGgGcUKl0oRn22+5e3QQLpta+qO8GFcEysNCQTSkzQsV87oU38p2/LiRbzlu8FiY63HYw2mGN1wiH91oDpdcS3Wfpeb5O9fQp+ab6zlngIXXdHTjxfusQ+YPFf2u8887FyKLxPC5ktgVFVx7rYAEzWajG8VzMqsTE/YWeQrkyKBRFGrwXFpFUDYwOczZiCmaPo5+qxeeUKCT8m+8D/+kiIRQqrzoulOmdQ5KT70WndkuAuEVNIzSXJzNxXs/85CCNkGbGHrE/Rv/sK1yCpMHvNNaJriNeamDAgPYV/fM3Ag6nNmQLwYZLshij+BXUng2p2/qP3YFIpDpKshLJeOMAWpHg2VUZZPOkj3UFvFItdKYtRKw2p4XNg2dUrQZUgZ8TE3MRcUaIgq00kd940htBEQc9FbF1C9XJpbV5Fe93tYYP8oMGauGSKviimZL7v4kGwJICMO2pYWDNx/D/WXkEu8o4W3K0gJ/koIJVJrd4/UME28eQCwgUa5/+a0kitynegOiDijIlNBRxnMdI9ojZ0g8K7MSrw1De0fZYpW7OxTHWtbhLT1z4sCuMxolOQfAk93dKVQgUWgDXtJUmnXhhvcEthW2a4Ez610w5P3mmkzkhBBfQ8h/VehxJ5kN5DHdWpkfmFvDcGkIb2
template:
metadata:
name: kitchen
namespace: kitchen
data:
config.yaml: |
__credentials: &credentials
username: kitchen
password: >-
{{ index . "nextcloud.password" }}
__calendars:
tabitha: &tabitha_work
<<: *credentials
calendar_url: >-
https://nextcloud.pyrocufflink.net/remote.php/dav/calendars/B53DE34E-D21F-46AA-B0F4-1EC0933AE220/7c565cd0-a8f1-4ea7-b022-3c1251233e91_shared_by_53070922-AC26-4920-83FD-74879F5ED3EE/
shared: &shared_calendar
<<: *credentials
calendar_url: >-
https://nextcloud.pyrocufflink.net/remote.php/dav/calendars/B53DE34E-D21F-46AA-B0F4-1EC0933AE220/shared_shared_by_332E433E-43B2-4E3D-A0A0-EB264C624707/
projects: &projects_calendar
<<: *credentials
calendar_url: >-
https://nextcloud.pyrocufflink.net/remote.php/dav/calendars/B53DE34E-D21F-46AA-B0F4-1EC0933AE220/projects_shared_by_332E433E-43B2-4E3D-A0A0-EB264C624707/
dtex: &dtex
calendar_url: >-
https://outlook.office365.com/owa/calendar/0f775a4f7bba4abe91d2684668b0b04f@dtexsystems.com/5f42742af8ae4f8daaa810e1efca6e9e8531195936760897056/S-1-8-960331003-2552388381-4206165038-1812416686/reachcalendar.ics
agenda:
calendars:
- *shared_calendar
- *tabitha_work
- *dtex
events: *shared_calendar
tasks: *shared_calendar
projects: *projects_calendar
mqtt:
host: mqtt.pyrocufflink.blue
port: 8883
tls: true
username: kitchen
password: kitchen
metrics:
url: http://vmselect.victoria-metrics:8481/select/1/prometheus
weather:
metrics:
temperature: >-
homeassistant_sensor_temperature_celsius{entity="sensor.outdoor_temperature"}
humidity: >-
homeassistant_sensor_humidity_percent{entity="sensor.outdoor_humidity"}
wind_speed: >-
homeassistant_sensor_unit_m_per_s{entity="sensor.wind_speed"}
pool: >-
homeassistant_sensor_temperature_celsius{entity="sensor.pool_sensor_temperature"}
homeassistant:
url: wss://homeassistant.pyrocufflink.blue/api/websocket
access_token: >-
{{ index . "homeassistant.token" }}

24
loki-ca/README.md Normal file
View File

@@ -0,0 +1,24 @@
# Private CA for Grafana Loki Client Authentication
## Generate CA Key/Certificate
```sh
openssl genpkey -algorithm ED25519 -out loki-ca.key
openssl req -new -config openssl.cnf -key loki-ca.key -x509 -out loki-ca.crt -days 3653
```
## Create SealedSecret
```sh
kubectl create secret tls -n cert-manager loki-ca --cert loki-ca.crt --key loki-ca.key --dry-run=client -o yaml | kubeseal -o yaml > secrets.yaml
```
_Note_: the SealedSecret is stored in the _cert-manager_ namespace since it is
used by a ClusterIssuer.
## Deploy
```sh
kubectl apply -f .
```
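Clients then obtain their certificates from the new ClusterIssuer with a
cert-manager Certificate.  For example, the Grafana client certificate added in
this changeset looks roughly like this (namespace shown here for clarity; it is
normally set by the kustomization):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: loki-client-cert
  namespace: grafana
spec:
  commonName: grafana
  secretName: loki-client-cert
  privateKey:
    algorithm: Ed25519
  issuerRef:
    name: loki-ca
    kind: ClusterIssuer
```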

11
loki-ca/loki-ca.crt Normal file
View File

@@ -0,0 +1,11 @@
-----BEGIN CERTIFICATE-----
MIIBlDCCAUagAwIBAgIUGNZ/ASP8F2ytev3YplTk4jA5a2EwBQYDK2VwMEgxCzAJ
BgNVBAYTAlVTMRgwFgYDVQQKDA9EdXN0aW4gQy4gSGF0Y2gxDTALBgNVBAsMBExv
a2kxEDAOBgNVBAMMB0xva2kgQ0EwHhcNMjQwMjIwMTUwMTQxWhcNMzQwMjIwMTUw
MTQxWjBIMQswCQYDVQQGEwJVUzEYMBYGA1UECgwPRHVzdGluIEMuIEhhdGNoMQ0w
CwYDVQQLDARMb2tpMRAwDgYDVQQDDAdMb2tpIENBMCowBQYDK2VwAyEAnmMawEIo
WfzFaLgpSiaPD+DHg28NHknMFcs7XpyTM9CjQjBAMB0GA1UdDgQWBBTFth3c4S/f
y0BphQy9SucnKN2pLzASBgNVHRMBAf8ECDAGAQH/AgEAMAsGA1UdDwQEAwIBBjAF
BgMrZXADQQCn0JWERsXdJA4kMM45ZXhVgAciwLNQ8ikoucsJcbWBp7bSMjcMVi51
I+slotQvQES/vfqp/zZFNl7KKyeeQ0sD
-----END CERTIFICATE-----

13
loki-ca/loki-ca.yaml Normal file
View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Namespace
metadata:
name: loki-ca
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: loki-ca
spec:
ca:
secretName: loki-ca

17
loki-ca/openssl.cnf Normal file
View File

@@ -0,0 +1,17 @@
[req]
distinguished_name = root_ca_dn
prompt = no
default_md = sha512
x509_extensions = root_ca
string_mask = utf8only
[root_ca_dn]
countryName = US
organizationName = Dustin C. Hatch
organizationalUnitName = Loki
commonName = Loki CA
[root_ca]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true,pathlen:0
keyUsage = cRLSign, keyCertSign

15
loki-ca/secrets.yaml Normal file
View File

@@ -0,0 +1,15 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: loki-ca
namespace: cert-manager
spec:
encryptedData:
tls.crt: AgCAvqpGFq6pZYDF8DjUu0B9CrI9J0yLTgsjsbYeWVuIFU9ie/RsSkrccFdVg4od+Sd+NbIfw7eMA+ZiPIzqiW4IdMR4vXi1hmqfjl5n5MMRTPTfTfB/gOAVO3mNur2Fnutloo47ZueXA6JznMSqwPu6pAhypSIFwfgScF2EANkTBEWClxvF5U/RY4hlO6nbV56FPbGBvXE1ZCk5ejzLyY9ldz8rdt/T5VdioH82fX5rlzlGdKOyjYCOK81Flo59jwIRr5cLggd3/ddLn1xGaHORU/8PngWbHcyEI7lkByZ1cpFeG2kZ0DwceH9RLuhu1pS3FobYRGdt3QAGpGLQoNLJGsU5y43JwcyDA5qlQHhoszBRerDsi8vidNLiFD1X+85tGT2aInl4ysfL28w/koE6uo8GvUzp8d+NlK/P55MK4Xrtbu8NgSNKbBdaV0cLAxZpnyrJ5xqdkfmGEDxTltZ+XckMpEq2k0PrsrOfhCG9/xLN4vp0AISbxXFTEADhvz6ZCgxdndsnt3RW+CxAWGTslV9tY6GWSEWkG6P9/J5swetHTwRGJMWftUYHFY0C9rqZBJNKAInEDf3BA0fZelqMaQ2afZD+7upgO5Wst+hw6oGmVs555SoWSScIkdWTVQFBfvIatY/s0MH7v4LnSOEM61ZVPwXbL/GLKsVy6CcH/IOH7pKOlBK42mA9T0/QzRiA7ZAPj0yYslHhKPGqz90ojBv/ch3Y+142/IgrlWPr+xJvLYuU4AWRglthyFHjOpSkmFvH21MqzyRT+AxORJAsSepCCRz1hHx9KruC4M8iuQ9QEGZHsT+8IkWbWKo4AGllCEyF34jdMUuIYcon0e8+H3jPQvk4aM1X5d1V4ZfvqPswlcrxklGRMsT2wMEFpFXElHXwBgKTGD0FDTH7/DwMsioCmfz0hg/P5GsZ2ziNUqFC+/ngSWOPyuo6Jgt303tEbX8p972VMOtfFyqh2mR53jE2QeM9w0zgDKHxpAN6Pu5WLsj0N5VYBesLFsukEMUFlo+ltBvQO+AKv67E8B/LPPDbwViyEZFZoNPrffC9B2lhltVYfRtfuCjlPUMy5TVixugDxrqwA/6eQtIUOokSHb3GsDpdKjEKhs0KLqhkDU3A9BnrKhRbaOisMxPLuYUPMd8OVst515EZVwv+lKp74oY6UDk6bUAX34GkdVw7g6QO140WRqLkb4bjAjFcwsat1h06P5AcNDCrwAfnPFtxyicAnAi4Cq960j8+7bHnMdrY7aakKy5gIxlkwcHnUheg0wvh2TJH1juvDArVDh4TU9J76F3jKRL5YCJXq2Gg1NExOfUYKMpFE0MXjg2jemBwDpsczttdcs0zo0vnbYmS8wxhiawdnNuPtYZ0m/XxYlw9qocjfy8QCrs7eaK9F2BH0L5dIx894SPsv7nk3/jnENakcgpzU8aKrfCRfR+qEJfoQ7yg701qLfO709KUNjOWxa9r0QTxQS+TGVrvlqdEJ2lASlM6UOHqE7ZbmKHg
tls.key: AgACeNJFSJlXIzdU6DntcdHA1CZoSdi+CacD3pqMYJRyJDPlbFi06eaWNqPT08wIfh2QTternOewBt3jlsLtD3p5VF9GY1EtDNradEpK8POd2e44tMvQusA85rSM5iDwFKQDHswTcmxW/x/d5OnefydJnDaAifCycYYmvXtjJDrnQ1lJJ9oBxnS9y3mqTpQzrSNuVuC4JjpXzzCvx05CDFE6fDxkFwJoDWKPbaZD1wXfi0kbjAPlzANWzGHS/p/dSrMQvyCWiF/dVeMcXTCCgUyKfaZqDZCRgQh006d6+M4z0t2RHB3Jk59hPErhVOt8tHWHckuz3b2Ux/cisF89yl1zsh9WmNyCSRoArPet+lkx6GpS6/kJJ+z7qIHboJYEFA6+Vt+rG6knOIRGo7gnzc02URzGG0caaSorRUnD6sLteKkWHUccU9CFinbWQZloIfkKZMadIEQqhQJhcRAbN86tAUTntyVjSia4IXMRhGPtwJrdwZr57CCfkDkjSaxluWga9z5bxtoVIITYHaf1gHQ3J4YS8HCJdQFRtEjAqipm6BXYloujVE3dAHAb3l54ORW61lGLpJP6fKLLH6ZVJu65KulTdaokzuIzLY6xvoJtDKAlP1Y146OdowMWvlXitZ4kZLa0LT2jiN3FfaUrl4FnZEu1wC0vdu+nnbYLJ5WGHUnQLTQqHSMK/zW02W/cs2Tf8dbCvL8E5KL8dHPHHtC/8BH4f530pamEJDGdQuhMAZwJ1T8ohWHFG4XMT83pMqYChIlqlX8Yzd2RlNPlB1U0ROTftIYqx5Fd5DU4dxofydspBQaWbXLQ2fQF3k28zMNjSSZwyBL15nFf78hDGS3GeQ7W3YlpxA==
template:
metadata:
name: loki-ca
namespace: cert-manager
type: kubernetes.io/tls

View File

@@ -22,24 +22,6 @@ data:
exec /usr/local/bin/supervisord -c /etc/supervisord.conf --user paperless
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis
namespace: paperless-ngx
labels:
app.kubernetes.io/name: redis
app.kubernetes.io/component: redis
app.kubernetes.io/instance: paperless-ngx
app.kubernetes.io/part-of: paperless-ngx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
@@ -180,7 +162,7 @@ spec:
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: redisdata
- name: data
mountPath: /data
subPath: data
- name: tmp
@@ -188,11 +170,24 @@ spec:
securityContext:
fsGroup: 1000
volumes:
- name: redisdata
persistentVolumeClaim:
claimName: redis
- name: tmp
emptyDir:
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
labels:
app.kubernetes.io/name: redis
app.kubernetes.io/component: redis
app.kubernetes.io/part-of: paperless-ngx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
@@ -236,9 +231,13 @@ spec:
value: '*'
- name: PAPERLESS_ENABLE_HTTP_REMOTE_USER
value: '1'
- name: PAPERLESS_ENABLE_FLOWER
value: 'true'
ports:
- name: http
containerPort: 8000
- name: flower
containerPort: 5555
startupProbe:
httpGet:
port: 8000

111
promtail/config.yml Normal file
View File

@@ -0,0 +1,111 @@
server:
http_listen_port: 9080
grpc_listen_port: 0
enable_runtime_reload: true
clients:
- url: https://loki.pyrocufflink.blue/loki/api/v1/push
tls_config:
ca_file: /run/dch-ca/dch-root-ca.crt
positions:
filename: /var/lib/promtail/positions
scrape_configs:
- job_name: journal
journal:
json: false
labels:
job: systemd-journal
relabel_configs:
- source_labels:
- __journal__hostname
target_label: hostname
- source_labels:
- __journal__systemd_unit
target_label: unit
- source_labels:
- __journal_syslog_identifier
target_label: syslog_identifier
- source_labels:
- __journal_priority
target_label: priority
- source_labels:
- __journal_message_id
target_label: message_id
- source_labels:
- __journal__comm
target_label: command
- source_labels:
- __journal__transport
target_label: transport
- job_name: pods
kubernetes_sd_configs:
- role: pod
pipeline_stages:
- cri: {}
relabel_configs:
# Magic label: tell Promtail to filter out pods that are not running locally
- source_labels: [__meta_kubernetes_pod_node_name]
target_label: __host__
- target_label: job
replacement: kubernetes-pods
# Build the log file path:
# /var/log/pods/{namespace}_{pod_name}_{pod_uid}/{container_name}/*.log
- source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_pod_name
- __meta_kubernetes_pod_uid
separator: _
target_label: __path__
replacement: /var/log/pods/$1
- source_labels:
- __path__
- __meta_kubernetes_pod_container_name
separator: /
target_label: __path__
replacement: '$1/*.log'
- source_labels: [__meta_kubernetes_pod_node_name]
target_label: node_name
- source_labels: [__meta_kubernetes_namespace]
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
target_label: pod
- source_labels: [__meta_kubernetes_pod_container_name]
target_label: container
- source_labels: [__meta_kubernetes_pod_controller_name]
regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
action: replace
target_label: __tmp_controller_name
# Set `app` to the first non-empty label from
# - app.kubernetes.io/name
# - app
# If none present, use the pod controller (e.g. Deployment) name.
# Fall back to pod name if none found.
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_name
- __meta_kubernetes_pod_label_app
- __tmp_controller_name
- __meta_kubernetes_pod_name
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: app
# Set `instance` to the first non-empty label from
# - app.kubernetes.io/instance
# - instance
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_instance
- __meta_kubernetes_pod_label_instance
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: instance
# Set `component` to the first non-empty label from
# - app.kubernetes.io/component
# - component
- source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_component
- __meta_kubernetes_pod_label_component
regex: ^;*([^;]+)(;.*)?$
action: replace
target_label: component

View File

@@ -0,0 +1,41 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: promtail
labels:
- pairs:
app.kubernetes.io/instance: promtail
app.kubernetes.io/part-of: promtail
includeSelectors: false
resources:
- namespace.yaml
- promtail.yaml
- ../dch-root-ca
configMapGenerator:
- name: promtail
files:
- config.yml
patches:
- patch: |
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: promtail
spec:
template:
spec:
containers:
- name: promtail
volumeMounts:
- mountPath: /run/dch-ca
name: dch-ca
readOnly: true
volumes:
- name: dch-ca
configMap:
name: dch-root-ca
optional: true

6
promtail/namespace.yaml Normal file
View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: promtail
labels:
app.kubernetes.io/name: promtail

137
promtail/promtail.yaml Normal file
View File

@@ -0,0 +1,137 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: promtail
labels:
app.kubernetes.io/name: promtail
app.kubernetes.io/component: promtail
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: promtail
labels:
app.kubernetes.io/name: promtail
app.kubernetes.io/component: promtail
rules:
- apiGroups:
- ''
resources:
- pods
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: promtail
labels:
app.kubernetes.io/name: promtail
app.kubernetes.io/component: promtail
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: promtail
subjects:
- kind: ServiceAccount
name: promtail
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: promtail
labels:
app.kubernetes.io/name: promtail
app.kubernetes.io/component: promtail
spec:
selector:
matchLabels:
app.kubernetes.io/name: promtail
app.kubernetes.io/component: promtail
template:
metadata:
labels:
app.kubernetes.io/name: promtail
app.kubernetes.io/component: promtail
spec:
containers:
- name: promtail
image: docker.io/grafana/promtail:2.9.4
args:
- -config.file=/etc/promtail/config.yml
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- containerPort: 9080
name: http
readinessProbe: &probe
httpGet:
port: http
path: /ready
periodSeconds: 60
startupProbe:
<<: *probe
periodSeconds: 1
successThreshold: 1
failureThreshold: 30
timeoutSeconds: 1
securityContext:
readOnlyRootFilesystem: true
volumeMounts:
- mountPath: /etc/machine-id
name: machine-id
readOnly: true
- mountPath: /etc/promtail
name: config
readOnly: true
- mountPath: /run/log
name: run-log
readOnly: true
- mountPath: /tmp
name: tmp
subPath: tmp
- mountPath: /var/lib/promtail
name: promtail
- mountPath: /var/log
name: var-log
readOnly: true
securityContext:
seLinuxOptions:
# confined containers do not have access to /var/log
type: spc_t
serviceAccountName: promtail
tolerations:
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists
volumes:
- name: config
configMap:
name: promtail
- name: machine-id
hostPath:
path: /etc/machine-id
type: File
- name: promtail
hostPath:
path: /var/lib/promtail
type: DirectoryOrCreate
- name: run-log
hostPath:
path: /run/log
type: Directory
- name: tmp
emptyDir: {}
- name: var-log
hostPath:
path: /var/log
type: Directory

View File

@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
labels:
- pairs:
app.kubernetes.io/component: rabbitmq-ca
app.kubernetes.io/instance: rabbitmq-ca
app.kubernetes.io/part-of: rabbitmq
resources:
- rabbitmq-ca.yaml
- secrets.yaml

View File

@@ -0,0 +1,15 @@
-----BEGIN CERTIFICATE-----
MIICazCCAc2gAwIBAgIUHOLoRkpqTumPczT4haPTrDR+NWYwCgYIKoZIzj0EAwQw
UDELMAkGA1UEBhMCVVMxGDAWBgNVBAoMD0R1c3RpbiBDLiBIYXRjaDERMA8GA1UE
CwwIUmFiYml0TVExFDASBgNVBAMMC1JhYmJpdE1RIENBMB4XDTI0MDcyMTE1MzQ1
NloXDTM0MDcyMjE1MzQ1NlowUDELMAkGA1UEBhMCVVMxGDAWBgNVBAoMD0R1c3Rp
biBDLiBIYXRjaDERMA8GA1UECwwIUmFiYml0TVExFDASBgNVBAMMC1JhYmJpdE1R
IENBMIGbMBAGByqGSM49AgEGBSuBBAAjA4GGAAQBUciaWKnxGTNnfkeTBFm4O8Qx
byOua3LYDBVvP04U6xxpm3k/f6m8PVpj8k57lXFtSAi4xpAgVy9gCzTnoud1YZEA
e4qSR4FG7M7mTygYLXkS6IheeRadWjRrjKvdtWr74gdsughnQ9dZjvE0lzqpFg0l
ncYN6FVsW4jo4tj+rayp1tajQjBAMB0GA1UdDgQWBBTTZi3xHWChlywYYs+QIlRh
96pcdDASBgNVHRMBAf8ECDAGAQH/AgEAMAsGA1UdDwQEAwIBBjAKBggqhkjOPQQD
BAOBiwAwgYcCQgDf4KpCADduVqdgeXp/eUoQEznKplgiZF8fdM+fVSEd+4t+IQZw
wi58uu2Ib5sPop0//iPT3AogIqmr+E1eu/EmAgJBY7naClR/IINeTTzUAqNjDxJa
GkQ7jJjpnGHNbnwLJ7e7VCP2rqDRtgw7z2QCxk3gIZSThXGicHPqxyiK9T9rjZI=
-----END CERTIFICATE-----

View File

@@ -0,0 +1,7 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: rabbitmq-ca
spec:
ca:
secretName: rabbitmq-ca

19
rabbitmq/ca/secrets.yaml Normal file
View File

@@ -0,0 +1,19 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: rabbitmq-ca
namespace: cert-manager
labels:
app.kubernetes.io/name: rabbitmq-ca
spec:
encryptedData:
tls.crt: AgBB9P7G4egqhKywdjQnfA9FbNdd1Nmn32RVo5aQpBf2TIOdpJ4r/vMSS+b80NLTt3KJ1eAim8RGOwCqJW/BKnLZhK95n2RnlqkmQysVFtemq0VXnmDnXVQ95TwFZQOxB2CzVFpXBWNFjrd+E3E9IPdU8XH35IuA4UiFp5YUeMcPKFnkmsNjOpu4axH4R9NDOAfiA+Eh0btGAT4+vqy78A0RImfPoRYSVN5EYulLD9xY9ShuivdJKZLIKIf69ma3Tzk1CBEO/wBgpy7E/XMQOptqq615u5KfxSknEXPeMH/leZatGRZF8YvUFl0Y9km2pCUBkYpzqCIDh9EMcEvB5+6Tmz52wpr73xTUlhjdVhJrha+VPk+Uut/49q5+ADB3v19JXvV+KIVdtV9LPPp14muyOrWSVKiH+CvoG0vfewnkR/rDQa5eHmCTkCv9PFtyeySsy5MkGE/ujo5jZP+w25fylaKnHepReyjfH5+xwkIHXiJ7DmZCsxayqdEEjjRacT/wrabP5MS9YEPi360uvPFr4P4navhZ4e8H66Ib7WscBc/HYynKv8Sirzc71IjGb07AGm1ivI1ddzbYZMBcifZlXh5R6C0sYePyWKlDzygWaFIvGPYWmnjD6PEg/wp94396xAsT3sh/+/Rv95hLcy4zHJYFQovW4zzxFNljgQVsJb+jzkEZlyYpiKwSqyaXVtMe5LBAhvNT9BRx0f1l1zrWE2PTWRtVH69kR3sTZ5Ur1JUN3e3weJqLhBV24BzcULdmyYHjc0qXcGMDMcd2FY3NYc4+EuFglmeu990j37WyY2LVr/XKkNdL8l4W7Q1DMGTyK6GiyNWsQolsBP9RviBsSbE2WsCGQUE51pjcSt8GVFMKYh7tyDT0iNd75L20YoyDlr8u7qlll2jUH6KhZmMz1zrC3MCkcnSAuWy1LK+Dm3ddcApEhVZAh9B7cY7eWcQUCPqW0jU3GhYyYt+F62rnZ8FcI38NTKEQqxVzJEaUsT8/AcCkq6jyYDM3YvDvo3zTOVTOzNQ1vc5ZIn8FDb9UDQcElo+neUyHnwUItHMLuIc44qMtdSGZsYAQzNjw3E5ZyRzEGpagglTTzjfvlGFJx57pheUtih2m0sbE4rtsYb79d/VhKMtRVVOiM5ChxcJK1Y7hJkpDqmzlMe06xRtuGf8VXl9VucWK5jZIC6rLdjwHOc3kwdXH+3+uVi2PdUbiGypJMK7iuRfe/FIAK+tDdcpVXXJn+okb1Oo36wZRapN8phFmvYOblZdVMl2BBljx778nXdA12NeKyIxLtdJ+86OQKskN6OpuLzLiCNl2lG5JpOzlwdtbOmmd0efQEwzFr9xd8XXb5Q8QImnoslBvhgmYLruFd1cPrcclZFQvCQgb1xN4uMtqCBMaVPJUaY0hXKLemzV/9bYLYP+3ES4jYAT/M/xyVnleWh+rJgMIHedEDWaAfZbbTYhHo7raBSecyl2opYKTTR7sjsVnACv2zB0LpOHVcWccwY5Zko4e21S/xjVq9Aff5tfwjV59g54oCun1HR6GHcuYgYYbqUt8BFbd09QaA1rgMAkNqaprJb7LXwt13Vm8sP6x6OcXt2YxZGlmPHsThcOFwV9SmUIyi8D+XEW/6FdFnUOhdnwNAQyagGT9A91Y5C3kz7Xh54Jdbgcg/Sc+8LQwlP86W35iCpsZjIiaQ/sRJcbQALAKXv/aKW2FxtXnXES2ynPf6RzpFzPv7xrLlva3AgYEm9rCD3LNl094tCoW4ZwxN45EKwk/7GGFpGS8+vhEfPfTe9RkHID5Hv76FeUr8Q+7l82QGCvZfvz4Ag5ZEp3sQpwfkvQFN94D8sfwSD87nmZVjQptJ0yu3yw4mhcMyVT5beMlMhtlUTG6Fq3hT0y0Leg8K63SHg==
tls.key: AgA6rWYBoogSLgfUQ82Lw++CvZSlhsUthdPtuMzrEAoCAYE55Vx5IvaEEKXkGLXcorYPFZSmVIlO35IM29F6u/DvHLQ/v4DXpcQJHXIejBM+zwynBXN/LGFIcBqj1JI1dZUYheb05nkD+qwiYhHCv4c6RSScX5osvPtXnq0AtCgNKNH4aRf3LQ4EUKakA8cVKmi8QC151L9pOWIrtkFdv28wSfW4viTkDhGornERcHZvPdkyG8gGQAy8B1Suy5LoZsfr8rFfWhYGOuVKNwM8RN3bBHVKhbCR5u6ap+ZzgPWdcWG88fRcXRY+YIgW2Q0Ffrk2TAVxgIbh/GuwYptwIKxj7cM3h11UqG57MpvcgE3rxhcwO5JbxPD2fAqxl98vkfIrasrhEpN3I3SHRrzxYKYt+6oYiK3H2xwIzBfMPIYfghyMLyf9H4f/zxRb945ehQrqYovduYdQR7ODsFYJmiGdMsITuPfq0Zl6KErDy+WILIY+eH5pkOym0A4te8jACbEALT6kcJ7buLfMZ65OHJDUzWf3W8Qi5WPkOXtDKkprzwjXotHdYcMInabE6rPjePb+uf9G9782WJQ7W5/ebJqEeL/FWFTQEurgrAt5v/8ugL8oF8LOyvt0dUboJKtDk/ZKgGEN4QQWQsuiBUX9qJxEgocbjnqw8/ZlzYy0AXdesv5nyspGYrIe2msrBbrrFMOQWAhyTdpXY+ZZldrH+qucUkbZZYBL1ItGOORg+dtcv3cXCfvL3cRUQrbZprjBGY4wqJB9CmfgCoLmxPBot6Lkedv1RrwaBB8KPpGi+hRtvI3rTeCuw0Ky7Q+qDFDsdoAZasRSPxQK3/9oY4gUplC2X5i3uC/jiNzbpA98IsmHKxDjcUk46kIbhLVOFp4CLTUILvOnLm0IMVpF7NtJhD0L7wbEC3iIF1UVAfixj/XaggT+jOuHzYJTowAbYHX7gUOUzTEmkAczy+6Aw7qoPwKYiJvbjM8PdqbMZ3ILtS3FmrlYEB9qi5J28J1R8t/LeG3gDsyabv52f0abFcfGQkPLsMbcEvBy2Xj2jQWVv+tK2a/0/iqCKRQydpEXkWP0Ae7YYqd4S+6sfe4zoUKHxLLCw/+jhI9sig+NZQau2zD24jx4INAdbaOwrd+udmqb2wtpQw/hwM9fARQXERy94VGRMxDUSSaOdv9dn/hJ1dawC7FNUk2HutTSBKquBPOB2aU=
template:
metadata:
name: rabbitmq-ca
namespace: cert-manager
labels:
app.kubernetes.io/name: rabbitmq-ca
type: kubernetes.io/tls

15
rabbitmq/certificate.yaml Normal file
View File

@@ -0,0 +1,15 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: rabbitmq
spec:
secretName: rabbitmq-cert
dnsNames:
- rabbitmq.pyrocufflink.blue
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: dch-ca
privateKey:
algorithm: ECDSA
rotationPolicy: Always

26
rabbitmq/definitions.json Normal file
View File

@@ -0,0 +1,26 @@
{
"rabbit_version": "3.13.4",
"vhosts": [
{
"name": "/",
"metadata": {
"description": "Default virtual host"
}
}
],
"users": [
{
"name": "xactmon",
"tags": []
}
],
"permissions": [
{
"user": "xactmon",
"vhost": "/",
"configure": "^xactmon\\..*",
"read": "^xactmon\\..*",
"write": "^xactmon\\..*"
}
]
}

1
rabbitmq/enabled_plugins Normal file
View File

@@ -0,0 +1 @@
[rabbitmq_auth_mechanism_ssl,rabbitmq_prometheus].

View File

@@ -0,0 +1,22 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: rabbitmq
labels:
- pairs:
app.kubernetes.io/instance: rabbitmq
app.kubernetes.io/part-of: rabbitmq
resources:
- namespace.yaml
- certificate.yaml
- rabbitmq.yaml
configMapGenerator:
- name: rabbitmq
files:
- ca.crt=ca/rabbitmq-ca.crt
- definitions.json
- enabled_plugins
- rabbitmq.conf

7
rabbitmq/namespace.yaml Normal file
View File

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Namespace
metadata:
name: rabbitmq
labels:
app.kubernetes.io/component: rabbitmq
app.kubernetes.io/name: rabbitmq

17
rabbitmq/openssl.cnf Normal file
View File

@@ -0,0 +1,17 @@
[req]
distinguished_name = root_ca_dn
prompt = no
default_md = sha512
x509_extensions = root_ca
string_mask = utf8only
[root_ca_dn]
countryName = US
organizationName = Dustin C. Hatch
organizationalUnitName = RabbitMQ
commonName = RabbitMQ CA
[root_ca]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:true,pathlen:0
keyUsage = cRLSign, keyCertSign

Some files were not shown because too many files have changed in this diff.