240 Commits

Author SHA1 Message Date
bot
16e5b263ba music-assistant: Update to 2.6.3 2025-12-06 12:32:16 +00:00
707481c6fa fluent-bit: deploy DaemonSet
This DaemonSet runs Fluent Bit on all nodes in the cluster.  The
ConfigMap that contains the pipeline configuration is actually managed
by Ansible, so that it can remain in sync with the configuration used by
Fluent Bit on non-Kubernetes nodes.
2025-12-04 21:28:32 -06:00
3824f5f187 ssh-host-keys: Add pikvm-nvr2.m.p.b 2025-12-02 08:42:23 -06:00
740561b7b6 Merge pull request 'paperless-ngx: Update to 2.20.0' (#95) from updatebot/paperless-ngx into master
Reviewed-on: #95
2025-12-01 21:14:36 +00:00
d0193b0001 Merge pull request 'authelia: Update to 4.39.15' (#96) from updatebot/authelia into master
Reviewed-on: #96
2025-12-01 21:13:38 +00:00
e38a0e3d21 Merge pull request 'firefly-iii: Update to 6.4.9' (#94) from updatebot/firefly-iii into master
Reviewed-on: #94
2025-12-01 21:12:19 +00:00
9fd40e90c2 Merge pull request 'home-assistant: Update to 2025.10.4' (#88) from updatebot/home-assistant into master
Reviewed-on: #88
2025-12-01 20:36:05 +00:00
0af625cea1 crio-clean: Add script to clean container storage
I've noticed that from time to time, the container storage volume seems
to accumulate "dangling" containers.  These are paths under
`/var/lib/containers/storage/overlay` that have a bunch of content in
their `diff` sub-directory, but nothing else, and do not seem to be
mounted into any running containers.  I have not identified what causes
this, nor a simple and reliable way to clean them up.  Fortunately,
wiping the entire container storage graph with `crio wipe` seems to work
well enough.

The `crio-clean.sh` script takes care of safely wiping the container
storage graph on a given node.  It first drains the node and then stops
any running containers that were left.  Then, it uses `crio wipe` to
clean the entire storage graph.  Finally, it restarts the node, allowing
Kubernetes to reschedule the pods that were stopped.
2025-12-01 14:28:35 -06:00
1fc1c5594e v-m: Scrape PiKVM metrics
PiKVM exports some rudimentary metrics, but requires authentication to
scrape them.  At the very least, this will provide alerting in case the
PiKVM systems go offline.
2025-12-01 12:19:15 -06:00
bot
dd55743d97 authelia: Update to 4.39.15 2025-11-29 12:32:16 +00:00
bot
269f30b33b paperless-ngx: Update to 2.20.0 2025-11-29 12:32:13 +00:00
bot
77ac86ffec firefly-iii: Update to 6.4.9 2025-11-29 12:32:11 +00:00
bot
67b32ecb77 zwavejs2mqtt: Update to 11.8.1 2025-11-29 12:32:07 +00:00
bot
5b6ea8c043 zigbee2mqtt: Update to 2.6.3 2025-11-29 12:32:07 +00:00
bot
47850aa0cf piper: Update to 2.1.2 2025-11-29 12:32:07 +00:00
bot
7b784db119 whisper: Update to 3.0.2 2025-11-29 12:32:07 +00:00
bot
72e7d0fbd8 home-assistant: Update to 2025.11.3 2025-11-29 12:32:06 +00:00
8032458ecc jenkins: updatecheck: Pin to VM nodes
Until I get the storage VLAN connected to the Raspberry Pi cluster, any
Pod that needs a PV backed by the Synology has to run on a VM node.
2025-11-24 07:32:26 -06:00
b7a7e4f6b4 jenkins: Add CronJob for updatecheck
`updatecheck` is a little utility I wrote that queries Fedora Bodhi for
updates and sends an HTTP request when one is found.  I am specifically
going to use it to trigger rebuilding the _gasket-driver_ RPM whenever
there is a new _kernel_ published.
2025-11-23 10:29:20 -06:00
a544860a62 jenkins: Add Generic Webhook trigger token secret
To restrict access to the Generic Webhook trigger operation, we can use
a pre-shared secret token, which must be included in requests.
2025-11-22 10:13:56 -06:00
74cc3c690e Merge remote-tracking branch 'refs/remotes/origin/master' 2025-11-22 10:09:08 -06:00
2af9f45cce Merge pull request 'paperless-ngx: Update to 2.19.2' (#89) from updatebot/paperless-ngx into master
Reviewed-on: #89
2025-11-22 15:52:25 +00:00
847a3c64cd Merge pull request 'firefly-iii: Update to 6.4.5' (#91) from updatebot/firefly-iii into master
Reviewed-on: #91
2025-11-22 15:50:22 +00:00
3b84e869bf Merge pull request 'ntfy: Update to 2.15.0' (#93) from updatebot/ntfy into master
Reviewed-on: #93
2025-11-22 15:49:13 +00:00
f1087fa73d Merge pull request 'authelia: Update to 4.39.14' (#92) from updatebot/authelia into master
Reviewed-on: #92
2025-11-22 15:48:05 +00:00
3478ceeeb9 updatebot: Add Music Assistant 2025-11-22 09:47:05 -06:00
27de8ca430 jenkins: Use a single PV for all Buildroot jobs
Instead of allocating a volume for each individual Buildroot-based
project, I think it will be easier to reuse the same one for all of
them.  It's not like we can really run more than one job at a time,
anyway.
2025-11-22 09:12:28 -06:00
957d170a69 jenkins: Add kmod-signing-cert secret
This secret contains the certificate and private key for signing kernel
modules (i.e. `gasket-driver` for the Google Coral EdgeTPU).
2025-11-22 09:11:06 -06:00
bot
a781f1ece4 authelia: Update to 4.39.14 2025-11-22 12:32:14 +00:00
bot
bc96c07815 ntfy: Update to 2.15.0 2025-11-22 12:32:12 +00:00
bot
1cd7e39982 gotenberg: Update to 8.25.0 2025-11-22 12:32:10 +00:00
bot
62d136153b paperless-ngx: Update to 2.19.6 2025-11-22 12:32:10 +00:00
bot
0841fe9288 firefly-iii: Update to 6.4.8 2025-11-22 12:32:08 +00:00
f47759749e authelia: Add redirect URL for Headlamp
Now that Headlamp supports PKCE, we can use the same OIDC client for it
as for the Kubernetes API server/`kubectl`.  The only difference is the
callback redirect URL.
2025-11-21 08:40:39 -06:00
8f1c8980c2 authelia: Fix Jenkins OIDC token auth method
The latest version of the _OpenId Connect Authentication Plugin_ for
Jenkins has several changes.  Apparently, one of them is that it
defaults to using the `client_secret_basic` token authorization method,
instead of `client_secret_post` as it did previously.
2025-11-18 19:14:15 -06:00
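A sketch of what the corresponding fix might look like in Authelia's client configuration; the `jenkins` client ID is an assumption for illustration:

```yaml
identity_providers:
  oidc:
    clients:
      - client_id: jenkins   # hypothetical client ID
        # Match what the plugin used before it changed its default
        token_endpoint_auth_method: client_secret_post
```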
f1b473249d jenkins: Update to 2.528.2-lts 2025-11-18 17:16:31 -06:00
f1ad556a3c h-a: Update mobile apps group
We've both gotten new phones recently, but I never remember to update
the "mobile apps group" that we use to have messages sent to both
devices.
2025-11-18 09:27:35 -06:00
2cd55ee2ae headlamp: Deploy Headlamp
Now that upstream has finally added support for PKCE with OIDC
authentication, we can actually use Headlamp as a web application.
2025-11-13 18:35:51 -06:00
da7d517d8c music-assistant: Update to v2.6.2 2025-11-09 10:14:20 -06:00
82c37a8dff v-m/scrape: Remove Promtail job 2025-11-09 10:21:49 -06:00
fab045223a home-assistant: Add MQTT password for mqttwol 2025-11-05 08:56:17 -06:00
1d3652055b Merge pull request 'firefly-iii: Update to 6.4.3' (#90) from updatebot/firefly-iii into master
Reviewed-on: #90
2025-11-01 13:31:57 +00:00
bot
46ec4acda3 firefly-iii: Update to 6.4.3 2025-11-01 11:32:22 +00:00
89a92680dc Merge branch 'rustdesk' 2025-10-22 08:47:13 -05:00
0965148f93 firefly-iii: Enable Webhooks
At some point, Firefly III added an `ALLOW_WEBHOOKS` option.  It's set
to `false` by default, but it didn't seem to have any effect on
_running_ webhooks, only visiting the webhooks configuration page.  Now,
that seems to have changed, and the setting needs to be enabled in order
for the webhooks to run.

I'm not sure why `disableNameSuffixHash` was set on the ConfigMap
generator.  It shouldn't be, so that Kustomize can ensure the Pod is
restarted when the contents of the ConfigMap change.
2025-10-20 20:12:24 -05:00
d7bff98443 Merge pull request 'authelia: Update to 4.39.13' (#87) from updatebot/authelia into master
Reviewed-on: #87
2025-10-19 21:00:41 +00:00
3f2da99fbe Merge pull request 'firefly-iii: Update to 6.3.2' (#81) from updatebot/firefly-iii into master
Reviewed-on: #81
2025-10-19 20:58:18 +00:00
4ad705756d Merge pull request 'home-assistant: Update to 2025.9.4' (#84) from updatebot/home-assistant into master
Reviewed-on: #84
2025-10-19 20:49:21 +00:00
33ee59cb90 firefly-iii: Add network policy
This network policy blocks all outbound communication except to the
designated internal services.  This will help prevent any data
exfiltration in the unlikely event that Firefly III were compromised.
2025-10-19 15:46:49 -05:00
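A minimal sketch of such an egress policy; the database pod label and port are assumptions, not the actual manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: firefly-iii
spec:
  podSelector:
    matchLabels:
      app: firefly-iii
  policyTypes:
    - Egress      # selected pods are denied all egress except the rules below
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgresql   # hypothetical database label
      ports:
        - port: 5432
```

Note that a real policy would usually also need an egress rule for DNS.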
bot
ca14871d8c authelia: Update to 4.39.13 2025-10-18 11:32:19 +00:00
bot
ffaa0bb1ae firefly-iii: Update to 6.4.2 2025-10-18 11:32:15 +00:00
bot
1558368897 zwavejs2mqtt: Update to 11.5.2 2025-10-18 11:32:11 +00:00
bot
79ab42b673 zigbee2mqtt: Update to 2.6.2 2025-10-18 11:32:11 +00:00
bot
e36d3270fd home-assistant: Update to 2025.10.3 2025-10-18 11:32:10 +00:00
17075713c2 keepalived: Update container image tag
The _dev_ tag has gone away, but this image has CI now, so a _latest_
tag is available instead.
2025-10-17 09:40:18 -05:00
b28e5a1104 keepalived: Add instance for Rust Desk
RustDesk uses several TCP and UDP ports, so we need to allocate a
service IP address for it.
2025-10-17 09:38:44 -05:00
7e39883946 rustdesk: Initial deployment
RustDesk is a remote assistance software solution.  The open source
edition is sufficient for what I want to do with it, namely: help Mom
and Dad troubleshoot issues on their PCs.  Mom is currently having
trouble with the Nextcloud sync client, so I need to be able to help her
with that.
2025-10-17 09:15:35 -05:00
bbcf2d7599 grafana: Increase readiness probe timeout
Sometimes, Grafana gets pretty slow, especially when it's running on one
of the Raspberry Pi nodes.  When this happens, the health check may take
longer than the default timeout of 1 second to respond.  This then marks
the pod as unhealthy, even though it's still working.
2025-10-13 13:36:38 -05:00
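A sketch of the probe change, assuming the stock Grafana HTTP port and health endpoint; the 5-second value is illustrative:

```yaml
readinessProbe:
  httpGet:
    path: /api/health
    port: 3000
  timeoutSeconds: 5   # up from the default of 1 second
```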
d5a7b5bc2d k8s-reboot-coordinator: Initial deploy
The `k8s-reboot-coordinator` coordinates node reboots throughout the
cluster.  It runs as a DaemonSet, watching for the presence of a
sentinel file, `/run/reboot-needed`, on the node.  When the file appears,
it acquires a lease, to ensure that only one node reboots at a time,
cordons and drains the node, and then triggers the reboot by running
a command on the host.  After the node has rebooted, the daemon will
release the lease and uncordon the node.
2025-10-13 13:36:38 -05:00
5c6a77c47c policy: Add policy to prevent host network usage
The `policy` Kustomize project defines various cluster-wide security
policies.  Initially, this includes a Validating Admission Policy that
prevents pods from using the host's network namespace.
2025-10-13 13:36:38 -05:00
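A minimal sketch of such a Validating Admission Policy; the policy name and message are assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-host-network   # hypothetical name
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  validations:
    - expression: "!has(object.spec.hostNetwork) || object.spec.hostNetwork == false"
      message: "Pods may not use the host's network namespace."
```

A corresponding ValidatingAdmissionPolicyBinding is also required to put the policy into effect.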
e1874565b8 Merge pull request 'gotenberg: Update to 8.23.1' (#85) from updatebot/paperless-ngx into master
Reviewed-on: #85
2025-10-12 23:55:49 +00:00
2e4d356fb7 Merge pull request 'authelia: Update to 4.39.10' (#86) from updatebot/authelia into master
Reviewed-on: #86
2025-10-12 23:40:26 +00:00
bot
76566cb027 authelia: Update to 4.39.12 2025-10-11 11:32:16 +00:00
bot
83d85d0b58 tika: Update to 3.2.3.0 2025-10-11 11:32:14 +00:00
bot
d944ae5d3a gotenberg: Update to 8.24.0 2025-10-11 11:32:14 +00:00
fd400eb1de home-assistant: Fix image refs for Zigbee/ZWaveJS
The _updatebot_ has been running with an old configuration for a while,
so while it was correctly identifying updates to ZWaveJS UI and
Zigbee2MQTT, it was generating overrides for the incorrect OCI image
names.
2025-09-14 15:47:31 -05:00
2ef22105a6 Merge pull request 'home-assistant: Update to 2025.8.0' (#77) from updatebot/home-assistant into master
Reviewed-on: #77
2025-09-14 20:09:37 +00:00
86546df447 Merge pull request 'paperless-ngx: Update to 2.18.2' (#82) from updatebot/paperless-ngx into master
Reviewed-on: #82
2025-09-14 03:05:37 +00:00
ff6d4fa6e3 Merge pull request 'authelia: Update to 4.39.8' (#83) from updatebot/authelia into master
Reviewed-on: #83
2025-09-14 03:04:39 +00:00
bot
9f78f01f14 authelia: Update to 4.39.9 2025-09-13 11:32:15 +00:00
bot
82680ae86e gotenberg: Update to 8.23.0 2025-09-13 11:32:13 +00:00
bot
959bef405f paperless-ngx: Update to 2.18.4 2025-09-13 11:32:13 +00:00
bot
fc3435a978 zwavejs2mqtt: Update to 11.2.1 2025-09-13 11:32:08 +00:00
bot
da2fcdcf28 zigbee2mqtt: Update to 2.6.1 2025-09-13 11:32:07 +00:00
bot
5873892015 piper: Update to 1.6.3 2025-09-13 11:32:07 +00:00
bot
38c0e8ba02 home-assistant: Update to 2025.9.2 2025-09-13 11:32:07 +00:00
7158ff89df v-m/alerts: Ignore Restic alert for Purple Pi
The Purple Pi is no more.  We want to keep its backups around, but we
don't need alerts about them.
2025-09-12 07:25:21 -05:00
5869afa923 jenkins: Add PVC for airplaypi Buildroot job
Buildroot jobs really benefit from having a persistent workspace volume
instead of an ephemeral one.  This way, only the packages, etc. that
have changed since the last build need to be built, instead of the whole
toolchain and operating system.
2025-09-07 12:24:11 -05:00
4c1992b3c9 v-m/vmagent: Start in parallel
As with AlertManager, the point of having multiple replicas of `vmagent`
is so that one is always running, even if the other fails.  Thus, we
want to start the pods in parallel so that if the first one does not
come up, the second one at least has a chance.
2025-09-07 10:49:22 -05:00
25d34efb4c v-m/alertmanager: Bring up replicas in parallel
If something prevents the first AlertManager instance from starting, we
don't want to wait forever for it before starting the second.  That
pretty much defeats the purpose of having two instances.  Fortunately,
we can configure Kubernetes to bring up both instances simultaneously by
setting the pod management policy to `Parallel`.
2025-09-07 10:42:50 -05:00
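The setting described above is a single field on the StatefulSet; an excerpt (other required fields omitted):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: alertmanager
spec:
  replicas: 2
  podManagementPolicy: Parallel   # default is OrderedReady
```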
e605e3d1ea v-m/alertmanager: Migrate PVC to Synology
We also don't need a 4 GB volume for AlertManager; even 500 MB is
way too big for the tiny amount of data it stores, but that's about the
smallest size a filesystem can be.
2025-09-07 10:42:13 -05:00
ab38df1d9f Merge branch 'drop-certs' 2025-09-07 10:33:19 -05:00
a02dfa1dfc cert-manager: Decommission cert-exporter
The `cert-exporter` is no longer needed.  All websites manage their own
certificates with _mod_md_ now, and all internal applications that use
the wildcard certificate fetch it directly from the Kubernetes Secret.
2025-09-07 10:31:36 -05:00
b068a260e7 cert-manager: Drop HLC certificate
This site now obtains its own certificate using Apache _mod_md_.
2025-09-07 10:30:20 -05:00
479a91ae79 Merge branch 'democratic-csi' 2025-09-07 10:25:14 -05:00
87331b24b0 v-m/alerts: Ignore Restic alert for bw0
_bw0.pyrocufflink.blue_ has been decommissioned for some time, so it
doesn't get backed up any more.  We want to keep its previous backups
around, though, in case we ever need to restore something.  This
triggers the "no recent backups" alert, since the last snapshot is over
a week old.  Let's ignore that hostname when generating this alert.
2025-09-07 08:27:19 -05:00
7ad8fff7c6 v-m/vmagent: Use ephemeral storage
The `vmagent` needs a place to spool data it has not yet sent to
Victoria Metrics, but it doesn't really need to be persistent.  As long
as all of the `vmagent` nodes _and_ all of the `vminsert` nodes do not
go down simultaneously, there shouldn't be any data loss.  If they are
all down at the same time, there's probably something else going on and
lost metrics are the least concerning problem.
2025-09-07 08:27:19 -05:00
ee88e5f1c9 dynk8s-provisioner: Remove PVC
The _dynk8s-provisioner_ only needs writable storage to store copies of
the AWS SNS notifications it receives for debugging purposes.  We don't
need to keep these around indefinitely, so using ephemeral node-local
storage is sufficient.  I actually want to get rid of that "feature"
anyway...
2025-09-07 08:27:19 -05:00
cbed5a8d13 jenkins: Drop Gentoo Portage distribution
Now that Aimee OS is based on Buildroot instead of Gentoo, we don't need
to keep syncing and sharing the Gentoo repository.
2025-09-07 08:27:19 -05:00
e63fd199ec firefly-iii: Prefer running on amd64 nodes
Although Firefly III works on a Raspberry Pi, a few things are pretty
slow.  Notably, the search feature takes a really long time to return
any results, which is particularly annoying when trying to add a receipt
via the Receipts app.  Adding a node affinity rule to prefer running on
an x86_64 machine will ensure that it runs fast whenever possible, but
can fall back to running on a Raspberry Pi if necessary.
2025-09-07 08:27:19 -05:00
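A sketch of such a preferred node affinity rule, using the standard architecture label; the weight is illustrative:

```yaml
affinity:
  nodeAffinity:
    # "preferred" means the scheduler favors amd64 nodes but can still
    # fall back to aarch64 if none are available
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["amd64"]
```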
687775c595 invoice-ninja: Fix error in cron container
The "cron" container has not been working correctly for some time.  No
background tasks are getting run, and this error is printed in the log
every minute:

> `Target class [db.schema] does not exist`

It turns out, this is because of the way the PHP `artisan` tool works.
It MUST be able to write to the code directory, apparently to build some
kind of cache.  There may be a way to cache the data ahead of time, but
I haven't found it yet.  For now, it seems the only way to make
Laravel-based applications run in a container is to make the container
filesystem mutable.
2025-09-07 08:27:19 -05:00
0a89502620 20125: Add Music Assistant
Tabitha wants to see Music Assistant in the smart home status app,
mostly to use as a shortcut.
2025-09-07 08:27:19 -05:00
92cf0edc4b v-m/scrape: Scrape Music Assistant via Blackbox
Music Assistant doesn't expose any metrics natively.  Since we really
only care about whether or not it's accessible, scraping it with the
blackbox exporter is fine.
2025-09-07 08:27:19 -05:00
c011a99165 authelia: Allow from pyrocufflink.net
In order to allow access to Authelia from outside the LAN, it needs to
be able to handle the _pyrocufflink.net_ domain in addition to
_pyrocufflink.blue_.  Originally, this was not possible, as Authelia
only supported a single cookie/domain.  Now that it supports multiple
cookies, we can expose both domains.

The main reason for doing this now is to use Authelia's password reset
capability for Mom, since she didn't have a password for her Nextcloud
account that she's just begun using.
2025-09-07 08:27:19 -05:00
7c9737e092 kitchen: Update DTEX calendar URL
I wrote a Thunderbird add-on for my work computer that periodically
exports my entire DTEX calendar to a file.  Unfortunately, the file it
creates is not directly usable by the kitchen screen server currently;
it seems to use a time zone identifier that `tzinfo` doesn't understand:

```
Error in background update:
Traceback (most recent call last):
  File "/usr/local/kitchen/lib64/python3.12/site-packages/kitchen/service/agenda.py", line 19, in _background_update
    await self._update()
  File "/usr/local/kitchen/lib64/python3.12/site-packages/kitchen/service/agenda.py", line 34, in _update
    calendar = await self.fetch_calendar(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/kitchen/lib64/python3.12/site-packages/kitchen/service/caldav.py", line 39, in fetch_calendar
    return icalendar.Calendar.from_ical(r.text)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/kitchen/lib64/python3.12/site-packages/icalendar/cal.py", line 369, in from_ical
    _timezone_cache[component['TZID']] = component.to_tz()
                                         ^^^^^^^^^^^^^^^^^
  File "/usr/local/kitchen/lib64/python3.12/site-packages/icalendar/cal.py", line 659, in to_tz
    return cls()
           ^^^^^
  File "/usr/local/kitchen/lib64/python3.12/site-packages/pytz/tzinfo.py", line 190, in __init__
    self._transition_info[0])
    ~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```

It seems to work fine in Nextcloud, though, so the work-around is to
import it as a subscription in Nextcloud and then read it from there,
using Nextcloud as a sort of proxy.
2025-09-07 08:27:19 -05:00
28d6bdc3a9 kitchen: Pin to amd64 nodes
There is not (currently) an aarch64 build of the kitchen screen server,
so we need to force the pod to run on an x86_64 node.  This seems a good
candidate for running on a Raspberry Pi, so I should go ahead and build
a multi-arch image.
2025-09-07 08:27:19 -05:00
67a1d8d0d5 democratic-csi: Enable volume resize
_democratic-csi_ can also dynamically resize Synology iSCSI LUNs when
PVC resource requests increase.  This requires enabling the external
resizer in the controller pod and marking the StorageClass as supporting
resize.
2025-09-06 23:49:53 -05:00
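Marking the StorageClass as supporting resize is a single field; a sketch, with the name and provisioner string as assumptions (the democratic-csi driver name is set in its deployment values):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi          # hypothetical name
provisioner: org.democratic-csi.iscsi   # hypothetical; must match the configured driver name
allowVolumeExpansion: true      # lets PVC resize requests reach the external resizer
```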
d909fc0566 democratic-csi: Enable volume snapshot support
The _democratic-csi_ controller can create Synology LUN snapshots based
on VolumeSnapshot resources.  This feature can be used to e.g. create
data snapshots before upgrades, etc.
2025-09-06 23:43:25 -05:00
f3798c49e3 democratic-csi: Initial deployment
Deploying _democratic-csi_ to manage PersistentVolumeClaim resources,
mapping them to iSCSI volumes on the Synology.

Eventually, all Longhorn-managed PVCs will be replaced with Synology
iSCSI volumes.  Getting rid of Longhorn should free up a lot of
resources and remove a point of failure from the cluster.
2025-09-06 22:57:05 -05:00
e4f3e8254e Merge pull request 'ntfy: Update to 2.14.0' (#79) from updatebot/ntfy into master
Reviewed-on: #79
2025-08-16 19:20:11 +00:00
8e968703b3 Merge pull request 'authelia: Update to 4.39.6' (#80) from updatebot/authelia into master
Reviewed-on: #80
2025-08-16 19:17:48 +00:00
a5fdaff145 Merge pull request 'tika: Update to 3.2.2.0' (#78) from updatebot/paperless-ngx into master
Reviewed-on: #78
2025-08-16 19:17:18 +00:00
bot
6f3919fe06 authelia: Update to 4.39.6 2025-08-16 11:32:12 +00:00
bot
e140e9d49d ntfy: Update to 2.14.0 2025-08-16 11:32:10 +00:00
bot
f24285d761 tika: Update to 3.2.2.0 2025-08-16 11:32:09 +00:00
8a6b41bacc Revert "music-assistant: Tell players to restart on startup"
This hacky work-around is no longer necessary, as I've figured out why
the players don't (always) get rediscovered when the server restarts.
It turns out, Avahi on the firewall was caching responses to the mDNS PTR
requests Music Assistant makes.  Rather than forward the requests to the
other VLANs, it would respond with its cached information, but in a way
that Music Assistant didn't understand.  Setting `cache-entries-max` to
`0` in `avahi-daemon.conf` on the firewall resolved the issue.

This reverts commit 42a7964991.
2025-08-12 20:17:52 -05:00
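The fix described above is a one-line setting in the Avahi daemon configuration on the firewall:

```ini
# /etc/avahi/avahi-daemon.conf
[server]
cache-entries-max=0   # disable response caching so mDNS queries are forwarded
```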
e0e3eab8b6 Merge branch 'music-assistant' 2025-08-11 21:00:02 -05:00
42a7964991 music-assistant: Tell players to restart on startup
I haven't fully determined why, but when the Music Assistant server
restarts, it marks the _shairport-sync_ players as offline and will not
allow playing to them.  The only way I have found to work around this is
to restart the players after the server restarts.  As that's pretty
cumbersome and annoying, I naturally want to automate it, so I've
created this rudimentary synchronization technique using _ntfy_: each
player listens for notifications on a specific topic, and upon receiving
one, tells _shairport-sync_ to exit.  With the `Restart=` property
configured on the _shairport-sync.service_ unit, _systemd_ will restart
the service, which causes Music Assistant to discover the player again.
2025-08-11 20:59:54 -05:00
ae1d952297 music-assistant: Initial deployment
_Music Assistant_ is pretty straightforward to deploy, despite
upstream's apparent opinion otherwise.  It just needs a small persistent
volume for its media index and customization.  It does need to use the
host network namespace, though, in order to receive multicast
announcements from e.g. AirPlay players, as it doesn't have any way of
statically configuring them.
2025-08-11 20:43:28 -05:00
2a0fdc07df cert-manager: Drop dustinandtabitha.com certificate
This site now obtains its own certificate using Apache _mod_md_.
2025-08-11 08:59:57 -05:00
4977f513c5 dch-webhooks: Add role for Jenkins to deploy
Jenkins needs to be able to patch the Deployment to trigger a restart
after it builds a new container image for _dch-webhooks_.

Note that this manifest must be applied on its own **without
Kustomize**.  Kustomize seems to think the `dch-webhooks` in
`resourceNames` refers to the ConfigMap it manages and "helpfully"
renames it with the name suffix hash.  It's _not_ the ConfigMap, though,
but there's not really any way to tell it this.
2025-08-10 17:43:02 -05:00
3960552f99 calico: Update to v3.30.2 2025-08-08 11:00:27 -05:00
aa27579582 cert-manager: Drop dustin.hatch.name certificate
This site now obtains its own certificate using Apache _mod_md_.
2025-08-07 11:26:23 -05:00
2b109589c2 h-a/{piper,whisper}: Prefer x86_64 nodes
Without a node affinity rule, Kubernetes applies equal weight to the
"big" x86_64 nodes and the "small" aarch64 ones.  Since we would really
rather Piper and Whisper _not_ run on a Raspberry Pi, we need the rule
to express this.
2025-08-07 10:31:10 -05:00
ea4e45e479 Revert "h-a: Schedule Piper, Whisper, Mosquitto with HA"
As it turns out, although Home Assistant itself works perfectly fine on
a Raspberry Pi, Piper and Whisper do not.  They are _much_ too slow to
respond to voice commands.

This reverts commit 32666aa628.
2025-08-07 10:26:37 -05:00
3896dd67eb Merge pull request 'home-assistant: Update to 2025.7.2' (#73) from updatebot/home-assistant into master
Reviewed-on: #73
2025-08-05 14:17:24 +00:00
c5545445b6 Merge pull request 'firefly-iii: Update to 6.2.21' (#74) from updatebot/firefly-iii into master
Reviewed-on: #74
2025-08-03 16:41:17 +00:00
2a7d531aa3 Merge pull request 'authelia: Update to 4.39.5' (#75) from updatebot/authelia into master
Reviewed-on: #75
2025-08-03 16:35:18 +00:00
1998abefbd Merge pull request 'vaultwarden: Update to 1.34.3' (#76) from updatebot/vaultwarden into master
Reviewed-on: #76
2025-08-03 16:34:09 +00:00
1ec974fa2d v-m/alerts: Add alert for Internet down 2025-08-03 11:29:41 -05:00
bot
b2aa70dff0 vaultwarden: Update to 1.34.3 2025-08-02 11:32:29 +00:00
bot
28c7f98cb5 authelia: Update to 4.39.5 2025-08-02 11:32:19 +00:00
bot
14d6af7886 firefly-iii: Update to 6.2.21 2025-08-02 11:32:11 +00:00
bot
a4d05c7288 zwavejs2mqtt: Update to 11.0.1 2025-08-02 11:32:07 +00:00
bot
c10aef5d65 zigbee2mqtt: Update to 2.6.0 2025-08-02 11:32:07 +00:00
bot
474b068708 home-assistant: Update to 2025.7.4 2025-08-02 11:32:06 +00:00
024eaf241f Merge remote-tracking branch 'refs/remotes/origin/master' 2025-07-29 21:56:18 -05:00
a6618cac11 h-a: Update taints for Zigbee/Zwave controllers
With the introduction of the two new Raspberry Pi nodes, which I intend
to use for anything that supports running on aarch64, I'm eliminating
the `du5t1n.me/machine=raspberrypi` taint.  It no longer makes sense, as
the only node that has it is the Zigbee/ZWave controller.  Having
dedicated taints for those roles is much clearer.
2025-07-29 21:39:21 -05:00
8b492d059d xactmon: Pin to x86_64 nodes
There are no ARM builds of the `xactmon` components.
2025-07-29 21:38:06 -05:00
812b09626f cert-manager: Drop chmod777.sh certificate
This site now obtains its own certificate using Apache _mod_md_.
2025-07-28 18:59:06 -05:00
32666aa628 h-a: Schedule Piper, Whisper, Mosquitto with HA
Using pod affinity rules, we can schedule the ancillary processes for
Home Assistant to run on the same node as the main server.
2025-07-27 18:39:55 -05:00
7b440c44ec h-a: Prefer running on a Raspberry Pi
Now that we have Raspberry Pi CM4 worker nodes, let's configure Home
Assistant to run on one, since it's pretty much designed to.
2025-07-27 18:35:07 -05:00
6d2aa9c391 20125: Set log level
Only errors are logged by default, which is less than helpful when
troubleshooting a running but apparently misbehaving application...
2025-07-27 18:20:27 -05:00
b989a7898e 20125: Pin to amd64 nodes
There is no ARM build of the 20125 `status-server`, so we have to pin
the pod to amd64 nodes to prevent it from being scheduled on a Raspberry
Pi.
2025-07-27 18:19:58 -05:00
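Pinning to a single architecture, unlike the "preferred" affinity used elsewhere, can be a hard constraint via a node selector; a minimal sketch:

```yaml
# Pod spec fragment: the pod is only schedulable on amd64 nodes
nodeSelector:
  kubernetes.io/arch: amd64
```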
921fadc44b 20125: Fix website URL anchors
As it turns out, it's not possible to reuse a YAML anchor.  At least in
Rust's `serde_yaml`, only the final definition is used.  All references,
even those that appear before the final definition, use the same
definition.  Thus, each application that refers to its own URL in its
match criteria needs a unique anchor.
2025-07-27 18:16:30 -05:00
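A hypothetical illustration of the resulting convention, with made-up application names and URLs: each entry defines its own anchor rather than redefining a shared one, since `serde_yaml` resolves every alias to the final definition of an anchor with that name.

```yaml
jellyfin:
  url: &jellyfin_url https://jellyfin.example.org
  match: {url: *jellyfin_url}
paperless:
  url: &paperless_url https://paperless.example.org
  match: {url: *paperless_url}
```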
4dc21e6179 sshca: Add machine IDs for CM4 cluster nodes
* _ctrl-2ed83d.k8s.pyrocufflink.black_
* _node-6a3f8.k8s.pyrocufflink.black_
* _node-6ed191.k8s.pyrocufflink.black_
2025-07-27 17:42:43 -05:00
972831d15f 20125: Fix alert selector for Jellyfin
Jellyfin is not scraped by the Blackbox exporter, but rather exposes its
own metrics.
2025-07-27 17:40:54 -05:00
38ee60e099 v-m: Add alerts for Firefly, Paperless, phpipam
_Firefly III_ and _phpipam_ don't export any Prometheus metrics, so we
have to scrape them via the Blackbox Exporter.

Paperless-ngx only exposes metrics via Flower, but since it runs in the
same container as the main application, we can assume that if the former
is unavailable, the latter is as well.
2025-07-27 17:39:28 -05:00
fac4b92b71 cert-manager: Drop hatch.chat certificate
The _hatch.chat_ Matrix server has been gone for quite some time.
2025-07-23 11:59:28 -05:00
81f8c58816 cert-manager: Drop tabitha.biz certificate
This site now obtains its own certificate using Apache _mod_md_.
2025-07-23 11:41:09 -05:00
592ff3ce9e cert-manager: Drop apps.d.x certificate
This site now obtains its own certificate using Apache _mod_md_.
2025-07-23 11:29:34 -05:00
36015084c8 ansible: Allow host-provisioner to read root CA
The Kubernetes root CA certificate is stored in a ConfigMap named
`kube-root-ca.crt` in every namespace.  The _host-provisioner_ needs to
be able to read this ConfigMap in order to prepare control plane nodes,
as it is used by HAProxy to check the health of the API servers running
on each node.
2025-07-23 10:50:24 -05:00
484c17c1d5 authelia: Add address, phone scopes for Jenkins
Not sure why these suddenly need to be granted, but without them, I
cannot log in to Jenkins.
2025-07-22 15:26:29 -05:00
e845e66262 restic: pin to 0.18.0
Let's keep the version of `restic` used by the prune job in sync with
the latest version in Fedora.
2025-07-21 18:58:57 -05:00
717f9244e7 kubelet-csr-approver: Initial commit
The [kubelet-csr-approver][0] is a controller that automatically approves
CSRs for Kubelets that match certain criteria.  I've had it deployed in
the cluster for a while, but apparently never committed the resources.
These manifest files are taken from the [k8s deployment example][1] in
the upstream repository.

[0]: https://github.com/postfinance/kubelet-csr-approver
[1]: https://github.com/postfinance/kubelet-csr-approver/tree/v1.2.10/deploy/k8s
2025-07-21 18:49:44 -05:00
da2b1e60cd autoscaler: Set imagePullPolicy: IfNotPresent
We don't want to pull public container images that already exist.
Doing so can prevent pods from starting if there is any connectivity
issue with the upstream registry.
2025-07-21 17:17:16 -05:00
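The change in this series of commits is a per-container field; a sketch with a hypothetical container:

```yaml
containers:
  - name: autoscaler            # hypothetical container name
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.30.0   # example tag
    imagePullPolicy: IfNotPresent   # only pull when the image is absent from the node
```

(Kubernetes already defaults to `IfNotPresent` for any tag other than `:latest`, so this mainly guards against images pinned to `:latest` or policies set explicitly to `Always`.)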
810134e9bc authelia: Set imagePullPolicy: IfNotPresent
We don't want to pull public container images that already exist.
Doing so can prevent pods from starting if there is any connectivity
issue with the upstream registry.
2025-07-21 17:16:32 -05:00
7fd613ccaf ara: Set imagePullPolicy: IfNotPresent
We don't want to pull public container images that already exist.
Doing so can prevent pods from starting if there is any connectivity
issue with the upstream registry.
2025-07-21 17:14:06 -05:00
68c7e0d6cc argocd: Set imagePullPolicy: IfNotPresent
We don't want to pull public container images that already exist.
Doing so can prevent pods from starting if there is any connectivity
issue with the upstream registry.
2025-07-21 15:07:01 -05:00
5da80c6a55 ntfy: Set imagePullPolicy: IfNotPresent
We don't want to pull public container images that already exist.
Doing so can prevent pods from starting if there is any connectivity
issue with the upstream registry.
2025-07-21 15:07:01 -05:00
32132842be firefly-iii: Set imagePullPolicy: IfNotPresent
We don't want to pull public container images that already exist.
Doing so can prevent pods from starting if there is any connectivity
issue with the upstream registry.
2025-07-21 15:07:01 -05:00
0822afe0b3 kitchen: Round weather metrics
Home Assistant has started sending the full sensor values for weather
metrics to Prometheus, even though their precision is way beyond their
accuracy.  We don't need to see 4+ decimal points for these on the
Kitchen display, so let's round the values when we query.
2025-07-21 14:40:35 -05:00
e51878fa92 ansible: Allow h-p to update scrape-collectd CM
The `scrape-collectd` ConfigMap in the `default` namespace is used by
Victoria Metrics to identify the hosts from which it should scrape
collectd metrics.  When deploying new machines that are _not_ part of
the Kubernetes cluster, we need to explicitly add them to this list.
The _host-provisioner_ can do this with an Ansible task, but it needs
the appropriate permissions to do so.
2025-07-21 12:24:00 -05:00
dbbe23aaa5 cert-manager: Add role for Jenkins to access certs
Ansible playbooks running as Jenkins jobs need to be able to access the
Secret resources containing certificates issued by _cert-manager_ in
order to install them on managed nodes.  Although not all jobs do this
yet, eventually, the _cert-exporter_ will no longer be necessary, as the
_certs.git_ repository will not be used anymore.
2025-07-21 12:24:00 -05:00
d48dabca5b Merge remote-tracking branch 'refs/remotes/origin/master' 2025-07-21 12:02:44 -05:00
16dec1cdec ssh-host-keys: Do not specify a namespace
We don't want to hard-code a namespace for the `ssh-known-hosts`
ConfigMap because that makes it less useful for other projects besides
Jenkins.  Instead, we omit the namespace specification and allow
consumers to specify their own.

Since the _jenkins_ project doesn't have a default namespace (it
specifies resources in both the `jenkins` and `jenkins-jobs`
namespaces), we need to create a sub-project to set the namespace for
the `ssh-known-hosts` ConfigMap.
2025-07-21 11:47:39 -05:00
959959155c Merge pull request 'home-assistant: Update to 2025.7.1' (#69) from updatebot/home-assistant into master
Reviewed-on: #69
2025-07-16 21:55:57 +00:00
b36c132364 Merge pull request 'ntfy: Update to 2.13.0' (#72) from updatebot/ntfy into master
Reviewed-on: #72
2025-07-16 21:49:29 +00:00
dc31ae1cae Merge pull request 'tika: Update to 3.2.1.0' (#71) from updatebot/paperless-ngx into master
Reviewed-on: #71
2025-07-16 21:45:03 +00:00
bot
05048cbaa1 ntfy: Update to 2.13.0 2025-07-12 11:32:13 +00:00
bot
434d420e28 tika: Update to 3.2.1.0 2025-07-12 11:32:11 +00:00
bot
bab05add07 mosquitto: Update to 2.0.22 2025-07-12 11:32:06 +00:00
bot
467365922a zwavejs2mqtt: Update to 10.9.0 2025-07-12 11:32:06 +00:00
bot
0815350de8 zigbee2mqtt: Update to 2.5.1 2025-07-12 11:32:06 +00:00
bot
d48ebb4292 piper: Update to 1.6.2 2025-07-12 11:32:06 +00:00
bot
7ddaf5bda8 home-assistant: Update to 2025.7.1 2025-07-12 11:32:05 +00:00
9645abef5e home-assistant: Pull Zigbee/ZWave images from ghcr
Getting around Docker Hub rate limiting
2025-07-07 08:46:04 -05:00
8491d2ded7 v-m: Switch to quay.io for container images
Docker Hub has blocked ("rate limited") my IP address.  Moving as much
as I can to use images from other sources.  Hopefully they'll unblock me
soon and I can deploy a caching proxy.
2025-07-07 08:43:20 -05:00
ff1e13a5d7 Merge remote-tracking branch 'refs/remotes/origin/master' 2025-07-07 08:43:10 -05:00
093e909475 v-m/scrape: Scrape Victoria Logs 2025-07-06 15:20:16 -05:00
61460e56e9 20125: Mark MinIO backups alerts as system-wide
Backups failing may not prevent services from operating correctly, but
we do want to have visibility into that.
2025-07-06 12:27:07 -05:00
9d18173b3e Merge pull request 'firefly-iii: Update to 6.2.20' (#70) from updatebot/firefly-iii into master
Reviewed-on: #70
2025-07-05 16:08:07 +00:00
bot
52f999fe93 firefly-iii: Update to 6.2.20 2025-07-05 11:32:18 +00:00
cc83a5115a v-m/scrape: Scrape MinIO metrics 2025-07-02 10:29:53 -05:00
370c8486fa authelia: Set claims policy for MinIO
MinIO console needs access to the *groups* scope in order to assign the
correct permissions to users as they log in.
2025-07-01 11:54:01 -05:00
6e2cbeb102 ansible: Add service account for host-provisioner
The _k8s-worker_ Ansible role in the configuration policy now uses the
Kubernetes API to create bootstrap tokens for adding worker nodes to the
cluster.  For this to work, the pod running the host-provisioner must be
associated with a service account that has the correct permissions to
create secrets and access the `cluster-info` ConfigMap.
2025-06-30 16:16:28 -05:00
9d09b9584b Merge pull request 'home-assistant: Update to 2025.6.3' (#67) from updatebot/home-assistant into master
Reviewed-on: #67
2025-06-28 14:27:15 +00:00
e46798b725 Merge pull request 'firefly-iii: Update to 6.2.19' (#68) from updatebot/firefly-iii into master
Reviewed-on: #68
2025-06-28 14:27:02 +00:00
bot
bcd53d2819 firefly-iii: Update to 6.2.19 2025-06-28 11:32:13 +00:00
bot
839b8dbcdc home-assistant: Update to 2025.6.3 2025-06-28 11:32:07 +00:00
404137c4c8 h-a/whisper: Set writable cache dir for HF models
Whisper now needs a writable location for downloading models from
Hugging Face Hub.  The default location is `~/.cache/huggingface/hub`,
but this is not writable in our container.  The path can be controlled
via one of several environment variables, but we're setting `HF_HOME` as
it sets the top-level directory for several related paths.
2025-06-21 14:22:42 -05:00
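The fix described above amounts to pointing Hugging Face's cache at a writable mount; a hedged sketch of the container-spec fragment (the volume name and mount path are assumptions):

```yaml
# Container spec fragment: give Whisper a writable HF_HOME
env:
  - name: HF_HOME            # top-level directory for Hugging Face caches
    value: /data/huggingface
volumeMounts:
  - name: hf-cache           # assumed emptyDir volume defined elsewhere in the pod
    mountPath: /data/huggingface
```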
8e38813d83 Merge pull request 'home-assistant: Update to 2025.4.4' (#61) from updatebot/home-assistant into master
Reviewed-on: #61
2025-06-21 19:15:14 +00:00
7d7199ee10 Merge pull request 'paperless-ngx: Update to 2.17.1' (#66) from updatebot/paperless-ngx into master
Reviewed-on: #66
2025-06-21 19:01:39 +00:00
8a5e8ed720 Merge branch 'xactmon-firefly-token' 2025-06-21 14:00:45 -05:00
fdb4bdb23d Merge branch 'unifi' 2025-06-21 14:00:38 -05:00
1ce3e7ef43 Merge branch 'xactmon-fix-chase' 2025-06-21 14:00:35 -05:00
75edfb74cb v-m/scrape: Increase timeout for k8s job
Scraping metrics from the Kubernetes API server has started taking 20+
seconds recently.  Until I figure out the underlying cause, I'm
increasing the scrape timeout so that the _vmagent_ doesn't give up and
report the API server as "down."
2025-06-21 13:55:23 -05:00
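In a Prometheus-compatible scrape config (as consumed by _vmagent_), the timeout is a per-job setting; a sketch with assumed values:

```yaml
scrape_configs:
  - job_name: kubernetes          # scrapes the Kubernetes API server
    scrape_interval: 1m
    scrape_timeout: 30s           # raised from the 10s default so vmagent
                                  # waits out slow API server responses
    kubernetes_sd_configs:
      - role: endpoints
```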
4106038fe9 cert-manager: Use recursive resolver for checks
I've completely blocked all outgoing unencrypted DNS traffic at the
firewall now, which prevents _cert-manager_ from using its default
behavior of using the authoritative name servers for its managed domains
to poll for ACME challenge DNS TXT record availability.
Fortunately, it has an option to use a recursive resolver (i.e. the
network-provided DNS server) instead.
2025-06-21 13:55:23 -05:00
f4b0d43d25 Merge pull request 'firefly-iii: Update to 6.2.18' (#65) from updatebot/firefly-iii into master
Reviewed-on: #65
2025-06-21 18:36:44 +00:00
bot
6bbd5b89cd gotenberg: Update to 8.21.1 2025-06-21 11:32:18 +00:00
bot
4744e663f1 paperless-ngx: Update to 2.17.1 2025-06-21 11:32:18 +00:00
bot
eb5d31edca firefly-iii: Update to 6.2.18 2025-06-21 11:32:15 +00:00
bot
555ce06992 zwavejs2mqtt: Update to 10.7.0 2025-06-21 11:32:12 +00:00
bot
a391338cfa zigbee2mqtt: Update to 2.4.0 2025-06-21 11:32:12 +00:00
bot
e1e8f86c92 piper: Update to 1.5.4 2025-06-21 11:32:12 +00:00
bot
de5d3bf87c whisper: Update to 2.5.0 2025-06-21 11:32:12 +00:00
bot
c9d3302be1 home-assistant: Update to 2025.6.1 2025-06-21 11:32:11 +00:00
25644150fa Merge pull request 'firefly-iii: Update to 6.2.10' (#60) from updatebot/firefly-iii into master
Reviewed-on: #60
2025-06-15 15:35:17 +00:00
cd8a8b7002 Merge pull request 'paperless-ngx: Update to 2.16.3' (#64) from updatebot/paperless-ngx into master
Reviewed-on: #64
2025-06-15 14:54:10 +00:00
50f0f83dcc Merge pull request 'ntfy: Update to 2.12.0' (#62) from updatebot/ntfy into master
Reviewed-on: #62
2025-06-14 21:58:39 +00:00
abcd007948 home-assistant: Deploy mqtt2vl
`mqtt2vl` is a relatively simple service I developed to read log
messages from an MQTT topic (i.e. those published by ESPHome devices)
and stream them to Victoria Logs over HTTPS.
2025-06-14 16:55:12 -05:00
bot
4d9598af73 ntfy: Update to 2.12.0 2025-06-14 11:32:25 +00:00
bot
81e58e85d0 tika: Update to 3.2.0.0 2025-06-14 11:32:23 +00:00
bot
914dfccb8f paperless-ngx: Update to 2.16.3 2025-06-14 11:32:23 +00:00
bot
86abf880d6 firefly-iii: Update to 6.2.17 2025-06-14 11:32:14 +00:00
e0af6e0549 argocd/apps/grafana: Enable auto sync 2025-06-05 07:09:00 -05:00
9b1a5ef14f grafana: Add Victoria Logs data source 2025-06-05 07:07:55 -05:00
eb754d9112 grafana: Update to 11.5.5
The legacy alerting feature (which we never used) has been deprecated
for a long time and removed in Grafana 11.  The corresponding
configuration block must be removed from the config file or Grafana will
not start.
2025-06-05 07:06:40 -05:00
721d82eac3 paperless-ngx: Make /run writable
The latest version of Paperless-ngx needs a writable `/run` or it will
not even start.
2025-06-05 07:00:59 -05:00
92cf2c1b77 authelia: Update config for 4.39
Authelia made breaking changes to the OIDC issuer configuration in 4.39,
specifically around what claims are present in identity tokens.  Without
a claims policy set, clients will _not_ get the correct claims, which
breaks authentication and authorization in many cases (including
Kubernetes).

While I was fixing that, I went ahead and fixed a few of the other
deprecation warnings.  There are still two that show up at startup, but
fixing them will be a bit more involved, it seems.
2025-06-05 07:00:50 -05:00
85236243c2 Merge remote-tracking branch 'refs/remotes/origin/master' 2025-06-04 07:02:51 -05:00
fb1ef70dd3 Merge pull request 'authelia: Update to 4.39.1' (#59) from updatebot/authelia into master
Reviewed-on: #59
2025-06-03 23:58:31 +00:00
25da978286 Merge pull request 'gotenberg: Update to 8.18.0' (#58) from updatebot/paperless-ngx into master
Reviewed-on: #58
2025-06-03 23:58:12 +00:00
1c936943a0 Merge pull request 'vaultwarden: Update to 1.34.1' (#63) from updatebot/vaultwarden into master
Reviewed-on: #63
2025-06-03 23:54:14 +00:00
bot
f45a8de0c1 vaultwarden: Update to 1.34.1 2025-05-31 11:32:18 +00:00
bot
d27934a211 authelia: Update to 4.39.4 2025-05-31 11:32:17 +00:00
bot
1f02ad70da gotenberg: Update to 8.21.0 2025-05-31 11:32:12 +00:00
bot
8e1ac08d15 paperless-ngx: Update to 2.16.2 2025-05-31 11:32:12 +00:00
eb912adb6d xactmon: Renew Firefly-III API token 2025-05-04 14:39:39 +00:00
43d5d7f39e home-assistant: Run as root in user namespace
Beginning with Home Assistant 2024.12, it is no longer possible to use
custom integrations if the container is running as an unprivileged user.
Fortunately, it can be "tricked" by running as root in an unprivileged
user namespace.

https://github.com/blakeblackshear/frigate-hass-integration/issues/762
https://github.com/home-assistant/core/issues/132336
2025-04-20 17:04:17 -05:00
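Kubernetes supports this "trick" natively via pod-level user namespaces (`hostUsers: false`); a sketch of the pod-template fragment, assuming the cluster and runtime support user namespaces:

```yaml
# Pod template fragment: root inside an unprivileged user namespace
spec:
  hostUsers: false          # run the pod in its own user namespace
  containers:
    - name: home-assistant
      securityContext:
        runAsUser: 0        # root inside the namespace, unprivileged on the host
        runAsGroup: 0
```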
aebdbc2e12 Merge pull request 'home-assistant: Update to 2025.3.4' (#57) from updatebot/home-assistant into master
Reviewed-on: #57
2025-04-20 21:31:11 +00:00
bot
e800d302ea zwavejs2mqtt: Update to 10.2.0 2025-04-19 11:32:07 +00:00
bot
8957bfc1f9 zigbee2mqtt: Update to 2.2.1 2025-04-19 11:32:07 +00:00
bot
54b287d85d home-assistant: Update to 2025.4.3 2025-04-19 11:32:06 +00:00
cf9eae14b4 restic: Add restic-prune CronJob
This CronJob schedules a periodic run of `restic forget`, which deletes
snapshots according to the specified retention period (14 daily, 4
weekly, 12 monthly).

This task used to run on my workstation, scheduled by a systemd timer
unit.  I've kept the same schedule and retention period as before.  Now,
instead of relying on my PC to be on and awake, the cleanup will occur
more regularly.  There's also the added benefit of getting the logs into
Loki.
2025-04-01 19:36:10 -05:00
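The retention policy maps directly onto restic's `--keep-*` flags; a sketch of the CronJob's container args (the image, schedule, and repository credentials are assumptions):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restic-prune
spec:
  schedule: '0 3 * * *'          # assumed schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: restic
              image: restic/restic     # assumed image
              args:
                - forget
                - --prune
                - --keep-daily=14      # retention per the commit message
                - --keep-weekly=4
                - --keep-monthly=12
```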
5c819ef120 paperless-ngx: Work around PDF rendering errors
Occasionally, some documents may have odd rendering errors that
prevent the archival process from working correctly.  I'm less concerned
about the archive document than simply having a centralized storage for
paperwork, so enabling this "continue on soft render error" feature is
appropriate.  As far as I can tell, it has no visible effect for the
documents that could not be imported at all without it.
2025-03-31 06:16:41 -05:00
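As I understand the Paperless-ngx documentation, OCRmyPDF options like this one are passed through as JSON via `PAPERLESS_OCR_USER_ARGS`; a hedged sketch of the env fragment:

```yaml
# Container env fragment: tolerate soft PDF rendering errors during archival
env:
  - name: PAPERLESS_OCR_USER_ARGS
    value: '{"continue_on_soft_render_error": true}'
```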
52094da8fd v-m/scrape: Remove unifi3, Zincati
*unifi3.pyrocufflink.blue* has been replaced by
*unifi-nuptials.host.pyrocufflink.black*.  The former was the last
Fedora CoreOS machine in use, so the entire Zincati scrape job is no
longer needed.
2025-03-29 08:10:50 -05:00
37890e32a1 xactmon/rules: Fix Chase regex for >$1k
Never had a transaction of over $1000 before!  Chase's e-mail messages
have a thousands separator that I wasn't expecting.
2025-03-18 19:27:37 +00:00
7c6b6f4ca4 Merge pull request 'firefly-iii: Update to 6.2.0' (#46) from updatebot/firefly-iii into master
Reviewed-on: #46
2025-03-15 13:07:40 +00:00
a5ce333c74 Merge pull request 'gotenberg: Update to 8.17.3' (#56) from updatebot/paperless-ngx into master
Reviewed-on: #56
2025-03-15 13:06:39 +00:00
cce7e56d02 Merge pull request 'zwavejs2mqtt: Update to 9.31.0' (#55) from updatebot/home-assistant into master
Reviewed-on: #55
2025-03-15 13:00:29 +00:00
bot
ec996f5872 gotenberg: Update to 8.17.3 2025-03-15 11:32:13 +00:00
bot
bb87deb888 firefly-iii: Update to 6.2.9 2025-03-15 11:32:11 +00:00
bot
0762238900 mosquitto: Update to 2.0.21 2025-03-15 11:32:09 +00:00
bot
6aa0b21848 zwavejs2mqtt: Update to 9.33.0 2025-03-15 11:32:09 +00:00
bot
05ebb147c1 zigbee2mqtt: Update to 2.1.3 2025-03-15 11:32:09 +00:00
bot
f907a31650 home-assistant: Update to 2025.3.3 2025-03-15 11:32:08 +00:00
8470af0558 receipts: Deploy Receipts management tool
This is a custom-built application for managing purchase receipts.  It
integrates with Firefly III to fill some of the gaps that `xactmon`
cannot handle, such as restaurant bills with tips, gas station
purchases, purchases with the HSA debit card, refunds, and deposits.

Photos of receipts can be taken directly within the application using
the User Media Web API, or uploaded as existing files.  Each photo is
associated with transaction data, including date, vendor, amount, and
general notes.  These data are also synchronized with Firefly whenever
possible.
2025-03-13 20:26:11 -05:00
b75d83cd32 sshca: Do not sign certs for root
We no longer need *root* in the list of authorized principals for user
certificates issued by SSHCA.
2025-03-04 19:23:49 -06:00
8f5129cbef dch-webhooks: Enable test hosts in provisioner
By default, the _pyrocufflink_ Ansible inventory plugin ignores VMs
whose names begin with `test-`.  This prevents Jenkins from failing to
apply policy to machines that it should not be managing.  The host
provisioner job, though, should apply policy to those machines, so we
need to disable that filter.
2025-03-04 19:23:49 -06:00
127 changed files with 3391 additions and 662 deletions


@@ -14,6 +14,7 @@ system_wide:
- job: dns_recursive
- job: kubelet
- job: kubernetes
- job: minio-backups
- instance: db0.pyrocufflink.blue
- instance: gw1.pyrocufflink.blue
- instance: vmhost0.pyrocufflink.blue
@@ -31,49 +32,63 @@ applications:
- instance: homeassistant.pyrocufflink.blue
- name: Nextcloud
url: &url https://nextcloud.pyrocufflink.net/index.php
url: &url0 https://nextcloud.pyrocufflink.net/index.php
icon:
url: icons/nextcloud.png
alerts:
- instance: *url
- instance: *url0
- instance: cloud0.pyrocufflink.blue
- name: Invoice Ninja
url: &url https://invoiceninja.pyrocufflink.net/
url: &url1 https://invoiceninja.pyrocufflink.net/
icon:
url: icons/invoiceninja.svg
class: light-bg
alerts:
- instance: *url
- instance: *url1
- name: Jellyfin
url: &url https://jellyfin.pyrocufflink.net/
url: https://jellyfin.pyrocufflink.net/
icon:
url: icons/jellyfin.svg
alerts:
- instance: *url
- job: jellyfin
- name: Vaultwarden
url: &url https://bitwarden.pyrocufflink.net/
url: &url2 https://bitwarden.pyrocufflink.net/
icon:
url: icons/vaultwarden.svg
class: light-bg
alerts:
- instance: *url
- instance: *url2
- alertgroup: Bitwarden
- name: Paperless-ngx
url: &url https://paperless.pyrocufflink.blue/
url: &url3 https://paperless.pyrocufflink.blue/
icon:
url: icons/paperless-ngx.svg
alerts:
- instance: *url
- instance: *url3
- alertgroup: Paperless-ngx
- job: paperless-ngx
- name: Firefly III
url: &url https://firefly.pyrocufflink.blue/
url: &url4 https://firefly.pyrocufflink.blue/
icon:
url: icons/firefly-iii.svg
alerts:
- instance: *url
- instance: *url4
- name: Receipts
url: &url5 https://receipts.pyrocufflink.blue/
icon:
url: https://receipts.pyrocufflink.blue/static/icons/icon-512.png
alerts:
- instance: *url5
- name: Music Assistant
url: &url6 https://music.pyrocufflink.blue/
icon:
url: https://music.pyrocufflink.blue/apple-touch-icon.png
alerts:
- instance: *url6


@@ -33,11 +33,16 @@ spec:
- name: status-server
image: git.pyrocufflink.net/packages/20125.home
imagePullPolicy: Always
env:
- name: RUST_LOG
value: info,status_server=debug
volumeMounts:
- mountPath: /usr/local/share/20125.home/config.yml
name: config
subPath: config.yml
readOnly: True
nodeSelector:
kubernetes.io/arch: amd64
imagePullSecrets:
- name: imagepull-gitea
volumes:


@@ -32,6 +32,7 @@ spec:
containers:
- name: ara-api
image: quay.io/recordsansible/ara-api
imagePullPolicy: IfNotPresent
env:
- name: ARA_BASE_DIR
value: /etc/ara


@@ -1,6 +1,19 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
transformers:
- |
apiVersion: builtin
kind: NamespaceTransformer
metadata:
name: namespace-transformer
namespace: ansible
unsetOnly: true
setRoleBindingSubjects: allServiceAccounts
fieldSpecs:
- path: metadata/namespace
create: true
labels:
- pairs:
app.kubernetes.io/instance: ansible
@@ -9,8 +22,6 @@ labels:
- pairs:
app.kubernetes.io/part-of: ansible
namespace: ansible
resources:
- ../dch-root-ca
- ../ssh-host-keys


@@ -23,3 +23,148 @@ subjects:
- kind: ServiceAccount
name: dch-webhooks
namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: host-provisioner
labels:
app.kubernetes.io/name: host-provisioner
app.kubernetes.io/component: host-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: host-provisioner
namespace: kube-public
annotations:
kubernetes.io/description: >-
Allows the host-provisioner to access the _cluster-info_ ConfigMap,
which it uses to get the connection details for the Kubernetes API
server, including the issuing CA certificate, to pass to `kubeadm
join` on a new worker node.
rules:
- apiGroups:
- ''
resources:
- configmaps
verbs:
- get
resourceNames:
- cluster-info
- kube-root-ca.crt
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: host-provisioner
annotations:
kubernetes.io/description: >-
Allows the host-provisioner to manipulate labels, taints, etc. on
nodes it adds to the cluster.
rules:
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: host-provisioner
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: host-provisioner
subjects:
- kind: ServiceAccount
name: host-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: host-provisioner
namespace: kube-system
annotations:
kubernetes.io/description: >-
Allows the host-provisioner to create bootstrap tokens in order to
add new nodes to the Kubernetes cluster.
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- create
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: host-provisioner
namespace: kube-public
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: host-provisioner
subjects:
- kind: ServiceAccount
name: host-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: host-provisioner
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: host-provisioner
subjects:
- kind: ServiceAccount
name: host-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: host-provisioner
namespace: victoria-metrics
annotations:
kubernetes.io/description: >-
Allows the host-provisioner to update the scrape-collectd
ConfigMap when adding new hosts.
rules:
- apiGroups:
- ''
resources:
- configmaps
verbs:
- patch
- get
resourceNames:
- scrape-collectd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: host-provisioner
namespace: victoria-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: host-provisioner
subjects:
- kind: ServiceAccount
name: host-provisioner


@@ -0,0 +1,16 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: csi-synology
namespace: argocd
spec:
destination:
server: https://kubernetes.default.svc
project: default
source:
path: democratic-csi
repoURL: https://git.pyrocufflink.blue/infra/kubernetes.git
targetRevision: master
syncPolicy:
automated:
prune: true


@@ -11,3 +11,6 @@ spec:
path: grafana
repoURL: https://git.pyrocufflink.blue/infra/kubernetes.git
targetRevision: master
syncPolicy:
automated:
prune: true


@@ -0,0 +1,18 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: &name receipts
namespace: argocd
labels:
vendor: dustin
spec:
destination:
server: https://kubernetes.default.svc
project: default
source:
path: *name
repoURL: https://git.pyrocufflink.blue/infra/kubernetes.git
targetRevision: master
syncPolicy:
automated:
prune: true


@@ -24,6 +24,66 @@ configMapGenerator:
- policy.csv
patches:
- patch: |-
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: argocd-application-controller
spec:
template:
spec:
containers:
- name: argocd-application-controller
imagePullPolicy: IfNotPresent
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-notifications-controller
spec:
template:
spec:
containers:
- name: argocd-notifications-controller
imagePullPolicy: IfNotPresent
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-redis
spec:
template:
spec:
containers:
- name: redis
imagePullPolicy: IfNotPresent
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-repo-server
spec:
template:
spec:
containers:
- name: argocd-repo-server
imagePullPolicy: IfNotPresent
- patch: |-
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-server
spec:
template:
spec:
containers:
- name: argocd-server
imagePullPolicy: IfNotPresent
- patch: |-
$patch: delete
apiVersion: apiextensions.k8s.io/v1


@@ -54,7 +54,7 @@ spec:
- name: authelia
image: ghcr.io/authelia/authelia
env:
- name: AUTHELIA_JWT_SECRET_FILE
- name: AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET_FILE
value: /run/authelia/secrets/jwt.secret
- name: AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE
value: /run/authelia/secrets/ldap.password
@@ -127,9 +127,10 @@ spec:
tls:
- hosts:
- auth.pyrocufflink.blue
- auth.pyrocufflink.net
rules:
- host: auth.pyrocufflink.blue
http:
http: &http
paths:
- path: /
pathType: Prefix
@@ -138,4 +139,5 @@ spec:
name: authelia
port:
name: http
- host: auth.pyrocufflink.net
http: *http


@@ -74,74 +74,95 @@ authentication_backend:
implementation: activedirectory
tls:
minimum_version: TLS1.2
url: ldaps://pyrocufflink.blue
address: ldaps://pyrocufflink.blue
user: CN=svc.authelia,CN=Users,DC=pyrocufflink,DC=blue
certificates_directory: /run/authelia/certs
identity_providers:
oidc:
claims_policies:
default:
id_token:
- groups
- email
- email_verified
- preferred_username
- name
clients:
- id: e20a50c2-55eb-4cb1-96ce-fe71c61c1d89
description: Jenkins
secret: >-
- client_id: e20a50c2-55eb-4cb1-96ce-fe71c61c1d89
client_name: Jenkins
client_secret: >-
$argon2id$v=19$m=65536,t=3,p=4$qoo6+3ToLbsZOI/BxcppGw$srNBfpIHqpxLh+VfVNNe27A1Ci9dCKLfB8rWXLNkv44
redirect_uris:
- https://jenkins.pyrocufflink.blue/securityRealm/finishLogin
response_types:
- code
scopes:
- openid
- groups
- profile
- email
- offline_access
- address
- phone
authorization_policy: one_factor
pre_configured_consent_duration: 8h
token_endpoint_auth_method: client_secret_post
- id: kubernetes
description: Kubernetes
token_endpoint_auth_method: client_secret_basic
- client_id: kubernetes
client_name: Kubernetes
public: true
claims_policy: default
redirect_uris:
- http://localhost:8000
- http://localhost:18000
- https://headlamp.pyrocufflink.blue/oidc-callback
authorization_policy: one_factor
pre_configured_consent_duration: 8h
- id: 1b6adbfc-d9e0-4cab-b780-e410639dc420
description: MinIO
secret: >-
- client_id: 1b6adbfc-d9e0-4cab-b780-e410639dc420
client_name: MinIO
client_secret: >-
$pbkdf2-sha512$310000$TkQ1BwLrr.d8AVGWk2rLhA$z4euAPhkkZdjcxKFD3tZRtNQ/R78beW7epJ.BGFWSwQdAme5TugNj9Ba.aL5TEqrBDmXRW0xiI9EbxSszckG5A
redirect_uris:
- https://burp.pyrocufflink.blue:9090/oauth_callback
- https://minio.backups.pyrocufflink.blue/oauth_callback
- id: step-ca
description: step-ca
claims_policy: default
- client_id: step-ca
client_name: step-ca
public: true
claims_policy: default
redirect_uris:
- http://127.0.0.1
pre_configured_consent_duration: 8h
- id: argocd
description: Argo CD
- client_id: argocd
client_name: Argo CD
claims_policy: default
pre_configured_consent_duration: 8h
redirect_uris:
- https://argocd.pyrocufflink.blue/auth/callback
secret: >-
client_secret: >-
$pbkdf2-sha512$310000$l/uOezgWjqe3boGLYAnKcg$uqn1FC8Lj2y1NG5Q91PeLfLLUQ.qtlKFLd0AWJ56owLME9mV/Zx8kQ2x7OS/MOoMLmUgKd4zogYKab2HGFr0kw
- id: argocd-cli
description: argocd CLI
- client_id: argocd-cli
client_name: argocd CLI
public: true
claims_policy: default
pre_configured_consent_duration: 8h
audience:
- argocd-cli
redirect_uris:
- http://localhost:8085/auth/callback
response_types:
- code
scopes:
- openid
- groups
- profile
- email
- groups
- offline_access
- id: sshca
description: SSHCA
- client_id: sshca
client_name: SSHCA
public: true
claims_policy: default
pre_configured_consent_duration: 4h
redirect_uris:
- http://127.0.0.1
@@ -157,17 +178,20 @@ log:
notifier:
smtp:
disable_require_tls: true
host: mail.pyrocufflink.blue
port: 25
address: 'mail.pyrocufflink.blue:25'
sender: auth@pyrocufflink.net
session:
domain: pyrocufflink.blue
expiration: 1d
inactivity: 4h
redis:
host: redis
port: 6379
cookies:
- domain: pyrocufflink.blue
authelia_url: 'https://auth.pyrocufflink.blue'
- domain: pyrocufflink.net
authelia_url: 'https://auth.pyrocufflink.net'
server:
buffers:
@@ -175,7 +199,7 @@ server:
storage:
postgres:
host: postgresql.pyrocufflink.blue
address: postgresql.pyrocufflink.blue
database: authelia
username: authelia
password: unused


@@ -37,6 +37,7 @@ patches:
spec:
containers:
- name: authelia
imagePullPolicy: IfNotPresent
env:
- name: AUTHELIA_STORAGE_POSTGRES_TLS_CERTIFICATE_CHAIN_FILE
value: /run/authelia/certs/postgresql/tls.crt
@@ -57,4 +58,4 @@ patches:
name: dch-root-ca
images:
- name: ghcr.io/authelia/authelia
newTag: 4.38.19
newTag: 4.39.15


@@ -22,6 +22,7 @@ patches:
spec:
containers:
- name: cluster-autoscaler
imagePullPolicy: IfNotPresent
command:
- ./cluster-autoscaler
- --v=4

calico/kustomization.yaml (new file)

@@ -0,0 +1,10 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
labels:
- pairs:
app.kubernetes.io/instance: calico
resources:
- https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/operator-crds.yaml
- https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/tigera-operator.yaml


@@ -1,41 +0,0 @@
git_repo: gitea@git.pyrocufflink.blue:dustin/certs.git
certs:
- name: pyrocufflink-cert
namespace: default
key: certificates/_.pyrocufflink.net.key
cert: certificates/_.pyrocufflink.net.crt
bundle: certificates/_.pyrocufflink.net.pem
- name: dustinhatchname-cert
namespace: default
key: acme.sh/dustin.hatch.name/dustin.hatch.name.key
cert: acme.sh/dustin.hatch.name/fullchain.cer
- name: hatchchat-cert
namespace: default
key: certificates/hatch.chat.key
cert: certificates/hatch.chat.crt
bundle: certificates/hatch.chat.pem
- name: tabitha-cert
namespace: default
key: certificates/tabitha.biz.key
cert: certificates/tabitha.biz.crt
bundle: certificates/tabitha.biz.pem
- name: chmod777-cert
namespace: default
key: certificates/chmod777.sh.key
cert: certificates/chmod777.sh.crt
bundle: certificates/chmod777.sh.pem
- name: dustinandtabitha-cert
namespace: default
key: certificates/dustinandtabitha.com.key
cert: certificates/dustinandtabitha.com.crt
bundle: certificates/dustinandtabitha.com.pem
- name: hlc-cert
namespace: default
key: certificates/hatchlearningcenter.org.key
cert: certificates/hatchlearningcenter.org.crt
bundle: certificates/hatchlearningcenter.org.pem
- name: appsxyz-cert
namespace: default
key: certificates/apps.du5t1n.xyz.key
cert: certificates/apps.du5t1n.xyz.crt
bundle: certificates/apps.du5t1n.xyz.pem


@@ -1,83 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: cert-exporter
namespace: cert-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cert-exporter
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
resourceNames:
- pyrocufflink-cert
- dustinhatchname-cert
- hatchchat-cert
- tabitha-cert
- chmod777-cert
- dustinandtabitha-cert
- hlc-cert
- appsxyz-cert
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cert-exporter
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cert-exporter
subjects:
- kind: ServiceAccount
name: cert-exporter
namespace: cert-manager
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: cert-exporter
namespace: cert-manager
spec:
timeZone: America/Chicago
schedule: '27 9,20 * * *'
jobTemplate: &jobtemplate
spec:
template:
spec:
containers:
- image: git.pyrocufflink.net/containerimages/cert-exporter
name: cert-exporter
volumeMounts:
- mountPath: /etc/cert-exporter/config.yml
name: config
subPath: config.yml
readOnly: true
- mountPath: /home/cert-exporter/.ssh/id_ed25519
name: sshkeys
subPath: cert-exporter.pem
readOnly: true
- mountPath: /etc/ssh/ssh_known_hosts
name: sshkeys
subPath: ssh_known_hosts
readOnly: true
securityContext:
fsGroup: 1000
serviceAccount: cert-exporter
volumes:
- name: config
configMap:
name: cert-exporter
- name: sshkeys
secret:
secretName: cert-exporter-sshkey
defaultMode: 00440
restartPolicy: Never


@@ -16,140 +16,3 @@ spec:
privateKey:
algorithm: ECDSA
rotationPolicy: Always
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: dustinhatchname-cert
spec:
secretName: dustinhatchname-cert
dnsNames:
- dustin.hatch.name
- '*.dustin.hatch.name'
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: zerossl
privateKey:
algorithm: ECDSA
rotationPolicy: Always
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: hatchchat-cert
spec:
secretName: hatchchat-cert
dnsNames:
- hatch.chat
- '*.hatch.chat'
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: zerossl
privateKey:
algorithm: ECDSA
rotationPolicy: Always
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: tabitha-cert
spec:
secretName: tabitha-cert
dnsNames:
- tabitha.biz
- '*.tabitha.biz'
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: zerossl
privateKey:
algorithm: ECDSA
rotationPolicy: Always
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: chmod777-cert
spec:
secretName: chmod777-cert
dnsNames:
- chmod777.sh
- '*.chmod777.sh'
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: zerossl
privateKey:
algorithm: ECDSA
rotationPolicy: Always
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: dustinandtabitha-cert
spec:
secretName: dustinandtabitha-cert
dnsNames:
- dustinandtabitha.com
- '*.dustinandtabitha.com'
- dustinandtabitha.xyz
- '*.dustinandtabitha.xyz'
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: zerossl
privateKey:
algorithm: ECDSA
rotationPolicy: Always
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: hlc-cert
spec:
secretName: hlc-cert
dnsNames:
- hatchlearningcenter.org
- '*.hatchlearningcenter.org'
- hatchlearningcenter.com
- '*.hatchlearningcenter.com'
- hlckc.org
- '*.hlckc.org'
- hlckc.com
- '*.hlckc.com'
- hlcks.org
- '*.hlcks.org'
- hlcks.com
- '*.hlcks.com'
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: zerossl
privateKey:
algorithm: ECDSA
rotationPolicy: Always
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: appsxyz-cert
spec:
secretName: appsxyz-cert
dnsNames:
- apps.du5t1n.xyz
issuerRef:
group: cert-manager.io
kind: ClusterIssuer
name: zerossl
privateKey:
algorithm: ECDSA
rotationPolicy: Always

cert-manager/jenkins.yaml (new file, 27 lines)

@@ -0,0 +1,27 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
resourceNames:
- pyrocufflink-cert
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins
subjects:
- kind: ServiceAccount
name: default
namespace: jenkins-jobs

View File

@@ -5,17 +5,9 @@ resources:
- https://github.com/cert-manager/cert-manager/releases/download/v1.16.4/cert-manager.yaml
- cluster-issuer.yaml
- certificates.yaml
- cert-exporter.yaml
- dch-ca-issuer.yaml
- secrets.yaml
configMapGenerator:
- name: cert-exporter
namespace: cert-manager
files:
- config.yml=cert-exporter.config.yml
options:
disableNameSuffixHash: True
- jenkins.yaml
secretGenerator:
- name: zerossl-eab
@@ -25,12 +17,6 @@ secretGenerator:
options:
disableNameSuffixHash: true
- name: cert-exporter-sshkey
namespace: cert-manager
files:
- cert-exporter.pem
- ssh_known_hosts
- name: cloudflare
namespace: cert-manager
files:
@@ -52,3 +38,13 @@ patches:
nameservers:
- 172.30.0.1
dnsPolicy: None
- patch: |
- op: add
path: /spec/template/spec/containers/0/args/-
value: >-
--dns01-recursive-nameservers-only
target:
group: apps
version: v1
kind: Deployment
name: cert-manager

crio-clean.sh (new file, 55 lines)

@@ -0,0 +1,55 @@
#!/bin/sh
# vim: set sw=4 ts=4 sts=4 et :
usage() {
printf 'usage: %s node\n' "${0##*/}"
}
drain_node() {
kubectl drain \
--ignore-daemonsets \
--delete-emptydir-data \
"$1"
}
stop_node() {
ssh "$1" doas sh <<EOF # lang: bash
echo 'Stopping kubelet' >&2
systemctl stop kubelet
echo 'Stopping all containers' >&2
crictl ps -aq | xargs crictl stop
echo 'Stopping CRI-O' >&2
systemctl stop crio
EOF
}
wipe_crio() {
echo 'Wiping container storage'
ssh "$1" doas crio wipe -f
}
start_node() {
echo 'Starting Kubelet/CRI-O'
ssh "$1" doas systemctl start crio kubelet
}
uncordon_node() {
kubectl uncordon "$1"
}
main() {
local node=$1
if [ -z "${node}" ]; then
usage >&2
exit 2
fi
drain_node "${node}" || exit
stop_node "${node}" || exit
wipe_crio "${node}" || exit
start_node "${node}" || exit
uncordon_node "${node}" || exit
}
main "$@"
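The `|| exit` after every step in `main` is load-bearing: if the drain or the wipe fails, the node must never be restarted and uncordoned with a half-cleaned storage graph. A self-contained sketch of that fail-fast pattern (the `step` and `run_all` functions are illustrative, not part of the script):

```shell
# Illustrative only: each step aborts the chain on failure, so later
# steps (like uncordoning) never run after an earlier one fails.
step() {
    echo "running: $1"
    [ "$1" != "wipe" ]  # pretend the wipe step fails
}

run_all() {
    step drain || return 1
    step wipe || return 1
    step start || return 1  # never reached once wipe fails
}

run_all || echo "aborted with status $?"
```

Here the sketch prints the drain and wipe steps, then reports `aborted with status 1`; the start step never runs, just as `crio-clean.sh` never uncordons a node whose wipe failed.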

View File

@@ -67,6 +67,8 @@ spec:
value: /run/secrets/host-provisioner/rabbitmq/tls.key
- name: AMQP_EXTERNAL_CREDENTIALS
value: '1'
- name: PYROCUFFLINK_EXCLUDE_TEST
value: 'false'
securityContext:
readOnlyRootFilesystem: true
volumeMounts:
@@ -88,11 +90,15 @@ spec:
- mountPath: /tmp
name: tmp
subPath: tmp
- mountPath: /var/tmp
name: tmp
subPath: tmp
securityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
serviceAccountName: host-provisioner
volumes:
- name: dch-root-ca
configMap:

dch-webhooks/jenkins.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins.dch-webhooks
rules:
- apiGroups:
- apps
resources:
- deployments
resourceNames:
- dch-webhooks
verbs:
- get
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins.dch-webhooks
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins.dch-webhooks
subjects:
- kind: ServiceAccount
name: default
namespace: jenkins-jobs

democratic-csi/.gitignore (new vendored file, 2 lines)

@@ -0,0 +1,2 @@
synology.password
synology-iscsi-chap.yaml

View File

@@ -0,0 +1,385 @@
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-synology-democratic-csi-node
namespace: democratic-csi
labels:
app.kubernetes.io/name: democratic-csi
app.kubernetes.io/csi-role: node
app.kubernetes.io/component: node-linux
spec:
selector:
matchLabels:
app.kubernetes.io/name: democratic-csi
app.kubernetes.io/csi-role: node
app.kubernetes.io/component: node-linux
template:
metadata:
labels:
app.kubernetes.io/name: democratic-csi
app.kubernetes.io/csi-role: node
app.kubernetes.io/component: node-linux
spec:
serviceAccount: csi-synology-democratic-csi-node-sa
priorityClassName: system-node-critical
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
hostAliases: []
hostIPC: true
hostPID: false
containers:
- name: csi-driver
image: docker.io/democraticcsi/democratic-csi:latest
args:
- --csi-version=1.5.0
- --csi-name=org.democratic-csi.iscsi-synology
- --driver-config-file=/config/driver-config-file.yaml
- --log-level=info
- --csi-mode=node
- --server-socket=/csi-data/csi.sock.internal
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- SYS_ADMIN
privileged: true
env:
- name: CSI_NODE_ID
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
terminationMessagePath: /tmp/termination-log
terminationMessagePolicy: File
livenessProbe:
failureThreshold: 3
exec:
command:
- bin/liveness-probe
- --csi-version=1.5.0
- --csi-address=/csi-data/csi.sock.internal
initialDelaySeconds: 10
timeoutSeconds: 15
periodSeconds: 60
volumeMounts:
- name: socket-dir
mountPath: /csi-data
- name: kubelet-dir
mountPath: /var/lib/kubelet
mountPropagation: Bidirectional
- name: iscsi-dir
mountPath: /etc/iscsi
mountPropagation: Bidirectional
- name: iscsi-info
mountPath: /var/lib/iscsi
mountPropagation: Bidirectional
- name: modules-dir
mountPath: /lib/modules
readOnly: true
- name: localtime
mountPath: /etc/localtime
readOnly: true
- name: udev-data
mountPath: /run/udev
- name: host-dir
mountPath: /host
mountPropagation: Bidirectional
- mountPath: /sys
name: sys-dir
- name: dev-dir
mountPath: /dev
- name: config
mountPath: /config
- name: csi-proxy
image: docker.io/democraticcsi/csi-grpc-proxy:v0.5.6
env:
- name: BIND_TO
value: unix:///csi-data/csi.sock
- name: PROXY_TO
value: unix:///csi-data/csi.sock.internal
volumeMounts:
- mountPath: /csi-data
name: socket-dir
- name: driver-registrar
image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.0
args:
- --v=5
- --csi-address=/csi-data/csi.sock
- --kubelet-registration-path=/var/lib/kubelet/plugins/org.democratic-csi.iscsi-synology/csi.sock
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
livenessProbe:
exec:
command:
- /csi-node-driver-registrar
- --kubelet-registration-path=/var/lib/kubelet/plugins/org.democratic-csi.iscsi-synology/csi.sock
- --mode=kubelet-registration-probe
volumeMounts:
- mountPath: /csi-data
name: socket-dir
- name: registration-dir
mountPath: /registration
- name: kubelet-dir
mountPath: /var/lib/kubelet
- name: cleanup
image: docker.io/busybox:1.37.0
command:
- /bin/sh
args:
- -c
- |-
sleep infinity &
trap 'kill $!' INT TERM
wait
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- rm -rf /plugins/org.democratic-csi.iscsi-synology /registration/org.democratic-csi.iscsi-synology-reg.sock
volumeMounts:
- name: plugins-dir
mountPath: /plugins
- name: registration-dir
mountPath: /registration
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/org.democratic-csi.iscsi-synology
type: DirectoryOrCreate
- name: plugins-dir
hostPath:
path: /var/lib/kubelet/plugins
type: Directory
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry
type: Directory
- name: kubelet-dir
hostPath:
path: /var/lib/kubelet
type: Directory
- name: iscsi-dir
hostPath:
path: /etc/iscsi
type: Directory
- name: iscsi-info
hostPath:
path: /var/lib/iscsi
- name: dev-dir
hostPath:
path: /dev
type: Directory
- name: modules-dir
hostPath:
path: /lib/modules
- name: localtime
hostPath:
path: /etc/localtime
- name: udev-data
hostPath:
path: /run/udev
- name: sys-dir
hostPath:
path: /sys
type: Directory
- name: host-dir
hostPath:
path: /
type: Directory
- name: config
secret:
secretName: csi-synology-democratic-csi-driver-config
nodeSelector:
kubernetes.io/os: linux
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: csi-synology-democratic-csi-controller
namespace: democratic-csi
labels:
app.kubernetes.io/name: democratic-csi
app.kubernetes.io/csi-role: controller
app.kubernetes.io/component: controller-linux
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: democratic-csi
app.kubernetes.io/csi-role: controller
app.kubernetes.io/component: controller-linux
template:
metadata:
labels:
app.kubernetes.io/name: democratic-csi
app.kubernetes.io/csi-role: controller
app.kubernetes.io/component: controller-linux
spec:
serviceAccount: csi-synology-democratic-csi-controller-sa
priorityClassName: system-cluster-critical
hostNetwork: false
dnsPolicy: ClusterFirst
hostAliases: []
hostIPC: false
containers:
- name: external-attacher
image: registry.k8s.io/sig-storage/csi-attacher:v4.4.0
args:
- --v=5
- --leader-election
- --leader-election-namespace=democratic-csi
- --timeout=90s
- --worker-threads=10
- --csi-address=/csi-data/csi.sock
volumeMounts:
- mountPath: /csi-data
name: socket-dir
- name: external-provisioner
image: registry.k8s.io/sig-storage/csi-provisioner:v3.6.0
args:
- --v=5
- --leader-election
- --leader-election-namespace=democratic-csi
- --timeout=90s
- --worker-threads=10
- --extra-create-metadata
- --csi-address=/csi-data/csi.sock
volumeMounts:
- mountPath: /csi-data
name: socket-dir
env:
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: external-resizer
image: "registry.k8s.io/sig-storage/csi-resizer:v1.9.0"
args:
- --v=5
- --leader-election
- --leader-election-namespace=democratic-csi
- --timeout=90s
- --workers=10
- --csi-address=/csi-data/csi.sock
volumeMounts:
- mountPath: /csi-data
name: socket-dir
env:
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
# https://github.com/kubernetes-csi/external-snapshotter
# beware upgrading version:
# - https://github.com/rook/rook/issues/4178
# - https://github.com/kubernetes-csi/external-snapshotter/issues/147#issuecomment-513664310
- name: external-snapshotter
image: "registry.k8s.io/sig-storage/csi-snapshotter:v8.2.1"
args:
- --v=5
- --leader-election
- --leader-election-namespace=democratic-csi
- --timeout=90s
- --worker-threads=10
- --csi-address=/csi-data/csi.sock
volumeMounts:
- mountPath: /csi-data
name: socket-dir
env:
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: csi-driver
image: docker.io/democraticcsi/democratic-csi:latest
args:
- --csi-version=1.5.0
- --csi-name=org.democratic-csi.iscsi-synology
- --driver-config-file=/config/driver-config-file.yaml
- --log-level=debug
- --csi-mode=controller
- --server-socket=/csi-data/csi.sock.internal
livenessProbe:
failureThreshold: 3
exec:
command:
- bin/liveness-probe
- --csi-version=1.5.0
- --csi-address=/csi-data/csi.sock.internal
initialDelaySeconds: 10
timeoutSeconds: 15
periodSeconds: 60
volumeMounts:
- name: socket-dir
mountPath: /csi-data
- name: config
mountPath: /config
- name: csi-proxy
image: docker.io/democraticcsi/csi-grpc-proxy:v0.5.6
env:
- name: BIND_TO
value: unix:///csi-data/csi.sock
- name: PROXY_TO
value: unix:///csi-data/csi.sock.internal
volumeMounts:
- mountPath: /csi-data
name: socket-dir
volumes:
- name: socket-dir
emptyDir: {}
- name: config
secret:
secretName: csi-synology-democratic-csi-driver-config
nodeSelector:
kubernetes.io/os: linux
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: org.democratic-csi.iscsi-synology
labels:
app.kubernetes.io/name: democratic-csi
spec:
attachRequired: true
podInfoOnMount: true

View File

@@ -0,0 +1,93 @@
driver: synology-iscsi
httpConnection:
protocol: https
host: storage0.pyrocufflink.blue
port: 5001
username: democratic-csi
allowInsecure: true
# should be unique across all installs to the same NAS
session: "democratic-csi"
serialize: true
# Choose the DSM volume this driver operates on. The default value is /volume1.
# synology:
# volume: /volume1
iscsi:
targetPortal: "server[:port]"
# for multipath
targetPortals: [] # [ "server[:port]", "server[:port]", ... ]
# leave empty to omit usage of -I with iscsiadm
interface: ""
# can be whatever you would like
baseiqn: "iqn.2000-01.com.synology:csi."
# MUST ensure uniqueness
# full iqn limit is 223 bytes, plan accordingly
namePrefix: ""
nameSuffix: ""
# documented below are several blocks
# pick the option appropriate for you based on what your backing fs is and desired features
# you do not need to alter dev_attribs under normal circumstances but they may be altered in advanced use-cases
# These options can also be configured per storage-class:
# See https://github.com/democratic-csi/democratic-csi/blob/master/docs/storage-class-parameters.md
lunTemplate:
# can be static value or handlebars template
#description: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
# btrfs thin provisioning
type: "BLUN"
# tpws = Hardware-assisted zeroing
# caw = Hardware-assisted locking
# 3pc = Hardware-assisted data transfer
# tpu = Space reclamation
# can_snapshot = Snapshot
#dev_attribs:
#- dev_attrib: emulate_tpws
# enable: 1
#- dev_attrib: emulate_caw
# enable: 1
#- dev_attrib: emulate_3pc
# enable: 1
#- dev_attrib: emulate_tpu
# enable: 0
#- dev_attrib: can_snapshot
# enable: 1
# btrfs thick provisioning
# only zeroing and locking supported
#type: "BLUN_THICK"
# tpws = Hardware-assisted zeroing
# caw = Hardware-assisted locking
#dev_attribs:
#- dev_attrib: emulate_tpws
# enable: 1
#- dev_attrib: emulate_caw
# enable: 1
# ext4 thin provisioning; the UI sends everything with enabled=0
#type: "THIN"
# ext4 thin with advanced legacy features set
# can only alter tpu (all others are set as enabled=1)
#type: "ADV"
#dev_attribs:
#- dev_attrib: emulate_tpu
# enable: 1
# ext4 thick
# can only alter caw
#type: "FILE"
#dev_attribs:
#- dev_attrib: emulate_caw
# enable: 1
lunSnapshotTemplate:
is_locked: true
# https://kb.synology.com/en-me/DSM/tutorial/What_is_file_system_consistent_snapshot
is_app_consistent: true
targetTemplate:
auth_type: 0
max_sessions: 0

View File

@@ -0,0 +1,32 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: democratic-csi
labels:
- pairs:
app.kubernetes.io/instance: csi-synology
resources:
- namespace.yaml
- rbac.yaml
- democratic-csi.yaml
- secrets.yaml
- storageclass.yaml
patches:
- patch: |
kind: Deployment
apiVersion: apps/v1
metadata:
name: csi-synology-democratic-csi-controller
namespace: democratic-csi
spec:
template:
spec:
hostNetwork: true
images:
- name: docker.io/democraticcsi/democratic-csi
newName: ghcr.io/democratic-csi/democratic-csi
digest: sha256:da41c0c24cbcf67426519b48676175ab3a16e1d3e50847fa06152f5eddf834b1

View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: democratic-csi

democratic-csi/rbac.yaml (new file, 316 lines)

@@ -0,0 +1,316 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-synology-democratic-csi-controller-sa
namespace: democratic-csi
labels:
app.kubernetes.io/name: democratic-csi
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-synology-democratic-csi-node-sa
namespace: democratic-csi
labels:
app.kubernetes.io/name: democratic-csi
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-synology-democratic-csi-controller-cr
labels:
app.kubernetes.io/name: democratic-csi
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- list
- create
- apiGroups:
- ''
resources:
- persistentvolumes
verbs:
- create
- delete
- get
- list
- watch
- update
- patch
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- list
- apiGroups:
- ''
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- persistentvolumeclaims
verbs:
- get
- list
- watch
- update
- patch
- apiGroups:
- ''
resources:
- persistentvolumeclaims/status
verbs:
- get
- list
- watch
- update
- patch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- volumeattachments
verbs:
- get
- list
- watch
- update
- patch
- apiGroups:
- storage.k8s.io
resources:
- volumeattachments/status
verbs:
- patch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- get
- list
- watch
- apiGroups:
- csi.storage.k8s.io
resources:
- csidrivers
verbs:
- get
- list
- watch
- update
- create
- apiGroups:
- ''
resources:
- events
verbs:
- list
- watch
- create
- update
- patch
- apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshotclasses
verbs:
- get
- list
- watch
- apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshots/status
verbs:
- create
- get
- list
- watch
- update
- patch
- delete
- apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshotcontents
verbs:
- create
- get
- list
- watch
- update
- patch
- delete
- apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshotcontents/status
verbs:
- create
- get
- list
- watch
- update
- patch
- delete
- apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshots
verbs:
- create
- get
- list
- watch
- update
- patch
- delete
- apiGroups:
- storage.k8s.io
resources:
- csinodes
verbs:
- get
- list
- watch
- apiGroups:
- csi.storage.k8s.io
resources:
- csinodeinfos
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- watch
- list
- delete
- update
- create
- apiGroups:
- storage.k8s.io
resources:
- csistoragecapacities
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ''
resources:
- pods
verbs:
- get
- apiGroups:
- apps
resources:
- daemonsets
- deployments
- replicasets
- statefulsets
verbs:
- get
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-synology-democratic-csi-node-cr
labels:
app.kubernetes.io/name: democratic-csi
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- list
- create
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- list
- watch
- update
- apiGroups:
- ''
resources:
- persistentvolumes
verbs:
- get
- list
- watch
- update
- apiGroups:
- storage.k8s.io
resources:
- volumeattachments
verbs:
- get
- list
- watch
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-synology-democratic-csi-controller-rb
labels:
app.kubernetes.io/name: democratic-csi
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: csi-synology-democratic-csi-controller-cr
subjects:
- kind: ServiceAccount
name: csi-synology-democratic-csi-controller-sa
namespace: democratic-csi
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-synology-democratic-csi-node-rb
labels:
app.kubernetes.io/name: democratic-csi
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: csi-synology-democratic-csi-node-cr
subjects:
- kind: ServiceAccount
name: csi-synology-democratic-csi-node-sa
namespace: democratic-csi

View File

@@ -0,0 +1,73 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: csi-synology-democratic-csi-driver-config
namespace: democratic-csi
labels: &labels
app.kubernetes.io/name: synology-iscsi-driver-config
app.kubernetes.io/component: democratic-csi
app.kubernetes.io/part-of: democratic-csi
spec:
encryptedData:
synology.password: AgC6Ai4YXYUZZ0ve8MwzeWFb5QzLbCunHOhjela/TGCzPr48evXbj6wKKVIailXS2cpD948wQ9tEX5bK3ojlMIuuzjbux0ATpTuSN81JQPbvArINp9kYu/QK2Eg46tEk6f5W1VFVC2yYQySC9+7NLJRg8qk8gGUGUMt11mRcsyJ6iBnzEt+5xwK+adQB0/pHJPGGKKcOJY9ZUCdl+Q930ZvnSvrdZNcFKH1meFww7ujQ0NBV8ABpcJwEjJhfFi3tMBKpIPrYGsSVEmHYciwK2YLyeJ/Ao7GBIBKX5lIQl0aTi40oIsc3BV2ZTmM1a2ZuuQWg33+9/r3FaU6ZdYL84B9S+W6IG893yFH+22fcArxCzjVnb8oftzrl2J/M3UZhtL4vYakHjEVMqCm2hzHjGCAadXD1cs6xiqcl4mA40KbaEojxodZJyzlNBbTi4ZN4cIaIFO8FNYnewSXtYZBIUzgdNe65k9orpmaV+qpK4Q8Cd3uZg4RQwiygBPQE9BGSJ7cBc/dCqxevuZB1F1yOetpPlQgyIN6gixt6xzefPp0VWY1I1TI3kjLSRiRGWUK1NIL4J3TIdcBsuO8OXWh0D2c+n4/dIPX9peCN8COKXMwjBm9AHDZ1ImlnVZrAxzYCTPxtGRtJVp/4pW6aDWXCA7UWPdKroipw9FUAK64knqMoV7QS7c6Kw7cz2ajvAV84O/jNkRc7L20J35z30rSncH7l1/JV0XPOZh0XWE5068TQKQ==
template:
metadata:
name: csi-synology-democratic-csi-driver-config
namespace: democratic-csi
data:
driver-config-file.yaml: |
driver: synology-iscsi
httpConnection:
protocol: https
host: storage0.pyrocufflink.blue
port: 5001
username: democratic-csi
password: {{ index . "synology.password" }}
allowInsecure: true
session: democratic-csi
serialize: true
iscsi:
targetPortal: '[fd68:c2d2:500e:3ea3:8d42:e33e:264b:7c30]:3260'
baseiqn: iqn.2000-01.com.synology:csi.
lunTemplate:
type: BLUN
targetTemplate:
auth_type: 2 # 0: None; 1: CHAP; 2: Mutual CHAP
max_sessions: 0 # 0: Unlimited
chap: true
mutual_chap: true
lunSnapshotTemplate:
is_app_consistent: true
is_locked: true
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: synology-iscsi-provisioner
namespace: democratic-csi
spec:
encryptedData:
targetTemplate: AgDWYcaVFVlqHO1XCH0d0Okz2RHt1R1pc2ygTtP4ZYuc44IdfERzgGh9CWNbjY+Qf5K7kF4TOIwgRs1MLumaUg637VO7SYCl1kwWV6pZ/g4bLX1FGTl+XFIAH53EKxDD5nC9fl3VG46IA+dYBPoWFb0UYoI09eWHUg7vTRk1/0MTu19UPkc6VafhFXTfVNiUykF+264Ck3I9i9hMk3Buf9+E4qLHeyyfpMob7IRpdkz+ONYYrxHOGrwDgqFwcyiyliIYWjmOh/FV6kolffeNgSkXpWNNrLQOkkSOwUF6DalKiZd16nzLrvzKFWuDcdcRqxBBKaMUF/JK4BAkfi+MNRTaceCmoSkS21gVLbATb0L3Z7JaifdqRInPNMbkFYs+wILkozyJX0JANg4kuMBCW8GfPEMj9ck21dyeR+ucXcIc67GYS7L92d/ITZd2SWT6a6LRT1vBvroE8ybVf+3oOPUOtaSiZsZpNan2DO4kk/1ZD6clBvn5Cz0BqbVQxwwPSuGkvFXNpDX+xliN+QkohnWDQKi4cMvUqVUG7MyfbaCiGyXcH7enYxccBvIVVy6rXWXDtkzP4B30KeO7rfz0eDn7f+zYZPpwFE6TIorCNe+5zNC0uDwMKf7Csz3x78ZxXtYpdPpFpnboP5zWXhKZY4EfgiyV1HDoSAkzfC4zQV26DnH1nfN9OjYwNUdc75tw7VYuSWS8cEH71E9DdvcJv1XL0f7D5uV5RlGQP3sonWhFi73dLuqCNNwHtEOIV3XxJvN/gaoDRvQQMTosWK5pOs3CpiBGq+EYoM5KWZZhp29axSQ3NRefGoLwOsbeEg==
template:
metadata:
name: synology-iscsi-provisioner
namespace: democratic-csi
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: synology-iscsi-chap
namespace: democratic-csi
spec:
encryptedData:
node-db.node.session.auth.password: AgC2kAta4SiM1EcDAVdenwtKLGTrDJte1qSK3W8WO80zKycj54KBlWMvCLzala70/e1e26iRUWotgwUrrPQpYoAspR9kHIaGFccZvVh+JWJY1OIfCryvumN8jGk0ynN5IMo/ZTFTEE76XrEckvnRrZIcqb9K1RlpS5jRPA8vXDGBofXG4in/scZ08SNIsrELfcKvIsSZsYQF6aY3cQ+rcqePJBBG+5hDsu6qsBoib1tsfe2DoCZ2bYRu5BJFD6CneUTCcVvoBBYmMX+nS2Cvfz8OK+5H42q0W3Gzg6VJYrTsQ1PN5LLg5+xk6ENovw/PtohiwvP6RV9I2r6ev+RrrlWUPlC7XOD0qm2nUeG8D8J+9aP2prQ9zFgpuIZ//gIKKvMSsSJRZvSkbjJ6nrC9ViVmaC65dx89bYjr5vB+7Dm5ytQ6ars5rEpEexiPgGiobnCosn0wwMN1WP97VcpW8udqsticLSBFtANUUtnW2yZMRuVh7a2MutnlUXgODG+ZgkNaZE0cELfOynkhaf10aByuXIN1NX9VIdzfIbdZaIKvr0xKQSdKnbszoHQNOZieE4mXZ/w0BiiDilvyqtLP+3Maa+mFIjoo17zlf8PJD1dEUFckNUBJifVIV6vOHCKXO565IxyE9nNokXR7xkd6XEd3qefoV+9IpOt0NEkHehbSZzUy3JpyNbicdF1WyXb/uY9riacLptssZ4LF8eQ=
node-db.node.session.auth.password_in: AgCMw56LgARt7/dn2WhebQIv+uNLkGUxBkgbYOw+Po9eqgk622F7Y6pVWRAwicdPBM2cjnSrGjPO7nzgXhD0GIbW44WyvwM2w+n5klWmSC9prK+Orup3TMty2hKnSMLOR3rIfpUiRJ0NFvGkTvPzQ/ZDX3O4c88oG6UGVG3B4bQu6Kn5GJ5is2XAnh2dipBx18kLpEmL3hMMqpAy2x0qyf8vJxy39ZvAntk69ziliumqpxePecvbLPkkh2A1jwZR0guBDvBiksvoOyh+P7hTxj3ioVC3HZ4+i52tuvfqugo+INqKJfr15k6fA2cTFEHJ8kwkPtFQCA3bbvRAbcjl0malOIqBFBFwbJYvcauXZGP9m1uoMRni3FHn+1YkBdsvSnw66aYHc4gjN8VrLSziYH72TH8XJ6jEikeK5+nCN2+uhC+AetEUFcLCNM7sKXlS7pzIOQiZ3oB7FcQrsSUkt1Zjax5F6i0reRTdZd/qPLvt65NFwjG/a3yMLf141aHSRog+HGugm4/1A2USGmURmwGSVwAjfrK7b/dj3tMOG8BI4vVJ0UCyw65v0R9h4VEORyr4sXTgNx2+5HewEskDt3LyMzmw4Y6Sw2ftZmQxNEsSy+8BEF4zZj6foIAGuLShjI+4BR9aGnX4maL7IjR6cmj6qwinybfFYAMSx23Icw/aXUBgJ6Slgnd6l96g2RWcNGDxWM8Wq6p2W9VHvDY=
node-db.node.session.auth.username: AgBXplj0SXTinhqbpu5SvYJheYt9G4YNGE99UIi2F0n5QrCI7zvuuSQvA8EKCS5LQni+Og/wToJs1wLeUX4OstlQd3OvkpFDD+jrPVUDv04tlSeNJmaMrQe1pNk04GiLJKeDRRkG+9eTYSIKMsDLroofjHgiRH5wsBh0ncWDW1v5cNlpgq3EzgEQiKnL5zIPIXlHKkadZ9cvebtGoW7mGEnPI/QSnurhVfzEWCXCilxvyNDnBNIKK1rf79eDg1+ZecA0bvE2d7d1cfLhKG+Hd7JcRI0fxii+u1KTCBqbl6goCiCUi5KBfCMP45m7DTyMMPNSfsx9WVjR3ueEXucRGIfhTrV5Zo5Y+WY2c4MoW9XDw0JG/zzHJAOzd9CYk2b6EgEhJLXyHdhNp3JfN4lBpbM6r8RIoQTRImLH0BxytIXQ8kzMtJdkYt2rjV4ZR/fQB9UzGYBtLgWTrNbA+PgEBDB5nlVzbCXZ6uxfRadc2jv2fjGvzidIsfFOicrxWTQtnwSqbs8XAOydHU3Kk7Hrv8k22uaFETcz/tZI619wQL63SmA2igM0fBZcuc64Lx6wmzQBFA9CNKVuPHKFdPXM3s4GzrLqKMskAmDpYvtSlvSqsE2nv6sObS8Iyzm4o69V9+ma2LGD5bl6i7L2wiLlgvc8Ef+YviVzn8lVYqdKCce6F/5TQKNzvbdnJ0bJn6Q01CVHlYqbnyworsmf
node-db.node.session.auth.username_in: AgCT8KR/4GNoDa/TIv6YykoDaGKIP5yXkC/krWFYU5lBMSc3DreECmmow88/5xB4v+5dVt9eE7bJkgPqsUVNXlzDXpSSB/TS2iM/3sAd4ZHzZroTLIf+0QnDC2ZrybokcdmCjkFUgnDzJ9Vs+GqjUjL97LHPbTMc8ONwgiy6YmKLpc11V+JxWqSsKwGPM9ObdmI9rh/IZa19sksh86va3oqjDfElXEwKFkztV1f/NHCsWsuuov/Ku6Lisk5X0JIMKPTUUza0q3tZlJ/NotxNydHef+PA9R648XURQs/xp/hzrdttuMzxo7gT0YEsr8y9h7xlTPlR8we7/igjUMmS+ORRafg5m6PpHWanDxtHafhw9wfmvh0wEgXjC8Sz6Ub3Q9idBlHock60h+uyfsdlP3A2qMjdUXr0dFNBwXcGTaM/n5T18gO05/JSUv7CEdiuSlMnPjYzChAHDSCzxblk8CRDTcSjsSMvVBPjr5L+KQqGj3f6mm3lQnPwzXprS0//SsehRReAvbX5eGfd8Bu8nhRRtgEXvLqQdC7WxbWe0QjwB5ZRHt/4v5N1K8TXo8h6iZ6fcEtTfloMH07TitdwdYQm4uG7dfA7PA9KuqDs+R+phGFGWuzq1cMtp+hOJ6XpFgGyVhYAL/lyl3DddT1o9o7UhDCi4w7nSyxVamwyaGuUsF3lX2TyGVPjdGN1D5dlhRJ8YSPMDWOrZw==
template:
metadata:
name: synology-iscsi-chap
namespace: democratic-csi

View File

@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: synology-iscsi
allowVolumeExpansion: true
provisioner: org.democratic-csi.iscsi-synology
parameters:
fsType: xfs
csi.storage.k8s.io/provisioner-secret-name: synology-iscsi-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: democratic-csi
csi.storage.k8s.io/node-stage-secret-name: synology-iscsi-chap
csi.storage.k8s.io/node-stage-secret-namespace: democratic-csi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
name: synology-iscsi
driver: org.democratic-csi.iscsi-synology
deletionPolicy: Delete
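With this StorageClass in place, provisioning is driven entirely by the claim; a minimal example PVC (the name, namespace, and size here are illustrative, not from the repository):

```yaml
# Example only: a claim that would be provisioned as an iSCSI LUN
# on the Synology via the synology-iscsi StorageClass above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data        # illustrative name
  namespace: default        # illustrative namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: synology-iscsi
  resources:
    requests:
      storage: 10Gi
```

Because `allowVolumeExpansion: true` is set, the `storage` request can later be raised in place and the external-resizer sidecar will grow the LUN.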

View File

@@ -1,20 +1,3 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dynk8s-provisioner-pvc
namespace: dynk8s
labels:
app.kubernetes.io/name: dynk8s-provisioner-pvc
app.kubernetes.io/instance: dynk8s-provisioner
app.kubernetes.io/component: storage
app.kubernetes.io/part-of: dynk8s-provisioner
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: StatefulSet
@@ -70,8 +53,7 @@ spec:
serviceAccountName: dynk8s-provisioner
volumes:
- name: dynk8s-provisioner
persistentVolumeClaim:
claimName: dynk8s-provisioner-pvc
emptyDir: {}
---
apiVersion: v1

View File

@@ -32,3 +32,5 @@ MAIL_PORT=25
MAIL_ENCRYPTION=null
MAIL_FROM=firefly-iii@pyrocufflink.net
SEND_ERROR_MESSAGE=false
ALLOW_WEBHOOKS=true

View File

@@ -66,6 +66,7 @@ spec:
containers:
- name: firefly-iii
image: docker.io/fireflyiii/core:version-6.0.19
imagePullPolicy: IfNotPresent
envFrom:
- configMapRef:
name: firefly-iii
@@ -127,6 +128,7 @@ spec:
spec:
containers:
- image: docker.io/library/busybox
imagePullPolicy: IfNotPresent
name: wget
command:
- wget

View File

@@ -16,13 +16,12 @@ resources:
- importer.yaml
- importer-ingress.yaml
- ../dch-root-ca
- network-policy.yaml
configMapGenerator:
- name: firefly-iii
envs:
- firefly-iii.env
options:
disableNameSuffixHash: true
- name: firefly-iii-importer
envs:
- firefly-iii-importer.env
@@ -36,6 +35,16 @@ patches:
spec:
template:
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: firefly-iii
volumeMounts:
@@ -55,4 +64,4 @@ patches:
defaultMode: 0640
images:
- name: docker.io/fireflyiii/core
newTag: version-6.2.9
newTag: version-6.4.9

View File

@@ -0,0 +1,61 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: firefly-iii
labels:
app.kubernetes.io/name: firefly-iii
app.kubernetes.io/component: firefly-iii
spec:
egress:
# Allow access to other components of the Firefly III ecosystem
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: firefly-iii
# Allow access to the Kubernetes cluster DNS
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
# Allow access to the PostgreSQL database server
- to:
- ipBlock:
cidr: 172.30.0.0/26
ports:
- port: 5432
protocol: TCP
# Allow access to SMTP on mail.pyrocufflink.blue
- to:
- ipBlock:
cidr: 172.30.0.12/32
ports:
- port: 25
# Allow access to dch-webhooks
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: default
podSelector:
matchLabels:
app.kubernetes.io/name: dch-webhooks
# Allow access to ntfy
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: ntfy
podSelector:
matchLabels:
app.kubernetes.io/name: ntfy
podSelector:
matchLabels:
app.kubernetes.io/component: firefly-iii
policyTypes:
- Egress

View File

@@ -0,0 +1,87 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluent-bit
labels: &labels
app.kubernetes.io/name: fluent-bit
app.kubernetes.io/component: fluent-bit
spec:
selector:
matchLabels: *labels
template:
metadata:
labels: *labels
spec:
containers:
- name: fluent-bit
image: cr.fluentbit.io/fluent/fluent-bit
imagePullPolicy: IfNotPresent
args:
- -c
- /etc/fluent-bit/fluent-bit.yml
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
add:
- CAP_DAC_READ_SEARCH
volumeMounts:
- mountPath: /etc/fluent-bit
name: fluent-bit-config
readOnly: true
- mountPath: /etc/machine-id
name: machine-id
readOnly: true
- mountPath: /etc/pki/ca-trust/source/anchors
name: dch-ca
readOnly: true
- mountPath: /run/log
name: run-log
readOnly: true
- mountPath: /var/lib/fluent-bit
name: fluent-bit-data
- mountPath: /var/log
name: var-log
readOnly: true
dnsPolicy: ClusterFirstWithHostNet
securityContext:
seLinuxOptions:
type: spc_t
serviceAccountName: fluent-bit
tolerations:
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists
volumes:
- name: dch-ca
configMap:
name: dch-root-ca
items:
- key: dch-root-ca.crt
path: dch-root-ca-r2.crt
- name: fluent-bit-config
configMap:
name: fluent-bit
- name: fluent-bit-data
hostPath:
path: /var/lib/fluent-bit
type: DirectoryOrCreate
- name: machine-id
hostPath:
path: /etc/machine-id
type: File
- name: run-log
hostPath:
path: /run/log
type: Directory
- name: var-log
hostPath:
path: /var/log
type: Directory
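The `fluent-bit` ConfigMap mounted at `/etc/fluent-bit` is not defined in this directory; it is managed externally. A sketch of what its `fluent-bit.yml` might contain, given the mounts above; the systemd input, the database path, and the output host are all assumptions, not the real pipeline:

```yaml
# Hypothetical fluent-bit.yml; the actual pipeline lives in the
# externally managed ConfigMap. Paths match the DaemonSet's volumes.
service:
  storage.path: /var/lib/fluent-bit    # the writable hostPath volume
pipeline:
  inputs:
    - name: systemd
      path: /run/log/journal           # mounted read-only from the host
      db: /var/lib/fluent-bit/systemd.sqlite
      tag: journal.*
  outputs:
    - name: http
      match: '*'
      host: logs.pyrocufflink.blue     # assumed, from the Grafana datasource
      port: 443
      tls: on
```

Keeping this file out of the kustomization is what lets the same pipeline definition be shared with Fluent Bit instances running outside Kubernetes.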

View File

@@ -0,0 +1,25 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: fluent-bit
labels:
- pairs:
app.kubernetes.io/instance: fluent-bit
includeTemplates: false
includeSelectors: true
- pairs:
app.kubernetes.io/part-of: fluent-bit
includeTemplates: true
includeSelectors: false
resources:
- namespace.yaml
- rbac.yaml
- fluent-bit.yaml
#- network-policy.yaml
- ../dch-root-ca
images:
- name: cr.fluentbit.io/fluent/fluent-bit
newTag: 3.2.8

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: fluent-bit
labels:
app.kubernetes.io/name: fluent-bit

fluent-bit/rbac.yaml (new file, 42 lines)

@@ -0,0 +1,42 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  labels:
    app.kubernetes.io/name: fluent-bit
    app.kubernetes.io/component: fluent-bit
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit
  labels:
    app.kubernetes.io/name: fluent-bit
    app.kubernetes.io/component: fluent-bit
rules:
- apiGroups:
  - ''
  resources:
  - namespaces
  - pods
  - nodes
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: fluent-bit


@@ -0,0 +1,14 @@
apiVersion: 1
datasources:
- name: Victoria Logs
  type: victoriametrics-logs-datasource
  access: proxy
  url: https://logs.pyrocufflink.blue
  jsonData:
    tlsAuth: true
    tlsAuthWithCACert: true
  secureJsonData:
    tlsCACert: $__file{/run/dch-ca/dch-root-ca.crt}
    tlsClientCert: $__file{/run/secrets/du5t1n.me/loki/tls.crt}
    tlsClientKey: $__file{/run/secrets/du5t1n.me/loki/tls.key}


@@ -594,42 +594,6 @@ global_api_key = -1
# global limit on number of logged in users.
global_session = -1
#################################### Alerting ############################
[alerting]
# Disable alerting engine & UI features
enabled = true
# Makes it possible to turn off alert rule execution but alerting UI is visible
execute_alerts = true
# Default setting for new alert rules. Defaults to categorize error and timeouts as alerting. (alerting, keep_state)
error_or_timeout = alerting
# Default setting for how Grafana handles nodata or null values in alerting. (alerting, no_data, keep_state, ok)
nodata_or_nullvalues = no_data
# Alert notifications can include images, but rendering many images at the same time can overload the server
# This limit will protect the server from render overloading and make sure notifications are sent out quickly
concurrent_render_limit = 5
# Default setting for alert calculation timeout. Default value is 30
evaluation_timeout_seconds = 30
# Default setting for alert notification timeout. Default value is 30
notification_timeout_seconds = 30
# Default setting for max attempts to sending alert notifications. Default value is 3
max_attempts = 3
# Makes it possible to enforce a minimal interval between evaluations, to reduce load on the backend
min_interval_seconds = 1
# Configures for how long alert annotations are stored. Default is 0, which keeps them forever.
# This setting should be expressed as an duration. Ex 6h (hours), 10d (days), 2w (weeks), 1M (month).
max_annotation_age =
# Configures max number of alert annotations that Grafana stores. Default value is 0, which keeps all alert annotations.
max_annotations_to_keep =
#################################### Annotations #########################
[annotations.dashboard]


@@ -60,6 +60,7 @@ spec:
port: http
path: /api/health
periodSeconds: 60
timeoutSeconds: 5
startupProbe:
<<: *probe
periodSeconds: 1
@@ -76,6 +77,8 @@ spec:
- mountPath: /etc/grafana/provisioning/datasources
name: datasources
readOnly: true
- mountPath: /tmp
name: tmp
- mountPath: /run/secrets/grafana
name: secrets
readOnly: true
@@ -96,6 +99,9 @@ spec:
- name: grafana
persistentVolumeClaim:
claimName: grafana
- name: tmp
emptyDir:
medium: Memory
- name: secrets
secret:
secretName: grafana


@@ -28,6 +28,7 @@ configMapGenerator:
- name: datasources
files:
- datasources/loki.yml
- datasources/victoria-logs.yml
patches:
- patch: |-
@@ -54,3 +55,7 @@ patches:
- name: loki-client-cert
secret:
secretName: loki-client-cert
images:
- name: docker.io/grafana/grafana
newTag: 11.5.5

headlamp/headlamp.env

@@ -0,0 +1,3 @@
HEADLAMP_CONFIG_OIDC_CLIENT_ID=kubernetes
HEADLAMP_CONFIG_OIDC_USE_PKCE=true
HEADLAMP_CONFIG_OIDC_IDP_ISSUER_URL=https://auth.pyrocufflink.blue

headlamp/ingress.yaml

@@ -0,0 +1,23 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: headlamp
  labels:
    app.kubernetes.io/name: headlamp
    app.kubernetes.io/component: headlamp
    app.kubernetes.io/part-of: headlamp
spec:
  tls:
  - hosts:
    - headlamp.pyrocufflink.blue
  rules:
  - host: headlamp.pyrocufflink.blue
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: headlamp
            port:
              number: 80


@@ -0,0 +1,44 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: headlamp
labels:
- pairs:
    app.kubernetes.io/instance: headlamp
    app.kubernetes.io/part-of: headlamp
resources:
- namespace.yaml
- https://raw.githubusercontent.com/kubernetes-sigs/headlamp/refs/tags/v0.38.0/kubernetes-headlamp.yaml
- ingress.yaml
configMapGenerator:
- name: headlamp-env
  envs:
  - headlamp.env
  options:
    labels:
      app.kubernetes.io/name: headlamp-env
      app.kubernetes.io/component: headlamp
patches:
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: headlamp
      namespace: kube-system
    spec:
      template:
        spec:
          containers:
          - name: headlamp
            envFrom:
            - configMapRef:
                name: headlamp-env
                optional: true
            securityContext:
              runAsNonRoot: true
              runAsUser: 100
              runAsGroup: 101

headlamp/namespace.yaml

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: headlamp
  labels:
    app.kubernetes.io/name: headlamp


@@ -91,8 +91,8 @@ notify:
- platform: group
name: mobile_apps_group
services:
- service: mobile_app_pixel_8
- service: mobile_app_pixel_6a_tab_jan_2024
- service: mobile_app_pixel_8a
- service: mobile_app_pixel_9a
- name: ntfy
platform: rest
method: POST_JSON


@@ -52,6 +52,16 @@ spec:
app.kubernetes.io/name: home-assistant
app.kubernetes.io/part-of: home-assistant
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
containers:
- name: home-assistant
image: ghcr.io/home-assistant/home-assistant:2023.10.3
@@ -74,15 +84,11 @@ spec:
failureThreshold: 300
periodSeconds: 3
initialDelaySeconds: 3
securityContext:
runAsUser: 300
runAsGroup: 300
volumeMounts:
- name: home-assistant-data
mountPath: /config
subPath: data
securityContext:
fsGroup: 300
hostUsers: false
volumes:
- name: home-assistant-data
persistentVolumeClaim:


@@ -18,6 +18,7 @@ resources:
- zwavejs2mqtt.yaml
- piper.yaml
- whisper.yaml
- mqtt2vl.yaml
- ingress.yaml
- ../dch-root-ca
@@ -44,6 +45,10 @@ configMapGenerator:
files:
- mosquitto.conf
- name: mqtt2vl
files:
- mqtt2vl.toml
- name: zigbee2mqtt
envs:
- zigbee2mqtt.env
@@ -116,16 +121,45 @@ patches:
- name: dch-root-ca
configMap:
name: dch-root-ca
- patch: |-
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mqtt2vl
spec:
template:
spec:
containers:
- name: mqtt2vl
env:
- name: SSL_CERT_FILE
value: /run/dch-ca/dch-root-ca.crt
volumeMounts:
- mountPath: /run/dch-ca/
name: dch-root-ca
readOnly: true
- mountPath: /run/secrets/du51tn.xyz/mqtt2vl
name: secrets
readOnly: true
volumes:
- name: dch-root-ca
configMap:
name: dch-root-ca
- name: secrets
secret:
secretName: mqtt2vl
defaultMode: 0640
images:
- name: ghcr.io/home-assistant/home-assistant
newTag: 2025.2.5
newTag: 2025.11.3
- name: docker.io/rhasspy/wyoming-whisper
newTag: 2.4.0
newTag: 3.0.2
- name: docker.io/rhasspy/wyoming-piper
newTag: 1.5.0
- name: docker.io/koenkk/zigbee2mqtt
newTag: 2.1.1
- name: docker.io/zwavejs/zwave-js-ui
newTag: 9.30.1
newTag: 2.1.2
- name: ghcr.io/koenkk/zigbee2mqtt
newTag: 2.6.3
- name: ghcr.io/zwave-js/zwave-js-ui
newTag: 11.8.1
- name: docker.io/library/eclipse-mosquitto
newTag: 2.0.20
newTag: 2.0.22


@@ -0,0 +1,11 @@
[mqtt]
url = "mqtts://mqtt.pyrocufflink.blue"
username = "mqtt2vl"
password_file = "/run/secrets/du51tn.xyz/mqtt2vl/mqtt.password"
topics = [
    "poolsensor/debug",
    "garden1/debug",
]

[http]
url = "https://logs.pyrocufflink.blue/insert/jsonline?_stream_fields=topic"
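The `[http]` section above points at VictoriaLogs' JSON-lines ingestion endpoint, with `topic` declared as a stream field. A minimal sketch of the record shape such a bridge would send per MQTT message (the `_msg`/`topic` field names here are an assumption based on the URL above, not taken from mqtt2vl's source):

```python
import json

def to_jsonline(topic: str, payload: str) -> str:
    """Encode one MQTT message as a VictoriaLogs JSON-line record.

    `_msg` carries the log body; `topic` matches the
    `_stream_fields=topic` query parameter in the [http] url above.
    (Field names are illustrative assumptions.)
    """
    return json.dumps({"_msg": payload, "topic": topic})

line = to_jsonline("poolsensor/debug", "temp=27.4")
```

Each message then becomes one newline-delimited JSON object POSTed to the `/insert/jsonline` URL.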


@@ -0,0 +1,43 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: mqtt2vl
    app.kubernetes.io/name: mqtt2vl
    app.kubernetes.io/part-of: home-assistant
  name: mqtt2vl
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: mqtt2vl
      app.kubernetes.io/name: mqtt2vl
  template:
    metadata:
      labels:
        app.kubernetes.io/component: mqtt2vl
        app.kubernetes.io/name: mqtt2vl
        app.kubernetes.io/part-of: home-assistant
    spec:
      containers:
      - name: mqtt2vl
        image: git.pyrocufflink.net/containerimages/mqtt2vl
        imagePullPolicy: Always
        args:
        - /etc/mqtt2vl/mqtt2vl.toml
        env:
        - name: RUST_LOG
          value: info,mqtt2vl=debug
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /etc/mqtt2vl
          name: config
          readOnly: true
      securityContext:
        runAsUser: 29734
        runAsGroup: 29734
        fsGroup: 29734
      volumes:
      - name: config
        configMap:
          name: mqtt2vl


@@ -36,6 +36,16 @@ spec:
app.kubernetes.io/name: piper
app.kubernetes.io/part-of: home-assistant
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: piper
image: docker.io/rhasspy/wyoming-piper:1.3.2


@@ -7,7 +7,7 @@ metadata:
namespace: home-assistant
spec:
encryptedData:
passwd: AgB2pNlStPqi5gg6OMCcEg9RIyZms+w/MP20viWdi18vlvLHUXGvNASY2YJJ7r9S+OwI+cd9x+v4Bd3uVWwaivoPOUn+fFlKuY9rljal2+WMc99NFbZNXzZ798FMtd+z1jsRG/be0P+1TnpZXp/yxZIfGw+SHuiPtn713X+T9aqgY5HHMckwgvMweQuPLggDxcu/mwlHA4mZLh0up3PekvIRhsvZT0QaFOgBqtSEb0gJauiZj+QAcKHeQR9SeRNr8OI9O/n+coOgoXFxKVJZN5ftja5bmMXhA4zo6i624BMtYQLPBiCkaHt/Xg9/m38JDuyWaMWRv4QyPPkEp1RvxheMIQMW8CwDfaH9rtjnRy9Apbt0CsRFSabFcVPxIEBqwp6twWm8GbCcosLvIPpjYBZzW1R/2VdvMF0D57+UvzX9/yMNiA+d1sl6v7ffsrqirWUPDeCLOYP6EA5k3OhUzJBeZAG5SC+8v0IfW9Q32yClwYyeyVKn4osaHlPbgH61jocKhfQq0a7JINauhjW3fHYB9GM9WitxSMLNEEhVOpRsVCb+vMGMEtPsZiLzjEe8FXsi09kgREQ//lGEFWD8mLXZvusTsqFlxFF1YWMpC2tqMOxC4Gui9U7bQo1qklejFf0xkFnlgJz2mTK8V4mk1ZCXMmtk44MNojnj4sweFLaSECxi7ISy8t4zCwbIJiIbumMvYkMleBMhV1Bd1LggYFYQ3qWzl6dhShPRgnGIJkf96yK/2rq7XW3NstIaN7LnjSNzlY89EQ2ctUHrRjFE3N4Y42Y/8Hpd003IzX8SrnbitMBphxTzzHxbTGs9yGlGQUtvttzenQblUgylystYELR/647nDGnLM/D1Xfpo2BvckyN0s1tNOzMgqOPAYbrh4K/nIfzEK0z2NX8j5crlhen+DVARK6AmTaprZWMksN20b5Ict/FiZddA3fBYB8Nj55S5Mal4yQxIbVITPhBBPB+58Qw+VeaMb7Jaqbt2KNHJDvgLgSVYnXVqlyCgl9ONv6iiR9sAqAvY6wZ5Rm6xeIJGExfshoRRZlyFCPZneNMm/kTTeINMNL5/kUXMCpCmsaD0iV9VTG7kg0QbFa2zXGsPtRwFffv09E2gtDTtYd1X7ChG5Sv9ek/owBUIpNWsHjw3Cxb8IegBBIQAxR4T6f2UP1trja+zS1ARdKfer8HTgxKLsk/baMFckdyqa45TUqz+DSgnG3lIeOq3HcLqJ6OWC9GHs661/3SGxRJL0QVY9iwQjZqg9bYG6ULvdCbKe85HMyuz/8yPh+Qrhth8EsW5fydIcwO9OHLKsCYWD0Nv3cUDGQmT2FLFph6xbC6XGLi0ZvqU3zyqRLbmbaRglU+jjBh6kXSvUjWqYFwS7QuyLdGtFnX0qB5rQOLjAGlDfaTHUooUcRYKw9pNuHqaoTLC+dqwXiGvnxEbwGvWC/Y1f0y4qCM5mMdfd1V8vjJ7GZs2+3+8B3x7Rbwh3dn66nkknoPLFSkx6XCmNV2dm4beU111IahBO0I2YUKCTyczVPusz0AMQbvPJoPXFLn4IRTKTN1vWNigMFP9hZk+DI2yeX0RWq8Rb9rBF949PY/v2VoCe7LW02zzkjd7OT/Ws3p5PQNGtOcutsMJcq78RhET311s7cw+Wv5ivd1DGIHzR3mqqeUvS2LgS4dnYyqIaWvqXkV1Hh8cv96T/bDadFc6hIesrX5/wALzehNlJDeeKlxk/iYemypZ/ACpFr3385LQ6SGEsO0XKnb1hAxV09H086EPCsTniEaitlmZdl0CWqTq21eMERCvOzfGlR0WTc4NuFJU00n9Pr3z7UvRDiHzqO4fcH7BCeokvNek554bepU1iE1sKyZ/QoXSPjQ+W+K1YwyVcPaAoSF5NhyDteKTltD31qejNWWFvBqgiwKJWVP3dy+lHkHKyKdjy9y9sbbcs4ljVqPXllsju7RnEF02VbNRKE2li6drjNsJ/9+/QIo3vE2f
nObOQ9441ftwA0IEFLM2/um8itM0pZGZCh6zHAYRychgx3RF1cJXF+DOnnSDmef3A84FNyqi5AVbwLRXQ7nF+U9CGGTYelcZ6qoAMj5hdn0t00UWEuTa9r6G2B7R4lUNEQQRnhR+iFHZW7Xg7zKfI/k6x4zqR9H9Gjwqf0vkwc7Zu5oG2vvjK2L+2zo=
passwd: AgAtHK4uabM+TDpKukHpZ7c7RnlmAMBoJQnS1cozg0gFEXgmAdDJKWHymifmgbeJrYdVPNYTVn6lCjsllx+zzHYNrQrFShAhrDZwzel2ElYtUjp7u2M1tnqgmaGk3E1pSBdH0pwzMcB7yPDBXI9hadeurL1+dk5OmqAoQfpjlYmkT7wqAp21qH3Y7fTehpXZZ3XojrM60fwyaCKBWwQCjablR/20BuoJMmXXVN35L8Q2OH4HWDY8WRteTZaa+zL/lozrR/BW2/T45X5o6oCmukk30OSGbb16aeIIasB8tBKjYm7L/EtWY42QLPyV6LZDKFtH/1XD737nYzoYYi2GfiYboQJqfYZlP5ElhxMP+azFHr2eD1Ild7yxSNiGMtnbsd9VSCUywtbKqDvNEA4C5FXrvNlfcoq6LCyrKhFFLDPuv9BkN6NlB/eYGsSkVTwEgai3j7FvxFIgEAY1Hzr9Tn9/g+lMoEgHKgjiRBMRkn3HcB3K16U9QN1gSaQpLdlW7uG7o4UoerFQ5p1NW5pyja+kOFPb2YBv/2O1C6izSCt+i99zy3iwJGXEHMEGkA4Ag1Ti/sRxRZj7QsBhYcoxgB1GLhvB9TldQqOExCo/B08/t32EwuSQaHgjWGIeJjrY+g1w0C75ya6me1GQ7K8yIWcHmyo34LyOMkwdH9URgFgldNn0u4CEDyZR9ujhf1UEUxkgG6Pj5SUA7Exn1cIM/H472NU3G/uyCPWGLEeahFAutLkniagGA4t+Rb/IF3ZEqm/arpEGcjlaW9otOEBVJzal5OPpjDgfSbt4rUeHm8z2xLwvV6kYmLZRAH5SHZruS7mk92SjKZKD4BroZ7nJKYDksWmidpDVpmVcfBNHfL/Zs859I623aYVXj4mrU3LWaZ6DkG94hQRPsp+7zHEvx78+prHoIld/Zf8YOqFli0G0Cb5LtZVHIOVT6g81uvYjoodNdX742H0exfHci1UCpU6PIqBBSeBjppoUIoHjjFSUWWm6+bZsRTciaBrkt3YvzBTE3cdCfxfF2Y7A6wY98lVKpOEBGhtXNi+pXq3x5OhQBl2EZ1iBJBP2DN+qv6ymdnZlQ333+E1R3mn+NgrxVyHFSVRe+q63vETfGWmQN48LD73dl0ck6ER/bLxV3vl7UfWf2h1tG2lY3FHKdmMOpXoYJPEd8XDyDkSjf4ZruFyloD9QOMf0jXphrlYOHqhqNHAV86OcA+ggvJXkKuIw0ZZPsj7gux7xsiZ5rDtZUbnRLkss4e+XStBqHc80cnw0kKiVqCpJjZ+9BG6ntPqfACXnAEOL1dcPkicsB8lzfeTfK2JcZ1U+VAQm2rnsT49Er5GaU1wPBWoNJIYedfWpkwkXYRAWPkOdAoWEvFRNrwJx/O9vsMKTMcGCno3keMYSc/gzv/F9ADE7IQ7WFr+RZGKMVnoYyztFLSH0E5qhmP3DpnMZDizZaxwZhEyFhNdaWsxNC4vX3Hw/R1wkx0dZYGpqxECRpEkHuBwSrOTrJMKjSX/jOHdeO+n910Eigb731ivEiDpUH1RtYFK8XiqhxgowZaFCIuSMdMIYzg/PyxGWInRBnzhvr8Z9UNM+gXbo/3i2J9s718AnIbTacFHz+GEW/yAupWsZcyh7mwl+X3EMPMR23moXHOKLljxWeZs4NTpvr+htY9NgGicLFitpu0OHXM+e9V4UfeCTtDD1Kf4/SQDB9n3Qu/T5wv2VHbzxn4fYW3pxFThAA+DPb2UTVUSfbKlRi2hbdlBHQ2FJMQCdIFxkHjSLHDZBtuYGsPux6ekEsKck3AzorHXkBMsWtY0QUO1l4OXPoqnAOSatcxzKThvPIftK/4up/9ncyH2PQ++OlPiDbDkI5k5tWbWc4KAFEkCEmowcx4PcwYGbMMnsITDhNVL+/mf2nbLeQXZgBGKgexbYoj9EBa76YF1yXSv+awx1jXj3y3Ut5axlE5pONJRegmYeP7R5OLSPtb0baUFbHLtX
mKc2ITOihSI9dTTh842l5mEJq9wo6LigQI0sRiF0ERidC5fiXcJly481R8LM3K52C9h7uQg0CnkBAJ8bgPQzDk3sMhWoLZaqBPm6G0HyoYl/uv89ikrquluPA6qRlDJu93k5IwRosM7hGReEIuSHMfV3n8DBMNC/SN0xgEkASK/9lbeURWC596kpcarHLekSpcgcmBAJfKbtI1IGGD2OcLO6qq0rlS65DS66dwbhIyE0VsTeG4VcJbDOujtQVXP6xps1RY6j3hcWu+QlWFlmZa2qx1zMl0eWoh0gs2QC4uBTKQubAt4oT/o/djf8SH3scytMl+qEwXzY/5UOz7gAkj7f5QlPygmJijqh0GvWhaZeftocaIiQmbjPqAznJkGaavbgO6H2QlUy60N9Vu+PXmP86X/tGKWS/TiDm9fSAITFbKIql18DqvpyNXgDSm8M9BKEgBTj7CMeynmFurbjcbv3+SrxQRiX0xaDVmnKyMT3m+vNRvIK0UmSi3856Q==
template:
metadata:
creationTimestamp: null
@@ -32,3 +32,27 @@ spec:
metadata:
name: home-assistant
namespace: home-assistant
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: mqtt2vl
namespace: home-assistant
labels:
app.kubernetes.io/name: mqtt2vl
app.kubernetes.io/component: mqtt2vl
app.kubernetes.io/part-of: home-assistant
spec:
encryptedData:
mqtt.password: AgBOYdOxapXUPTAtiKaHDIrY1yo9IFBP2CtcuLy66jl7kBvhlervt2Xru+AWoapTVcZ3Jj4VgfKwiEJVw+g9Zn6xyklNobCkmT4XREnjSxtVDSDRRVDF/uIOqEWLldKRwXPldjDw5OYzTB8/P1e/ndiDV5InmbIcsvGRsSd+GG9CVy/toK2iQMQfiN+pAGv4DdqI0g7uwaLWxVWdnx3k0i64cdW3ZxmxS1E/686DJu311aKGpXJkTUOyIpPCdWs02lJdt/zMdfHCf+6nZKs/In5KK4+/uEGxP1crtGlrhGI+za/bBfKQcsIr8JU26ARfbWP2W//p+8h4zen4uel+NCRvRrYsJW4AsZGOzX8Ti++x8SQIcaSDTcuk4/Y93XWO8+6zuETc4sJ85jkyEXQPKYUrQQeRcWEdi3RqNlKY2YvzC8GWWmTJ3k2KU9yoqiYrWoqucixKzJg/wPTluKyD053d/j8dbLziJ4KDahPa50gSP1D9v6jQc8wrj8oQCWuNi6O5TssCAhaHe13xXH5XscoGDiezp5+M2rfWOR0xBHx4LRLldI75Qyb12yvbZ1+p+DYD+JnQyc/Yoq7emfzJOPItGY3f+bXFe8PWO0etKY0BLpoI5PlLk0hIqKZOu5VcAwZVU9vbr4cyKoLEsGPxLf8l/VAmULp8Wm4a2Wbm02qcOXJPP3ZAF6nJJSHS+iz/i13nRG7ZyXL4OA77THuLElKGehQ0456S8g5+s7Y6h5hspg==
template:
metadata:
creationTimestamp: null
name: mqtt2vl
namespace: home-assistant
labels:
app.kubernetes.io/name: mqtt2vl
app.kubernetes.io/component: mqtt2vl
app.kubernetes.io/part-of: home-assistant


@@ -36,12 +36,25 @@ spec:
app.kubernetes.io/name: whisper
app.kubernetes.io/part-of: home-assistant
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: whisper
image: docker.io/rhasspy/wyoming-whisper:1.0.0
args:
- --model=base
- --language=en
env:
- name: HF_HOME
value: /data/hf.cache
ports:
- containerPort: 10300
name: wyoming


@@ -55,12 +55,13 @@ spec:
nodeSelector:
node-role.kubernetes.io/zigbee-ctrl: ''
tolerations:
- key: du5t1n.me/machine
value: raspberrypi
effect: NoExecute
- key: node-role.kubernetes.io/zigbee-ctrl
effect: NoSchedule
- key: node-role.kubernetes.io/zwave-ctrl
effect: NoSchedule
containers:
- name: zigbee2mqtt
image: docker.io/koenkk/zigbee2mqtt:1.33.1
image: ghcr.io/koenkk/zigbee2mqtt:1.33.1
envFrom:
- configMapRef:
name: zigbee2mqtt


@@ -57,12 +57,13 @@ spec:
nodeSelector:
node-role.kubernetes.io/zwave-ctrl: ''
tolerations:
- key: du5t1n.me/machine
value: raspberrypi
effect: NoExecute
- key: node-role.kubernetes.io/zigbee-ctrl
effect: NoSchedule
- key: node-role.kubernetes.io/zwave-ctrl
effect: NoSchedule
containers:
- name: zwavejs2mqtt
image: docker.io/zwavejs/zwave-js-ui:9.1.2
image: ghcr.io/zwave-js/zwave-js-ui:9.1.2
ports:
- containerPort: 8091
name: http


@@ -154,8 +154,6 @@ spec:
while sleep 60; do php artisan schedule:run; done
env: *env
envFrom: *envFrom
securityContext:
readOnlyRootFilesystem: true
volumeMounts: *mounts
enableServiceLinks: false
affinity:


@@ -1,170 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: portage
  namespace: jenkins-jobs
  labels:
    app.kubernetes.io/name: portage
    app.kubernetes.io/component: gentoo
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: binpkgs
  namespace: jenkins-jobs
  labels:
    app.kubernetes.io/name: binpkgs
    app.kubernetes.io/component: gentoo
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gentoo-dist
  namespace: jenkins-jobs
  labels:
    app.kubernetes.io/name: gentoo-dist
    app.kubernetes.io/component: gentoo
data:
  rsyncd.conf: |+
    [gentoo-portage]
    path = /var/db/repos/gentoo
    [binpkgs]
    path = /var/cache/binpkgs
---
apiVersion: v1
kind: Service
metadata:
  name: gentoo-dist
  namespace: jenkins-jobs
spec:
  selector:
    app.kubernetes.io/name: gentoo-dist
    app.kubernetes.io/component: gentoo
  ports:
  - name: rsync
    port: 873
    targetPort: rsync
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gentoo-dist
  namespace: jenkins-jobs
  labels: &labels
    app.kubernetes.io/name: gentoo-dist
    app.kubernetes.io/component: gentoo
spec:
  selector:
    matchLabels: *labels
  template:
    metadata:
      labels: *labels
    spec:
      containers:
      - name: rsync
        image: docker.io/gentoo/stage3
        command:
        - /usr/bin/rsync
        - --daemon
        - --no-detach
        - --port=8873
        - --log-file=/dev/stderr
        ports:
        - name: rsync
          containerPort: 8873
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 250
          runAsGroup: 250
        volumeMounts:
        - mountPath: /etc/rsyncd.conf
          name: config
          subPath: rsyncd.conf
        - mountPath: /var/db/repos/gentoo
          name: portage
        - mountPath: /var/cache/binpkgs
          name: binpkgs
      volumes:
      - name: binpkgs
        persistentVolumeClaim:
          claimName: binpkgs
      - name: config
        configMap:
          name: gentoo-dist
      - name: portage
        persistentVolumeClaim:
          claimName: portage
---
apiVersion: batch/v1
kind: Job
metadata:
  name: emerge-webrsync
  namespace: jenkins-jobs
  labels:
    app.kubernetes.io/name: emerge-webrsync
    app.kubernetes.io/component: gentoo
spec:
  template:
    spec:
      containers:
      - name: sync
        image: docker.io/gentoo/stage3
        command:
        - emerge-webrsync
        volumeMounts:
        - mountPath: /var/db/repos/gentoo
          name: portage
      restartPolicy: OnFailure
      volumes:
      - name: portage
        persistentVolumeClaim:
          claimName: portage
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sync-portage
  namespace: jenkins-jobs
  labels:
    app.kubernetes.io/name: sync-portage
    app.kubernetes.io/component: gentoo
spec:
  schedule: 4 19 * * *
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: sync
            image: docker.io/gentoo/stage3
            command:
            - emaint
            - sync
            volumeMounts:
            - mountPath: /var/db/repos/gentoo
              name: portage
          restartPolicy: OnFailure
          volumes:
          - name: portage
            persistentVolumeClaim:
              claimName: portage


@@ -9,8 +9,20 @@ resources:
- jenkins.yaml
- secrets.yaml
- iscsi.yaml
- gentoo-storage.yaml
- ../ssh-host-keys
- ssh-host-keys
- workspace-volume.yaml
- updatecheck.yaml
configMapGenerator:
- name: updatecheck
namespace: jenkins
files:
- config.toml=updatecheck.toml
options:
disableNameSuffixHash: true
labels:
app.kubernetes.io/name: updatecheck
app.kubernetes.io/component: updatecheck
patches:
- patch: |
@@ -22,3 +34,29 @@ patches:
spec:
volumeName: jenkins
storageClassName: ''
- patch: |-
apiVersion: batch/v1
kind: CronJob
metadata:
name: updatecheck
namespace: jenkins
spec:
jobTemplate:
spec:
template:
spec:
nodeSelector:
network.du5t1n.me/storage: 'true'
- patch: |
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: updatecheck
namespace: jenkins
spec:
storageClassName: synology-iscsi
images:
- name: docker.io/jenkins/jenkins
newTag: 2.528.2-lts


@@ -73,3 +73,41 @@ spec:
name: rpm-gpg-key-passphrase
namespace: jenkins
type: Opaque
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: kmod-signing-cert
namespace: jenkins
spec:
encryptedData:
data: AgCWHnNGTMP9xpyjvYHCNZhmhBY5NhA6hGm+VDaEKUTTUsoHFYwNoL8Z6PwKqDMyo3NfZ/QR74+zpuGxidGLKfSWKXAR6vkHeTD5wGmZLetwzmFRPL/AjUJU4bBYBKfbfUHJDBUhgcGu5CKROW1ChNjf+EvaevrI9yTNe3Pgu1/Cqv+jiBkSCH1PMFiwNZIuYC1eFTs+NGUEyCU0bbkBW4wvrCm/RXiw5OAun8rxVfMa79HmyIXzM6gvnZxArcL4oHhmAOo3wzqOqgBgJ1VLVPcppVHzr3bawSRZ7kkcuV+CLBaT2/76/Gmsu6o8bAWIpgKIs8d0+ZE3y3cUjqYpVS9qHcgLlF5HG8nus68HpSRUd2rSDBWeLcKq1WfNcbFWIk0/lniI1zs6UjbSbbwblHy1g/eHYU7FOzi7L1S8tgwwUn0i2o4+yXrw+o4PuLukDGxZid5fTI5/bfsrdcYvf7WGC3WNMs5F35UGTtUmPrVs+n8V0Hcjb4w/SWhdsaMG/FQunO3nIDuC7YUaPYlv2Em4Eetwwuon46weHAPcPkdIJge0+f7BMZGsxz5MRdrxGgsmdPB/JDNs2DvuGyFKiyPjsRNDuS5JDapEiqsvyvzOpmoO3bkpHw7iPHC6ocqA3cBiwYypk4tHPG74UgrchpADI30QFTYj9PAyoEng0Gcy+7tFUva2Jum46NjMFLxyDTm6GRf+PBt3oQKauHcWitEo2fl1qk22tP0DXtG4ylcI1ZM8SpBGZRVeOsg+x+sjbAAcXGyY3XXlhiqjGRiiPiphhYq8qpbsiklIxJ+D/IJyz3wIIuAqukIxI54RnPRpKrxL4FfZacpffnQdXu8tkyHE5ZVxbHGtHE0f/ZN7DuNVgm4xCNyww5bQExArkhtJNYDUf9MlzyvXFAVQQsXcamWOUquJ+Z5fnR61pHFCSJmy4hZ+8aaosu0fgqyebOa/JQDbKs3GeZ/dg3P10Sa/AVj0Ry9IiGkPznv2W+m2Qaepoa5sVsH4/WWE4gns5Js3KewdFdLO8E9IkWJoAOixtRL8dWNW9qLt/SwprwhNeTE4cX/PCnYqAFqZyBWHjsNHOockLGJOfOoADdh9Jm9Ap5BGlWqkfpacO6cWohw2DMMP+ut1zjD5GoEPTX5Y2yota7FR9q6eFlzKysizRA+3Tzvb1AYu1FORdMebgBflpw9efhbShRHBpaRz2GJ6Wh307gF3SCcu9sgyk29zUBNKLR06e5mIHYXVFPVgOboI96gDrYuBSPuPJhs09GLdt0qwMBPbgL0SzPcOvHTmXqr6FpsJj68TffcbjAuKdYEOYRQ1DbOeWfK1c8+NACKBPjSfNCB4EdkRIQhZ2SrY4wtbnUDkdPdHv3zl52aIbj/gmMit+wx7Ej0LrzftWnmF+4v06zL6ZtZx4gLU4FNN7QLKOUzEgE64Y20yRiTKDRPLV0O0UsunJU9QPQvfJkR2PAlLBt1cJdl3vJp2pNYGqyF6UPfryoNkBCvwaoa2cnU7jFKK4/985fS6N6eB9q5cYdusVWIz6O5ENgn9ipF4w9tFk3oJrebkcvssWAsHMEu3G8XvdgLFG14enP/JeKSRrAHoYq3QPOlZq8QyYzEVr0FeHVEGSszc+HwuRzwkOffJdE9iGySsGHOBigonSiWfAB+Dy4/3Tt7lD53uN98QZVXlJRiffvvPIn5F2VMMq9/wB5q5Gy+m2TZiQ/4xRaZAGnm8W2ultZ6AIrMnb6rFgShbElYBJuK25w2EXzY/nR5OIdsFEkJ5bxeHJ/O4942U3WxnyZ8Zcy7KVrtEG6MNBmT/TneA49MwkYlyLpm7xPxMnhRZhx0/izwjtbWZRp+2L7r2i36Ag5sLuT0K8bUc8UCzBiYVSlo2lQ8Z2JmpK7IJQkuDSr0hHJ259r+FL4PVc+cM4kINctWvh6JduhF0wx0ltBuOdlfKq1m6K3vmO3R2N8NybzkmK4IOAqcjUAiQBtICzhVfYmJXCK
vFa21uF37Izbx2ty+KJyR/vr8/qyqcuO8QHheFm0wajuhKdPX+tlhkF6sn1k0CV2vub7Tcl5JHws3GM38resQsIlm+/rr5Bd/hpzUPQFC5KQsyXYl96lI2+p4h0lomOpZFMPHGIAlSOUeOpv3uU+7uqONdNr9KasiICv8WUOi/r4iLMmFb/SxGgQIYIU3hm4C0g6Mwn1Zmi09kJ98bhlTxbIedjw/pyc/TirLDaKsH9/yNRyt5rRYPWKVObhYrZWERHn98uc7/T8hDQqbn1EqIeocdAyFcNsv8UUopyC7LSCQnQ1fZnUVV5+YI3rnZ0gh5K/lwvD1MmnbxEUNFGFudwy3O3i8tSp7G+KL+pZxHLk8pyZNX8949vAyaE5lgrI9DLnJllocjlh5EzdXKaH7yrEQHK4ldU/BR6jLPlpLhM3GJ1tCfHPlqtz7D88xW8/cV44hobw7NBn9jO7B+myY4rygt6nj5wPWMbcQDtPldS41IFcEpd0XZa6kO/tOdBZ3sXmNilJhQhj57ZFqm9bL8oGWqMXjevfdL3HKUpOiza98GPk55kUzMSbhfdCJsYC1o3QN66aLpf1PaES0NENwYX1IycN3aJYqJOUicyg4oPAXXK+DYBnknfj8NCak2zSELimOVTeIlQmCczx4mBgn/5eGxXytXIdjHYfNj/1raCqkaEhC0X1Q21asNrIP3DYFM3KFIvOCZt7TCMeG8q5L4estYCncr1HAgu9CMFrq4vEDW1llPvAyr5dwFR52VnMPffwYEBpRG6xN7QSn8HYBkx4XIBLlKQH0GZtRjT4bpBmeKPH0fai5e36uCy+OLraqB05lUCJwHw+ej3ymWIe8osi+uRMZHCTBo6DNbbOTIcRfScRKJMcNlXu3v5+wZq5UT52TTaqUkHV/eEX2kFYuqbkfdtwizbN/JMomTClbAKpjpYk2zR1Jjj/f/2H8nlLYZprkiLGQcI/4dk/r0MASN3d3jiSPR+Y1FVxnedSh0qhxqDKq6rQG5jSd9muZJaZL23E7qeQ+24QhmAtWQtnp3z+iX79T6IIaDBeM7sTRr0jaJ6k2Id8EZ0QVI80aSoYaYLQO15gXY0DldqbuGy/J+Ugh3GPMVUtJdUm+JnRmxDKTdeK9RPdNtEE3Bcl9ERu46+kIhaK1K2HVQqlHoIKplVxj/Avncv9MeOsIZIpP0jDDlTAd49d8HJ1uQWi0A5HzM2jVgqvvVtWHtjwtUDhd4YdSDUPlkx8xqunkcPakj/QZtEc4JfioriyZWG2v29pRxrxSaiGe9AquCJMtncChRVXWm1A80DvEKoFByFbyS5P6T41oTE6zysxAt1JGBRpimQHE3rRzYyqLXsCQUP9TN+3Y/84zZXZKeOgK4KoHXxTRLZL/CDXDE4lvbbtTzPcQ8UpgGb3tBgp0ui40qBWREo/TU4zelqSNVNROO5VbPakx+aaZ2tuajmxkp8CmyrBzM8wuY2m6AokRp1TN+rsSHk+UuAqSNM9lgVleWuKoEoBF0wT3YlqN/hxc2zWmyb/oCHh5hexiO8fbZUwkbyprIpZ3s5IdxbrtDYEPFzIdmuAQVN/GQ1tu86Cc4g4rHZqJ3w614/NzgRj+0RPAFrp1NC1zZzuc36TfsazzyDPZ1r8rwyJ2HWjO/cemTPrPQYdFY5SrAklBs6Otc89IdbKn5eSVKIyTQbqSP7lYAKEHLQdUoCAVRv7ghVaXCyprNfpr9bpUJ/mGoikNi3m0o83E8R7yp5/nbgG6rgEyza0LQHVF1mEOqwi2LKB6u5F0KjsGhqr4OLegf8GxYX8bPNfdD3XCpG/nqNYA5eJgA68qViJdU15FnR4kzLrWlJfVB8TwMIb1pAq8itgzveVKXUJS/ZS6kkFdH18LMHbw16tHBmTDewtzdbv+vBfXuZQz3Ay8nMAe/jWeAaujlVfTFKJ0L03kXvr59DIxnEODdvAtPhPXRyqfLbN9ffj3Jtn5ermKUnBMNNqeSJV8GFy
IZXkemwX9CkXsrJvV1EreheRWaW6UD7104rJY/EQ1jg6BaCaZ1R34DIO8VHG33GkqvgzK8QJgrvSGuUDxHtOyzm/hwh6odPqqSRFdEMgU2GUNNLmO6xoHh2Q6GnE2PaYjKEw+Spm0iAoSgA349ElDOToGgPyOusMKroHKTbxGVqC8A4iVaNiNnEHm066bJCs1TlMxXQ5ob+gzN+1KJyxzwp47HHVeFQqTIwcsE+4WVUY7116oUHzyzHUxwsuGfS22I/JSrnelcx7vHPNRC5slAoe6aQ929q9f+kknr8EyifD9Fxbw1f+GpCvrujW+AoyQhBcD5F+KdvECRZ9WA7JNebmb5Mq0Qst4a5c4UKyk6OqNhj1UY0FWl7N1yO5jvX37rSbtcX2loTIBYksIJ/dYK2FJjO0Vs6/xBeJALMR+Sh3pVuVf+Ez5WlCvVrUT51g2LO2NVocBoAF1S9aQw7x6ex3tAYatgRC5JUWAOjpC6F7I=
template:
metadata:
name: kmod-signing-cert
namespace: jenkins
annotations:
jenkins.io/credentials-description: Kernel modules signing certificate
labels:
jenkins.io/credentials-type: secretFile
data:
filename: signing_key.pem
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: webhook-trigger
namespace: jenkins
spec:
encryptedData:
text: AgA6vfZFOXBx/5bBP1K3GyOq3XQVIVMQ7j48jgJdwm5c4dEKl1HJgoNFiE2NWiX+ALx4ebmGj5FovCajkJ2YIzqNbI+wZeEoH1gIO0WmQcVlEg0zparSTkll/D+eguoPB6YtSr2T+g30FZFnzWQWMetxydWHX/pqx7e/6L3M+50IYfjQ8Vcx/QGe9KAiMHapEdxAEmoxdrrtsKlGo5Bh4JNoFWYf9ZcvTrYmrUtR+PYfZcxTulP0Cn6q3yNG4DC19WK0OXI1lmuZ36aUYJJrvu+rT+SB6JntMMmCLiBkcbhjxA2fTuNcYoAAXqwZjrFjXRBxuww75Wm/v+BqxGkkaJrZdhIv/maqZggLb1sHE9HyulSQ536R8f0FEt0FXX0hn1VuckWLZvDEOpcude8EEJ7DE+/Qya6fdx7ZLtKCefViTj/R8xDPoicbmgToJaZ1qnH3QFkzPXtHhzFFM4rks+i1Nz14A5kg2APiw1QQsOTp/wp2S2LXnU3gR/LQa1uEpM82KrKQfJ4TATp1oip2wemwm+burgv1DGoRBYNYM6huaS6g5u76/Vo0PTrQAEe8uIus9RnZhZ4Wp5NtLUlGM0B5oL2O4ySeBGjpM5w1UGswqjFBcP6MNQPuJEBsKhbqSUR/E1pgH9pm065XJo8SWpWqiYdNsY+s2fhb9SMiQxFStAF0Mm8lvN3hnXdfGjoyufJMcKtodLzC5sPwJHNJLkod2qIA+OY+c/gHplCPhcvPOg==
template:
metadata:
name: webhook-trigger
namespace: jenkins
annotations:
jenkins.io/credentials-description: Generic Webhook Trigger token
labels:
jenkins.io/credentials-type: secretText


@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: jenkins-jobs
resources:
- ../../ssh-host-keys

jenkins/updatecheck.toml

@@ -0,0 +1,13 @@
[storage]
dir = "/var/lib/updatecheck"

[[watch]]
packages = "kernel"

[watch.on_update]
url = "https://jenkins.pyrocufflink.blue/generic-webhook-trigger/invoke"
coalesce = true

[[watch.on_update.headers]]
name = 'Token'
value_file = '/run/secrets/updatecheck/token'
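The `[[watch.on_update.headers]]` entry tells updatecheck to read a shared token from `value_file` and attach it as a `Token` header when it POSTs to the Jenkins Generic Webhook Trigger URL. A minimal sketch of that request construction in Python (the helper name and use of `urllib` are illustrative, not updatecheck's actual implementation, which is a Rust program):

```python
from pathlib import Path
import urllib.request

def build_trigger_request(url: str, token_file: str) -> urllib.request.Request:
    # Read the shared token from the mounted secret file (the path given
    # by value_file above) and attach it as the 'Token' header that the
    # Generic Webhook Trigger plugin checks.
    token = Path(token_file).read_text().strip()
    return urllib.request.Request(url, method="POST", headers={"Token": token})
```

The secret file itself comes from the `webhook-trigger` SealedSecret, mounted into the CronJob pod at `/run/secrets/updatecheck/token`.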

jenkins/updatecheck.yaml

@@ -0,0 +1,74 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: updatecheck
  namespace: jenkins
  labels:
    app.kubernetes.io/name: updatecheck
    app.kubernetes.io/component: updatecheck
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 300Mi
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: updatecheck
  namespace: jenkins
  labels: &labels
    app.kubernetes.io/name: updatecheck
    app.kubernetes.io/component: updatecheck
spec:
  schedule: >-
    22 */4 * * *
  concurrencyPolicy: Forbid
  jobTemplate:
    metadata:
      labels: *labels
    spec:
      template:
        metadata:
          labels: *labels
        spec:
          restartPolicy: Never
          containers:
          - name: updatecheck
            image: git.pyrocufflink.net/infra/updatecheck
            args:
            - /etc/updatecheck/config.toml
            env:
            - name: RUST_LOG
              value: updatecheck=debug,info
            securityContext:
              readOnlyRootFilesystem: true
            volumeMounts:
            - mountPath: /etc/updatecheck
              name: config
            - mountPath: /run/secrets/updatecheck
              name: secrets
              readOnly: true
            - mountPath: /var/lib/updatecheck
              name: data
          securityContext:
            runAsUser: 21470
            runAsGroup: 21470
            fsGroup: 21470
            runAsNonRoot: true
          volumes:
          - name: config
            configMap:
              name: updatecheck
          - name: data
            persistentVolumeClaim:
              claimName: updatecheck
          - name: secrets
            secret:
              secretName: webhook-trigger
              items:
              - key: text
                path: token
                mode: 0440


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: buildroot
  namespace: jenkins-jobs
  labels:
    app.kubernetes.io/name: buildroot
    app.kubernetes.io/component: jenkins
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: synology-iscsi


@@ -0,0 +1,36 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins.k8s-reboot-coordinator
  labels:
    app.kubernetes.io/name: jenkins.k8s-reboot-coordinator
    app.kubernetes.io/component: k8s-reboot-coordinator
    app.kubernetes.io/part-of: k8s-reboot-coordinator
rules:
- apiGroups:
  - apps
  resources:
  - daemonsets
  resourceNames:
  - k8s-reboot-coordinator
  verbs:
  - get
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins.k8s-reboot-coordinator
  labels:
    app.kubernetes.io/name: jenkins.k8s-reboot-coordinator
    app.kubernetes.io/component: k8s-reboot-coordinator
    app.kubernetes.io/part-of: k8s-reboot-coordinator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins.k8s-reboot-coordinator
subjects:
- kind: ServiceAccount
  name: default
  namespace: jenkins-jobs


@@ -0,0 +1,37 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
labels:
- pairs:
    app.kubernetes.io/instance: k8s-reboot-coordinator
  includeSelectors: true
resources:
- https://git.pyrocufflink.net/dustin/k8s-reboot-coordinator//kubernetes?ref=master
- service.yaml
- jenkins.yaml
images:
- name: k8s-reboot-coordinator
  newName: git.pyrocufflink.net/packages/k8s-reboot-coordinator
  newTag: latest
patches:
- patch: |-
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: k8s-reboot-coordinator
    spec:
      template:
        spec:
          containers:
          - name: k8s-reboot-coordinator
            imagePullPolicy: Always
            env:
            - name: RUST_LOG
              value: k8s_reboot_coordinator=debug,info
          imagePullSecrets:
          - name: imagepull-gitea


@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: &name k8s-reboot-coordinator
  labels: &labels
    app.kubernetes.io/name: *name
    app.kubernetes.io/component: *name
    app.kubernetes.io/part-of: *name
spec:
  selector: *labels
  ports:
  - port: 8000
    targetPort: http
    name: http


@@ -20,6 +20,11 @@ vrrp_track_process rabbitmq {
    weight 90
}
vrrp_track_process hbbs {
    process hbbs
    weight 90
}
vrrp_instance ingress-nginx {
    state BACKUP
    priority 100
@@ -58,3 +63,16 @@ vrrp_instance rabbitmq {
        rabbitmq
    }
}
vrrp_instance hbbs {
    state BACKUP
    priority 100
    interface ${INTERFACE}
    virtual_router_id 54
    virtual_ipaddress {
        172.30.0.150/28
    }
    track_process {
        hbbs
    }
}


@@ -18,7 +18,7 @@ spec:
command:
- sh
- -c
- |
- | # bash
printf '$INTERFACE=%s\n' \
$(ip route | awk '/^default via/{print $5}') \
> /run/keepalived.interface
@@ -28,7 +28,7 @@ spec:
subPath: run
containers:
- name: keepalived
image: git.pyrocufflink.net/containerimages/keepalived:dev
image: git.pyrocufflink.net/containerimages/keepalived
imagePullPolicy: Always
command:
- keepalived


@@ -49,6 +49,8 @@ spec:
mountPath: /kitchen.yaml
subPath: config.yaml
readOnly: true
nodeSelector:
kubernetes.io/arch: amd64
securityContext:
runAsNonRoot: true
runAsUser: 17402


@@ -48,8 +48,9 @@ spec:
            calendar_url: >-
              https://nextcloud.pyrocufflink.net/remote.php/dav/calendars/B53DE34E-D21F-46AA-B0F4-1EC0933AE220/projects_shared_by_332E433E-43B2-4E3D-A0A0-EB264C624707/
          dtex: &dtex
            <<: *credentials
            calendar_url: >-
-              https://outlook.office365.com/owa/calendar/0f775a4f7bba4abe91d2684668b0b04f@dtexsystems.com/5f42742af8ae4f8daaa810e1efca6e9e8531195936760897056/S-1-8-960331003-2552388381-4206165038-1812416686/reachcalendar.ics
+              https://nextcloud.pyrocufflink.net/remote.php/dav/calendars/B53DE34E-D21F-46AA-B0F4-1EC0933AE220/pyrocufflinknet-1/?export
        agenda:
          calendars:
@@ -73,13 +74,13 @@ spec:
        weather:
          metrics:
            temperature: >-
-              homeassistant_sensor_temperature_celsius{entity="sensor.outdoor_temperature"}
+              round(homeassistant_sensor_temperature_celsius{entity="sensor.outdoor_temperature"}, 0.1)
            humidity: >-
-              homeassistant_sensor_humidity_percent{entity="sensor.outdoor_humidity"}
+              round(homeassistant_sensor_humidity_percent{entity="sensor.outdoor_humidity"}, 0.1)
            wind_speed: >-
-              homeassistant_sensor_unit_m_per_s{entity="sensor.wind_speed"}
+              round(homeassistant_sensor_unit_m_per_s{entity="sensor.wind_speed"}, 0.1)
            pool: >-
-              homeassistant_sensor_temperature_celsius{entity="sensor.pool_sensor_temperature"}
+              round(homeassistant_sensor_temperature_celsius{entity="sensor.pool_sensor_temperature"}, 0.1)
        homeassistant:
          url: wss://homeassistant.pyrocufflink.blue/api/websocket


@@ -0,0 +1,42 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-csr-approver
rules:
  - apiGroups:
      - certificates.k8s.io
    resources:
      - certificatesigningrequests
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - certificates.k8s.io
    resources:
      - certificatesigningrequests/approval
    verbs:
      - update
  - apiGroups:
      - certificates.k8s.io
    resourceNames:
      - kubernetes.io/kubelet-serving
    resources:
      - signers
    verbs:
      - approve
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create


@@ -0,0 +1,53 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubelet-csr-approver
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubelet-csr-approver
  template:
    metadata:
      annotations:
        prometheus.io/port: '8080'
        prometheus.io/scrape: 'true'
      labels:
        app: kubelet-csr-approver
    spec:
      serviceAccountName: kubelet-csr-approver
      containers:
        - name: kubelet-csr-approver
          image: postfinance/kubelet-csr-approver:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          args:
            - -metrics-bind-address
            - ":8080"
            - -health-probe-bind-address
            - ":8081"
            - -leader-election
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8081
          env:
            - name: PROVIDER_REGEX
              value: ^[abcdef]\.test\.ch$
            - name: PROVIDER_IP_PREFIXES
              value: "0.0.0.0/0,::/0"
            - name: MAX_EXPIRATION_SEC
              value: "31622400" # 366 days
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Equal


@@ -0,0 +1,42 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
labels:
  - pairs:
      app.kubernetes.io/instance: kubelet-csr-approver
resources:
  - clusterrole.yaml
  - deployment.yaml
  - rolebinding.yaml
  - serviceaccount.yaml
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kubelet-csr-approver
        namespace: kube-system
      spec:
        template:
          spec:
            containers:
              - name: kubelet-csr-approver
                imagePullPolicy: IfNotPresent
                env:
                  - name: PROVIDER_REGEX
                    value: ^(i-[a-z0-9]+\.[a-z0-9-]+\.compute\.internal|k8s-[a-z0-9-]+\.pyrocufflink\.blue|[a-z0-9-]+\.k8s\.pyrocufflink\.black)$
                  - name: PROVIDER_IP_PREFIXES
                    value: 172.30.0.0/16
                  - name: BYPASS_DNS_RESOLUTION
                    value: 'true'
replicas:
  - name: kubelet-csr-approver
    count: 1
images:
  - name: postfinance/kubelet-csr-approver
    newName: ghcr.io/postfinance/kubelet-csr-approver
    newTag: v1.2.10
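With this overlay, the approver only auto-approves node serving certificates whose requesting node name matches `PROVIDER_REGEX` and whose addresses fall within `PROVIDER_IP_PREFIXES`. For reference, the kind of object it evaluates is a `kubernetes.io/kubelet-serving` CertificateSigningRequest roughly like this sketch (the CSR name and node name are hypothetical, and the `request` field is an elided base64-encoded PKCS#10 blob):

```yaml
# Sketch of a kubelet serving-certificate CSR this deployment would evaluate.
# Name and node name are hypothetical; the request payload is elided.
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: csr-example
spec:
  signerName: kubernetes.io/kubelet-serving
  username: system:node:k8s-worker1.pyrocufflink.blue  # must match PROVIDER_REGEX
  groups:
    - system:nodes
  usages:
    - digital signature
    - server auth
  request: <base64-encoded PKCS#10 CSR, elided>
```

Without an approver, such CSRs stay Pending until someone runs `kubectl certificate approve` by hand; the ClusterRole above restricts the automated approval to the `kubernetes.io/kubelet-serving` signer.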


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-csr-approver
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-csr-approver
subjects:
  - kind: ServiceAccount
    name: kubelet-csr-approver
    namespace: kube-system


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubelet-csr-approver
  namespace: kube-system


@@ -0,0 +1,20 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: music-assistant
  labels:
    app.kubernetes.io/name: music-assistant
    app.kubernetes.io/component: music-assistant
spec:
  ingressClassName: nginx
  rules:
    - host: music.pyrocufflink.blue
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: music-assistant
                port:
                  name: http


@@ -0,0 +1,21 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: music-assistant
labels:
  - pairs:
      app.kubernetes.io/instance: music-assistant
    includeSelectors: true
  - pairs:
      app.kubernetes.io/part-of: music-assistant
    includeTemplates: true
resources:
  - namespace.yaml
  - music-assistant.yaml
  - ingress.yaml
images:
  - name: ghcr.io/music-assistant/server
    newTag: 2.6.3


@@ -0,0 +1,78 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: music-assistant
  labels: &labels
    app.kubernetes.io/name: music-assistant
    app.kubernetes.io/component: music-assistant
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: music-assistant
  labels: &labels
    app.kubernetes.io/name: music-assistant
    app.kubernetes.io/component: music-assistant
spec:
  ports:
    - port: 8095
      name: http
  selector: *labels
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: music-assistant
  labels: &labels
    app.kubernetes.io/name: music-assistant
    app.kubernetes.io/component: music-assistant
spec:
  serviceName: music-assistant
  selector:
    matchLabels: *labels
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: music-assistant
          image: ghcr.io/music-assistant/server
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8095
              name: http
          readinessProbe: &probe
            httpGet:
              port: http
              path: /
            failureThreshold: 3
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 1
          startupProbe:
            <<: *probe
            failureThreshold: 90
            periodSeconds: 1
          volumeMounts:
            - mountPath: /data
              name: music-assistant-data
              subPath: data
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 8095
        runAsGroup: 8095
        fsGroup: 8095
      volumes:
        - name: music-assistant-data
          persistentVolumeClaim:
            claimName: music-assistant


@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: music-assistant
  labels:
    app.kubernetes.io/name: music-assistant


@@ -20,4 +20,4 @@ configMapGenerator:
images:
  - name: docker.io/binwiederhier/ntfy
-    newTag: v2.11.0
+    newTag: v2.15.0


@@ -54,6 +54,7 @@ spec:
      containers:
        - name: ntfy
          image: docker.io/binwiederhier/ntfy:v2.5.0
+          imagePullPolicy: IfNotPresent
          args:
            - serve
          ports:


@@ -45,8 +45,8 @@ patches:
images:
  - name: ghcr.io/paperless-ngx/paperless-ngx
-    newTag: 2.14.7
+    newTag: 2.20.0
  - name: docker.io/gotenberg/gotenberg
-    newTag: 8.17.1
+    newTag: 8.25.0
  - name: docker.io/apache/tika
-    newTag: 3.1.0.0
+    newTag: 3.2.3.0


@@ -80,6 +80,8 @@ spec:
              value: '1'
            - name: PAPERLESS_ENABLE_FLOWER
              value: 'true'
+            - name: PAPERLESS_OCR_USER_ARGS
+              value: '{"continue_on_soft_render_error": true}'
          ports:
            - name: http
              containerPort: 8000
@@ -124,7 +126,7 @@ spec:
            - name: tmp
              mountPath: /tmp
            - name: run
-              mountPath: /run/supervisord
+              mountPath: /run
            - name: logs
              mountPath: /var/log/supervisord
              subPath: supervisord

policy/README.md Normal file

@@ -0,0 +1,30 @@
# Cluster Policies

## Validating Admission Policy

To enable (prior to Kubernetes v1.30):

1. Add the following to `apiServer.extraArgs` in the `ClusterConfiguration` key
   of the `kubeadm-config` ConfigMap:

   ```yaml
   feature-gates: ValidatingAdmissionPolicy=true
   runtime-config: admissionregistration.k8s.io/v1beta1=true
   ```

2. Redeploy the API servers using `kubeadm`:

   ```sh
   doas kubeadm upgrade apply v1.29.15 --yes
   ```
### disallow-hostnetwork

This policy prevents pods from running in the host's network namespace. This is
especially important because most nodes are connected to the storage network
VLAN, so allowing pods to use the host network namespace would give them access
to the iSCSI LUNs and NFS shares on the NAS.

If a trusted pod needs to run in the host's network namespace, its Kubernetes
namespace can be listed in the exclusion list of the
`disallow-hostnetwork-binding` policy binding resource.

@@ -0,0 +1,43 @@
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicy
metadata:
  name: disallow-hostnetwork
spec:
  matchConstraints:
    resourceRules:
      - apiGroups:
          - ''
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - pods
  validations:
    - expression: >-
        !has(object.spec.hostNetwork) || !object.spec.hostNetwork
      message: >-
        Pods must not use hostNetwork: true
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: disallow-hostnetwork-binding
spec:
  policyName: disallow-hostnetwork
  validationActions:
    - Deny
  matchResources:
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values:
            - calico-system
            - democratic-csi
            - keepalived
            - kube-system
            - music-assistant
            - tigera-operator


@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - disallow-hostnetwork.yaml

receipts/.gitignore vendored Normal file

@@ -0,0 +1 @@
firefly.token

receipts/config.toml Normal file

@@ -0,0 +1,12 @@
[default.firefly]
url = "https://firefly.pyrocufflink.blue"
token = "/run/secrets/receipts/secrets/firefly.token"
search_query = "tag:Review has_attachments:false type:withdrawal has_any_bill:false"
default_account = "Amazon Rewards Visa (Chase)"

[default.databases.receipts]
url = "postgresql://receipts@postgresql.pyrocufflink.blue/receipts?sslmode=verify-full&sslrootcert=/run/dch-ca/dch-root-ca.crt&sslcert=/run/secrets/receipts/postgresql/tls.crt&sslkey=/run/secrets/receipts/postgresql/tls.key"

[default.limits]
file = "4MiB"
data-form = "4MiB"

receipts/jenkins.yaml Normal file

@@ -0,0 +1,28 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins
rules:
  - apiGroups:
      - apps
    resources:
      - deployments
    resourceNames:
      - receipts
    verbs:
      - get
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: default
    namespace: jenkins-jobs


@@ -0,0 +1,66 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
transformers:
  - |
    apiVersion: builtin
    kind: NamespaceTransformer
    metadata:
      name: namespace-transformer
      namespace: receipts
    setRoleBindingSubjects: none
    fieldSpecs:
      - path: metadata/namespace
        create: true
labels:
  - pairs:
      app.kubernetes.io/instance: receipts
    includeSelectors: true
  - pairs:
      app.kubernetes.io/part-of: receipts
    includeTemplates: true
resources:
  - namespace.yaml
  - secrets.yaml
  - receipts.yaml
  - postgres-cert.yaml
  - ../dch-root-ca
  - jenkins.yaml
configMapGenerator:
  - name: receipts-config
    files:
      - config.toml
    options:
      labels:
        app.kubernetes.io/name: receipts
        app.kubernetes.io/component: receipts
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: receipts
      spec:
        template:
          spec:
            containers:
              - name: receipts
                volumeMounts:
                  - mountPath: /run/dch-ca
                    name: dch-root-ca
                    readOnly: true
                  - mountPath: /run/secrets/receipts/postgresql
                    name: postgresql-cert
                    readOnly: true
            volumes:
              - name: dch-root-ca
                configMap:
                  name: dch-root-ca
              - name: postgresql-cert
                secret:
                  secretName: postgres-client-cert
                  defaultMode: 0640

receipts/namespace.yaml Normal file

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Namespace
metadata:
  name: receipts
  labels:
    app.kubernetes.io/name: receipts
    app.kubernetes.io/component: receipts


@@ -0,0 +1,12 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: postgres-client-cert
spec:
  commonName: receipts
  privateKey:
    algorithm: ECDSA
  secretName: postgres-client-cert
  issuerRef:
    name: postgresql-ca
    kind: ClusterIssuer

receipts/receipts.yaml Normal file

@@ -0,0 +1,97 @@
apiVersion: v1
kind: Service
metadata:
  name: receipts
  labels: &labels
    app.kubernetes.io/name: receipts
    app.kubernetes.io/component: receipts
spec:
  ports:
    - name: http
      port: 8000
  selector: *labels
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: receipts
  labels: &labels
    app.kubernetes.io/name: receipts
    app.kubernetes.io/component: receipts
spec:
  selector:
    matchLabels: *labels
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: receipts
          image: git.pyrocufflink.net/packages/receipts
          imagePullPolicy: Always
          env:
            - name: RUST_LOG
              value: info,rocket=warn,receipts=debug
            - name: ROCKET_ADDRESS
              value: 0.0.0.0
          ports:
            - name: http
              containerPort: 8000
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /etc/receipts
              name: config
              readOnly: true
            - mountPath: /run/secrets/receipts/secrets
              name: secrets
              readOnly: true
            - mountPath: /tmp
              name: tmp
              subPath: tmp
      imagePullSecrets:
        - name: imagepull-gitea
      securityContext:
        runAsNonRoot: true
        runAsUser: 943
        runAsGroup: 943
        fsGroup: 943
      volumes:
        - name: config
          configMap:
            name: receipts-config
        - name: secrets
          secret:
            secretName: receipts
        - name: tmp
          emptyDir:
            medium: Memory
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/name: receipts
    app.kubernetes.io/component: receipts
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
  name: receipts
spec:
  tls:
    - hosts:
        - receipts.pyrocufflink.blue
  rules:
    - host: receipts.pyrocufflink.blue
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: receipts
                port:
                  name: http

receipts/secrets.yaml Normal file

@@ -0,0 +1,35 @@
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: imagepull-gitea
  namespace: receipts
  labels: &labels
    app.kubernetes.io/name: receipts
    app.kubernetes.io/component: receipts
spec:
  encryptedData:
    .dockerconfigjson: AgCdye4FPceefzsWWdwX7BLLIkpCbJypTY/VMBHNZX4uNDjJiYICGtPFAbNceOnnBfKyQcXv47kfXgVWOzKl+OYv5ee9I3rsEpwXhU6zdkvRP2spZp/lXkDTrEitap3jcap3gGcK4j19ikXM42DfTCguSGkX5OM7jR7jg4xAQyB7M0FvZKkEnp9MwASp0+It3g4CxhQfQlrYOkbvuq7wY7qkpqHqoDVKOcKtmKM69HX6IMU5/gDFB3WZLdOkFxAhSQ6cEKJyqfyMx//nZlFw2jTFbpsiOBofQiqZ5dKFkz95OW22A6dcdxCoK1Xwmb2XvlD15wZ1ttaeh1GhpUWfqyKP9fePm+YAS4AvnPP0RurwpAKHh7C/EHKurwCt3o0UhfcQHDwhaIitA5c8lHEmDLPj76YGtjKreIH4cCEz3os6FyEg86pvfFHq4gjUKEV29qSuAEYYvfwAa7IRMjU5vjiD16EJ7/VaiKauKrA04tx53bq8Oq6oTZkOwO63ZU0kr82EJksPZ9jymHS7aq/cAnaXyZ2RamuT8HHGB/GZU6rXX/THaYww6Tii6al72EmGZ4OoY/Av+VXZBkxX1S762wbuA9KMwOG8raTPwXUVAm53Hl4E5piBAFMcGsboVdWNcKqr/yXWKeJfohlqKFr39g0aobekSB81ORAJEGHuSxE8tUdhfZYhbc5yemTzhuCu6iJFZj8yFPv6UwJV+OzSNQEuTZokyBNRPCteXh0xy2VxHZmp+oxakpM02oKPvS10z7yBIR0BgU9KddmqXozENekQP0v445i8BVVARpqoGFWBy3bbv4Z3suEJ8LIvb96vsq+bh0ia+DaslsnbXjiZ9XseGUrzYmWKZOBIFitpo181LJtWHRSU/GAm58GOUoVWCW66ldI79lZ4Z7xH+UJWGQIwbHQ+iky6Ooebsc42mdm3ToK4bi1Zkg4VdIxDAhFiPubOEmkacyoCKobqs+aeni6UB9lLjieClGWHNXdS7gQs4NPBE0dq2B0Sr2pBboA=
  template:
    metadata:
      name: imagepull-gitea
      namespace: receipts
      labels: *labels
    type: kubernetes.io/dockerconfigjson
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: receipts
  namespace: receipts
  labels: &labels
    app.kubernetes.io/name: receipts
    app.kubernetes.io/component: receipts
spec:
  encryptedData:
    firefly.token: AgBBu2w5ddlqY2b/Si6nLowW/3cTIt8fBZi97aMUIY6BLKHWgDxdOIWKJTlaG5GKNRJNDwTxcn5Tld6rBVfxkkjf2eUNuq4bfclrSOp1MysTH0zwN9ctA0Pi+u9id2lo44gEUuzrrm658aqJG4ZoX3Mw2FmBD9V1WzDQC/pa5fQrfyoMrdNBMpmtk0lf+fzNa/1QJxtoim35ekMy1+Fy1qycy1XsW5s8Z02vLF9o0Tv2GGQTK/VwJoJqEzTgIuGDlaipOji65YN7L9OkBeAK8ZcbPgjfjae7UNS8rXQKW1Q/UOta4z3/EYB3yLxC8y4osRt/0k0m+ApW8nxdZLWVFBLZFUbSvOV4M7r+2/PvqIjJww6wUDwtkAR89Orz2ceJjKCKgJxCHjGUabaAwM2wRmBm6d2BZOfuUxEhXMAUvEL5aFIaXAkePhdFDo3iX1tJXStAk9Iqx/cXT9l3CArsTrnit+NLwNGuqDq2T2I5VZ9Qh8LsO6BbOHm+qhycnl8/FCQt0AF7RYE4r6/OehdjPivNzRNDqh2P0cllw4mB06GCwK84mmfW7pJvbYLlpdtr2AMoYZGoQ23uTeXSOKWzdMT7sY/IUT5nAY4WPTkiy8OYxoR4/fw90d3UysmjunFr9SwJM/pzaKfmsO7IatV5Lnayecrilku0iFK0zKhkmfuEaK3CeLIAwxofWwD1iSqXtRvnhHG7KBMuQo0UyW9DGXqVVNBhDQ5393+8HRhsw6qQsZbd43cIJmCYD957K4rz7BsW6xHyTl7MtG237ljNS0V/fKIb99VvMDKCjAD6D7Bbn+swNglVGGOK+HwGNiQQ7A9sQE/tGMoNngj0Z4ASB4HDhkKc4BguRLsmALhn6X+mUxgNt/yQO/tIctl5KvhKhDfxpmwo4ZLZ/QWZoVHKLY651Ni9CLt0ozI3/B9OxYvewXXXFTIZYJJU91d46WaxdqwQm5OesUA7wAFymZ7CCqUHEaoP+hAkYMu77NyuiOZC9dL7HEXPHIRGvUirD0J8TdTLpkCRHvsbjgc9UUqVImlKpQ1G1PDcnuyClZyzh9itw+rUqeKXfeupclH0MK6TjvX8aRMVvDqRKeZvklsezxZPfwpUsXC+TN9745YLporVENvmk2XlHJbcyihYldVHFSOczcznxLYibSyCPN5cRue7ENE9aYjLZI3FddV8XYOGJ5mOo50n6H0iI0fkEzCX9VMYqMk+XwGJatzA1JHFL4VP8apSG3Y5boplLW2T2aQgVgRw7bsyCnq4UoFKrLuO9ZK4K6kGZj0qHnWrft7JmItZHOj9oBsHgjG1mQHQsxR7+UDHQ5Nr4eb0TAVpUsos1pcpzOVEmvnDh6pQ5bo4mA2Z/qGn/BWVcz9CsR1nKZOO1E+HNnFeYD9xKucBCm3mlrtr8QoKmrqBNiKN0Oz3wOqPtQTY6SZzKhSXkGmc2Lr2w8cIEtw8N+T3vaAdyUWhpkh/ZILW3YE9jMNr1cukbiiW4++9iU+R9heJNsR2nVdAJJoZyFeWjZQbfP8wq1P+i5W06hg8l7IEbvkOZX9DfpP5K4WV+uwkhZx6LpGhY957WgZOlvtwxwqC35KLspZStTnnCmfw130mwMx0paXXIQNWMVd2ob12e5Uzcg8gzy0LBgvVehk9ZUttxPdtZcjp5h+oiKLp+ruC1dOfB9PIy0rUp4d4EbeMO2h5c5hyXzcbZpclxOrN9JhGf3HnnP/XcMlJ8mIt319jdfIsOC+2OCEkgtywEupSTMeSdBm9p1Sr6OhOpY6T+Iv3ni9nhMfng83e2lGhQIckecMQ5xm7RJfD+5p0kmD3YdqecALePSfFLspXxkHz0CExvMpvqbu6Gmmz2U2UzooM+sTdlGGbqwSRu6ZhuVncjxIa3WlsNzm7I50EpsEwzprFBPDin0eqFJuEE9Gz224ZlbA3ulo/ITXYKDBe5Rlq2HzhS59J/KjZqw1mt8a+lrDNKygxLZtD0qksk1ngeV4m+DITU6iyo8MWmTNz9deD3w==
  template:
    metadata:
      name: receipts
      namespace: receipts
      labels: *labels

restic/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
credentials
password

restic/kustomization.yaml Normal file

@@ -0,0 +1,54 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: restic
labels:
  - pairs:
      app.kubernetes.io/instance: restic
    includeSelectors: true
  - pairs:
      app.kubernetes.io/part-of: restic
    includeTemplates: true
resources:
  - namespace.yaml
  - network-policy.yaml
  - restic-prune.yaml
  - secrets.yaml
  - ../dch-root-ca
configMapGenerator:
  - name: restic-env
    envs:
      - restic.env
patches:
  - patch: |-
      apiVersion: batch/v1
      kind: CronJob
      metadata:
        name: restic-prune
      spec:
        jobTemplate:
          spec:
            template:
              spec:
                containers:
                  - name: restic-prune
                    imagePullPolicy: IfNotPresent
                    env:
                      - name: RESTIC_CACERT
                        value: /run/dch-ca/dch-root-ca.crt
                    volumeMounts:
                      - mountPath: /run/dch-ca
                        name: dch-ca
                        readOnly: true
                volumes:
                  - name: dch-ca
                    configMap:
                      name: dch-root-ca
images:
  - name: ghcr.io/restic/restic
    newTag: 0.18.0

restic/namespace.yaml Normal file

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: restic
  labels:
    app.kubernetes.io/name: restic


@@ -0,0 +1,24 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restic
  labels:
    app.kubernetes.io/name: restic
    app.kubernetes.io/component: restic
spec:
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
    - to:
        - ipBlock:
            cidr: 172.30.0.15/32
      ports:
        - port: 443
  podSelector: {}
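One subtlety worth noting, based on how NetworkPolicy defaulting works rather than anything stated in this repo: when `policyTypes` is omitted, it defaults to `Ingress`, plus `Egress` whenever egress rules are present. This policy therefore also isolates ingress for every pod in the namespace (no ingress rules means all inbound traffic is denied), which is probably harmless for a prune CronJob that accepts no connections. If that is the intent, spelling it out makes it obvious; a sketch of the explicit form:

```yaml
# Sketch: the same policy with the implicit policyTypes made explicit.
spec:
  policyTypes:
    - Ingress   # no ingress rules listed, so all inbound traffic is denied
    - Egress    # only DNS to kube-system and 172.30.0.15:443 are allowed out
```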

Some files were not shown because too many files have changed in this diff.