Zigbee2MQTT needs to be able to read from and write to the serial device for
the ConBee II USB controller. I'm not exactly sure what changed, or how
it was able to access it before the recent update.
The _dialout_ group has GID 18 on Fedora.
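Assuming Zigbee2MQTT runs in a pod on one of the Fedora nodes, one way to grant that access is to run the container with the _dialout_ GID as a supplemental group; a minimal sketch of the relevant part of the pod spec (the container name is a placeholder):

```yaml
# Hypothetical excerpt from the Zigbee2MQTT pod spec: adding GID 18
# (dialout on Fedora) as a supplemental group lets the container read
# and write the ConBee II serial device.
spec:
  securityContext:
    supplementalGroups:
      - 18
  containers:
    - name: zigbee2mqtt
      # ... image, device mount, etc.
```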
The Raspberry Pi in the kitchen now has Firefox installed so we can use
it to control Home Assistant. Listing its IP address as a trusted
network and assigning it a trusted user lets it access the Home
Assistant UI without anyone having to type a password. This is
particularly important since there's no keyboard (not even an on-screen
virtual one).
Moving the `trusted_networks` auth provider _before_ the `homeassistant`
provider changes the login screen to show a "log in as ..." dialog by
default on trusted devices. It does not affect other devices at all,
but it does make the initial login a bit easier on kiosks.
Home Assistant supports unauthenticated access for certain clients using
its _trusted_networks_ auth provider. With this configuration, we allow
the desk panel to automatically sign in as the _kiosk_ user, but all
other clients must authenticate normally.
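Roughly, the relevant `auth_providers` configuration might look like this (the address and user ID below are placeholders):

```yaml
# configuration.yaml sketch; the kiosk address and user ID are made up
homeassistant:
  auth_providers:
    # Listed first so trusted devices get the "log in as ..." prompt
    - type: trusted_networks
      trusted_networks:
        - 172.30.0.43/32          # the kiosk device
      trusted_users:
        172.30.0.43: 0123456789abcdef0123456789abcdef  # "kiosk" user ID
    # Everyone else still authenticates with username and password
    - type: homeassistant
```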
Clients outside the cluster can now communicate with Mosquitto directly
on port 8883 by using its dedicated external IP address. This address
is automatically assigned to the node where Mosquitto is running by
`keepalived`.
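One way to wire that up is a Service that lists the keepalived-managed VIP as an external IP; a sketch, with a placeholder address:

```yaml
# Sketch of the external-facing Service; the address is a placeholder
# for the VIP that keepalived moves to whichever node runs Mosquitto.
apiVersion: v1
kind: Service
metadata:
  name: mosquitto-external
spec:
  selector:
    app.kubernetes.io/name: mosquitto
  ports:
    - name: mqtts
      port: 8883
      targetPort: 8883
  externalIPs:
    - 172.30.0.50
```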
This template sensor will be migrated to a helper, since Home Assistant
removed the `forecast` attribute of weather sensors and now requires
calling an action (service) to get those data.
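For reference, the action-based replacement looks roughly like this trigger-based template sensor (the weather entity and update interval are assumptions):

```yaml
# Sketch: fetch the daily forecast once an hour and expose part of it
# as a sensor; weather.home is a placeholder entity.
template:
  - trigger:
      - platform: time_pattern
        hours: /1
    action:
      - service: weather.get_forecasts
        target:
          entity_id: weather.home
        data:
          type: daily
        response_variable: forecast
    sensor:
      - name: "Forecast High Temperature"
        unit_of_measurement: "°F"
        state: >-
          {{ forecast['weather.home'].forecast[0].temperature }}
```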
There's obviously a bug or something in `mqttmarionette` because it
occasionally gets "stuck" in a state where it is running but does
not reconnect to the MQTT broker. In such situations, it has to be
restarted (and even then, it usually does not shut down cleanly and has
to be killed with SIGKILL). I have been doing this manually, but
with this shell script and a corresponding "shell command" integration
in Home Assistant, it can be done automatically. This is similar to
how Home Assistant restarts Mopidy on the living room stereo when it
gets into the same kind of state.
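The Home Assistant side is just a `shell_command` entry that runs the script; a sketch, assuming it is invoked over SSH (host, key path, and script name are placeholders):

```yaml
shell_command:
  # Force-restart mqttmarionette on the kiosk; the script escalates to
  # SIGKILL if the process does not stop cleanly
  restart_mqttmarionette: >-
    ssh -i /config/.ssh/id_ed25519 -o StrictHostKeyChecking=accept-new
    homeassistant@kiosk.example.net ./restart-mqttmarionette.sh
```

An automation can then call `shell_command.restart_mqttmarionette` whenever the device stops responding over MQTT.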
Zigbee2MQTT commits the cardinal sin of storing state in its
configuration file. This means the file has to be writable and thus
stored in persistent storage rather than in a ConfigMap. As a
consequence, making changes to the configuration when the application is
not running is rather difficult. Case in point: when I added the
internal alias for _mqtt.pyrocufflink.blue_ pointing to the in-cluster
service, Zigbee2MQTT became unable to connect to the broker because it
was using the node port instead of the internal port. Since it could
not connect to the broker, it refused to start, and thus the container
would not stay running long enough to fix the configuration to point
to the correct port.
Fortunately, Zigbee2MQTT also allows configuring settings via
environment variables, which can be managed with a ConfigMap. Values
read from the environment override those from the configuration file,
so pointing at the correct broker port with an environment variable was
sufficient to allow the application to start.
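Zigbee2MQTT derives the variable name from the configuration path, so overriding `mqtt.server` amounts to one entry in a ConfigMap; a sketch, with a placeholder port:

```yaml
# ConfigMap referenced from the Deployment via envFrom; the broker URL
# shown here is an example.
apiVersion: v1
kind: ConfigMap
metadata:
  name: zigbee2mqtt-env
data:
  ZIGBEE2MQTT_CONFIG_MQTT_SERVER: mqtt://mqtt.pyrocufflink.blue:1883
```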
Home Assistant uses PostgreSQL for recording the history of entity
states. Since we had been using the in-cluster database server for
this, the data were migrated to the new external PostgreSQL server
automatically when the backup from the former was restored on the
latter. It follows, then, that we can point Home Assistant to the
new server as well.
Home Assistant uses SQLAlchemy, which in turn uses _libpq_ via
_psycopg_, as a client for PostgreSQL. It doesn't expose any
configuration parameters beyond the "database URL" directly, but we
can use the standard environment variables to specify the certificate
and private key for authentication. In fact, the empty `postgresql://`
URL is sufficient, and indicates that _all_ of the connection parameters
should be taken from environment variables. As a result, the
`wait-for-db` init container and the main container can use exactly the
same environment variables, so we can share their definitions with YAML
anchors.
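Concretely, the container spec might look something like this (hostnames and paths are placeholders); libpq picks up the standard `PG*` variables in both containers:

```yaml
# Deployment excerpt (sketch): the init container and the main
# container share one set of libpq environment variables via an anchor.
initContainers:
  - name: wait-for-db
    image: docker.io/library/postgres:16      # placeholder image
    command: [sh, -c, 'until pg_isready; do sleep 1; done']
    env: &pgenv
      - name: PGHOST
        value: postgres.pyrocufflink.blue     # placeholder hostname
      - name: PGDATABASE
        value: homeassistant
      - name: PGUSER
        value: homeassistant
      - name: PGSSLMODE
        value: verify-full
      - name: PGSSLCERT
        value: /run/postgresql-tls/tls.crt
      - name: PGSSLKEY
        value: /run/postgresql-tls/tls.key
      - name: PGSSLROOTCERT
        value: /run/postgresql-tls/ca.crt
containers:
  - name: home-assistant
    image: ghcr.io/home-assistant/home-assistant:stable
    env: *pgenv
```

With `db_url: postgresql://` in the recorder configuration, psycopg/libpq fills in everything else from these variables.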
I've created a _Pool Time_ calendar in Nextcloud that we can use to
mark when people are expected to be in the pool. Using this, we can
configure the "someone is in the pool" alert not to fire during times
when we know people will be in the pool. This will make it much less
annoying on HLC pool days.
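In the alert automation, this amounts to a condition on the calendar entity; a sketch, assuming the calendar appears as `calendar.pool_time`:

```yaml
condition:
  # Skip the "someone is in the pool" notification while a Pool Time
  # event is active; the entity ID is an assumption
  - condition: state
    entity_id: calendar.pool_time
    state: "off"
```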
The digital photo frame in the kitchen is driven by a server-side
service that exposes a minimal HTTP API. Using this API, we can, for
example, advance to the next photo or go back to the previous one.
Exposing `rest_command` services
for these operations allows us to add buttons to dashboards to control
the frame.
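A sketch of those `rest_command` entries, with the host, port, and endpoint paths as assumptions about the frame's API:

```yaml
rest_command:
  photo_frame_next:
    url: http://photoframe.example.net:8080/next      # placeholder URL
    method: post
  photo_frame_previous:
    url: http://photoframe.example.net:8080/previous
    method: post
```

Dashboard buttons can then call `rest_command.photo_frame_next` and `rest_command.photo_frame_previous` directly.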