draft: projects
@@ -1 +1,2 @@
public/
static/processed_images
@@ -0,0 +1,8 @@
+++
title = "Projects"
sort_by = "title"
template = "projects.html"
page_template = "project-page.html"
+++

Tinkering is fun, especially when there are tangible results!
@@ -0,0 +1,28 @@
+++
title = "Basement HUD"
page_template = "project-page.html"
description = "Wall-mounted dual-monitor heads-up display powered by a network-booted Raspberry Pi CM 4"

[extra]
image = "projects/basementhud/hud-photo01.jpg"
+++


{{ resize_image(
path="projects/basementhud/hud-photo01.jpg",
width=400,
height=0,
op="fit_width",
alt="A photo of two monitors mounted on the wall",
title="The HUD on the Wall!",
link="hud-photo01.jpg",
style="float: right; padding-left: 1em"
) }} There are several things I want to keep an eye on throughout the day. I
have a couple of Grafana dashboards that I like to have open all the time, but
that just seems like a waste of screen real estate!

Since they basically give away 1080p monitors at Microcenter, I decided it would
be fun and interesting to hang a couple of them on the wall. That way, I can
see my dashboards all the time, without taking away one of my desktop monitors.

<div style="clear: both"></div>
@ -0,0 +1,3 @@
|
|||
+++
|
||||
title = "Hardware"
|
||||
+++
|
After Width: | Height: | Size: 4.6 MiB |
After Width: | Height: | Size: 33 KiB |
@@ -0,0 +1,164 @@
+++
title = "Dynamic Cloud Worker Nodes for On-Premises Kubernetes"
description = """\
Automatically launch EC2 instances as worker nodes in an on-premises Kubernetes
cluster when they are needed, and remove them when they are not
"""

[extra]
image = "projects/dynk8s/cloudcontainer.jpg"
+++

One of the first things I wanted to do with my Kubernetes cluster at home was
start using it for Jenkins jobs. With the [Kubernetes][0] plugin, Jenkins can
create ephemeral Kubernetes pods to use as worker nodes to execute builds.
Migrating all of my jobs to use this mechanism would allow me to get rid of the
static agents running on VMs and Raspberry Pis.

Getting the plugin installed and configured was relatively straightforward, and
defining pod templates for CI pipelines was simple enough. It did not take
long to migrate the majority of the jobs that can run on x86_64 machines. The
aarch64 jobs, though, needed some more attention.

It's no secret that Raspberry Pis are *slow*. They are fine for very light
use, or for dedicated single-application purposes, but trying to compile code,
especially Rust, on one is a nightmare. So, while I was redoing my Jenkins
jobs, I took the opportunity to try to find a better, faster solution.

Jenkins has an [Amazon EC2][1] plugin, which dynamically launches EC2 instances
to execute builds and terminates them when they are no longer needed. We use
this plugin at work, and it is a decent solution. I could configure Jenkins to
launch Graviton instances to build aarch64 code. Unfortunately, I would either
need to pre-create AMIs with all of the necessary build dependencies and run
the jobs directly on the worker nodes, or use the [Docker Pipeline][2] plugin
to run them in Docker containers. What I really wanted, though, was to be able
to use Kubernetes for all of the jobs, so I set out to find a way to
dynamically add cloud machines to my local Kubernetes cluster.

The [Cluster Autoscaler][3] is a component for Kubernetes that integrates with
cloud providers to automatically launch and terminate instances in response to
demand in the Kubernetes cluster. That is all it does, though; it does not
integrate with the Kubernetes API to perform TLS bootstrapping or register the
node in the cluster. The [Autoscaler FAQ][4] hints at how to handle this
limitation:

> Example: If you use `kubeadm` to provision your cluster, it is up to you to
> automatically execute `kubeadm join` at boot time via some script.

With that in mind, I set out to build a solution that uses the Cluster
Autoscaler, WireGuard, and `kubeadm` to automatically provision nodes in the
cloud to run Jenkins jobs on pods created by the Jenkins Kubernetes plugin.

[0]: https://plugins.jenkins.io/kubernetes
[1]: https://plugins.jenkins.io/ec2
[2]: https://plugins.jenkins.io/docker-workflow
[3]: https://github.com/kubernetes/autoscaler
[4]: https://github.com/kubernetes/autoscaler/blob/de560600991a5039fd9157b0eeeb39ec59247779/cluster-autoscaler/FAQ.md#how-does-scale-up-work


## Process

<div style="text-align: center;">

[](sequence.svg)

</div>


1. When Jenkins starts running a job that is configured to run in a Kubernetes
   Pod, it uses the job's pod template to create the Pod resource. It also
   creates a worker node and waits for the JNLP agent in the pod to attach
   itself to that node.
2. Kubernetes attempts to schedule the pod Jenkins created. If there is not a
   node available, the scheduling fails.
3. The Cluster Autoscaler detects that scheduling the pod failed. It checks
   the requirements for the pod, matches them to an EC2 Autoscaling Group, and
   determines that scheduling would succeed if it increased the capacity of the
   group.
4. The Cluster Autoscaler increases the desired capacity of the EC2 Autoscaling
   Group, launching a new EC2 instance.
5. Amazon EventBridge sends a notification, via Amazon Simple Notification
   Service, to the provisioning service, indicating that a new EC2 instance has
   started.
6. The provisioning service generates a `kubeadm` bootstrap token for the new
   instance and stores it as a Secret resource in Kubernetes.
7. The provisioning service looks for an available Secret resource in
   Kubernetes containing WireGuard configuration and marks it as assigned to
   the new EC2 instance.
8. The EC2 instance, via a script executed by *cloud-init*, fetches the
   WireGuard configuration assigned to it from the provisioning service.
9. The provisioning service searches for the Secret resource in Kubernetes
   containing the WireGuard configuration assigned to the EC2 instance and
   returns it in the HTTP response.
10. The *cloud-init* script on the EC2 instance uses the returned WireGuard
    configuration to configure a WireGuard interface and connect to the VPN.
11. The *cloud-init* script on the EC2 instance generates a
    [`JoinConfiguration`][7] document with cluster discovery configuration
    pointing to the provisioning service and passes it to `kubeadm join` (see
    the sketch after this list).
12. The provisioning service looks up the Secret resource in Kubernetes
    containing the bootstrap token assigned to the EC2 instance and generates a
    *kubeconfig* file containing the cluster configuration information and that
    token. The *kubeconfig* file is returned in the HTTP response.
13. `kubeadm join`, running on the EC2 instance, communicates with the
    Kubernetes API server, over the WireGuard tunnel, to perform TLS
    bootstrapping and configure the Kubelet as a worker node in the cluster.
14. When the Kubelet on the new EC2 instance is ready, Kubernetes detects that
    the pod created by Jenkins can now be scheduled to run on it and instructs
    the Kubelet to start the containers in the pod.
15. The Kubelet on the new EC2 instance starts the pod's containers. The JNLP
    agent, running as one of the containers in the pod, connects to the Jenkins
    controller.
16. Jenkins assigns the job run to the new agent, which executes the job.

[7]: https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-k8s-io-v1beta3-JoinConfiguration

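To make step 11 concrete, here is a rough sketch of what such a
`JoinConfiguration` could look like. It is illustrative only: the URL, token
value, and node label are placeholders, not the values the provisioning service
actually produces.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  file:
    # Placeholder endpoint: the provisioning service returns a kubeconfig
    # containing the API server address, CA data, and the bootstrap token
    # assigned to this instance (steps 11-12).
    kubeConfigPath: "https://provisioner.example.internal/v1/cluster-config"
  # Placeholder bootstrap token used for TLS bootstrapping (step 13).
  tlsBootstrapToken: "abcdef.0123456789abcdef"
nodeRegistration:
  kubeletExtraArgs:
    # Example label so pods can be targeted at the cloud worker nodes.
    node-labels: "node-role.example.com/cloud-worker=true"
```
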
## Components

### Jenkins Kubernetes Plugin

The [Kubernetes plugin][0] for Jenkins is responsible for dynamically creating
Kubernetes pods from templates associated with pipeline jobs. Jobs provide a
pod template that describes the containers and configuration they require in
order to run. Jenkins creates the corresponding resources using the Kubernetes
API.

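Purely for illustration, a job's pod template might look roughly like the Pod
manifest below; the container image and node selector are hypothetical, not
taken from my actual pipelines. The plugin injects the `jnlp` agent container
into the pod it creates.

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    # Hypothetical build container; the job's steps run here.
    - name: rust
      image: rust:latest
      command: ["sleep"]
      args: ["infinity"]
  # Example selector to steer aarch64 builds onto arm64 (e.g. Graviton) nodes.
  nodeSelector:
    kubernetes.io/arch: arm64
```
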
### Autoscaler

The [Cluster Autoscaler][3] is an optional Kubernetes component that integrates
with cloud provider APIs to create or destroy worker nodes. It does not handle
any configuration on the machines themselves (i.e. running `kubeadm join`), but
it does watch the cluster state and determine when to create or destroy nodes
based on pod requests.

### cloud-init

[cloud-init][5] is a tool, pre-installed on most cloud machine images
(including the official Fedora AMIs), that can be used to automatically
provision machines when they are first launched. It can install packages,
create configuration files, run commands, etc.

[5]: https://cloud-init.io/

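A minimal sketch of the user data such an instance might boot with, assuming a
hypothetical helper script and provisioning-service URL (package names and
paths are illustrative):

```yaml
#cloud-config
packages:
  - wireguard-tools
write_files:
  - path: /usr/local/bin/join-cluster.sh
    permissions: "0755"
    content: |
      #!/bin/sh
      set -e
      # Fetch the WireGuard configuration assigned to this instance (step 8)
      # and bring up the tunnel (step 10). The URL is a placeholder.
      curl -fsS -o /etc/wireguard/wg0.conf \
        https://provisioner.example.internal/v1/wireguard-config
      wg-quick up wg0
      # Join the cluster (steps 11-13) using a JoinConfiguration like the
      # sketch above; generating that file is omitted here.
      kubeadm join --config /etc/kubeadm/join.yaml
runcmd:
  - [/usr/local/bin/join-cluster.sh]
```
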
### WireGuard

[WireGuard][6] is a simple and high-performance VPN protocol. It will provide
the cloud instances with connectivity back to the private network, and
therefore access to internal resources, including the Kubernetes API.

Unfortunately, WireGuard is not particularly amenable to "dynamic" clients
(i.e. peers that come and go). This means either building custom tooling to
configure WireGuard peers on the fly, or pre-generating configuration for a
set number of peers and ensuring that no more than that number of instances
are ever online simultaneously.

[6]: https://www.wireguard.com/

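For the second option, the pre-generated configuration handed out to one peer
might look something like the following; the keys, addresses, and endpoint are
placeholders:

```ini
[Interface]
# Private key and tunnel address reserved ahead of time for one cloud node.
PrivateKey = <placeholder-private-key>
Address = 10.99.0.11/32

[Peer]
# The on-premises WireGuard endpoint; it already knows this peer's public key.
PublicKey = <placeholder-server-public-key>
Endpoint = vpn.example.internal:51820
AllowedIPs = 10.0.0.0/8
PersistentKeepalive = 25
```
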
### Provisioning Service

This is a custom piece of software that is responsible for provisioning
secrets, etc. for the dynamic nodes. Since it will be responsible for handing
out WireGuard keys, it will have to be accessible directly over the Internet.
It will have to authenticate requests somehow to ensure that they are from
authorized clients (i.e. EC2 nodes created by the k8s Autoscaler) before
generating any keys/tokens.
@@ -0,0 +1,36 @@
@startuml
box Internal Network
participant Jenkins
participant Pod
participant Kubernetes
participant Autoscaler
participant Provisioner
Jenkins -> Kubernetes : Create Pod
Kubernetes -> Autoscaler : Scale Up
end box
Autoscaler -> AWS : Launch Instance
create "EC2 Instance"
AWS -> "EC2 Instance" : Start
AWS --> Provisioner : Instance Started
Provisioner -> Provisioner : Generate Bootstrap Token
Provisioner -> Kubernetes : Store Bootstrap Token
Provisioner -> Kubernetes : Allocate WireGuard Config
"EC2 Instance" -> Provisioner : Request WireGuard Config
Provisioner -> Kubernetes : Request WireGuard Config
Kubernetes -> Provisioner : Return WireGuard Config
Provisioner -> "EC2 Instance" : Return WireGuard Config
"EC2 Instance" -> "EC2 Instance" : Configure WireGuard
"EC2 Instance" -> Provisioner : Request Cluster Config
Provisioner -> "EC2 Instance" : Return Cluster Config
group WireGuard Tunnel
"EC2 Instance" -> Kubernetes : Request Certificate
Kubernetes -> "EC2 Instance" : Return Certificate
"EC2 Instance" -> Kubernetes : Join Cluster
Kubernetes -> "EC2 Instance" : Acknowledge Join
Kubernetes -> "EC2 Instance" : Schedule Pod
"EC2 Instance" -> Kubernetes : Pod Started
end
Kubernetes -> Jenkins : Pod Started
create Pod
Jenkins -> Pod : Execute job
@enduml
After Width: | Height: | Size: 22 KiB |
@@ -0,0 +1,10 @@
+++
title = "Home Network"
description = """\
VM hosts, shared storage, firewall, switches, access points, Raspberry Pis, and
more
"""

[extra]
image = "projects/home-network/server-rack01.jpg"
+++
After Width: | Height: | Size: 4.6 MiB |
@@ -0,0 +1,71 @@
+++
title = "Home Theatre"
page_template = "project-page.html"
description = "Big screen TV, surround sound, and powered recliners with LEDs!"

[extra]
image = "projects/theatre/photos/finished/20170314_225410.jpg"
+++

## Specifications

### Display

<div style="float: left; padding-right: 1rem; display: table-cell">
<img
  style="border: 4px solid #282828; box-shadow: 0 0 0 1px #e8e8e833"
  src="res/promos/71WdrKZHHdL._SL1500_.jpg"
  alt="LG 65EF9500 promotional image"
>
</div>
<div style="display: table-cell">

**LG 65EF9500**

* 4K 2160p Ultra-High Definition image
* OLED display
* High Dynamic Range video

</div>
<div style="clear: both;"></div>

### Audio/Video Receiver

<div style="float: left; padding-right: 1rem; display: table-cell">

**Pioneer Elite SC-LX801**

* 9.2-channel class D<sup>3</sup> audio amplifier
* 140 Watts per channel power output
* ESS SABRE<sup>32</sup> Ultra ES9016S Digital–Analog converter
* 8x HDMI Source Inputs

</div>
<div style="display: table-cell">
<img
  style="border: 4px solid #282828; box-shadow: 0 0 0 1px #e8e8e833"
  src="res/promos/81hAfaKzo6L._SL1500_.jpg"
  alt="Pioneer Elite SC-LX801 promotional image"
>
</div>

### Speakers

<div style="float: left; padding-right: 1rem; display: table-cell">
<img
  style="border: 4px solid #282828; box-shadow: 0 0 0 1px #e8e8e833"
  src="res/promos/Prime-Bookshelf_additional3_fbfbbd0c-67fd-41ab-971c-813ae3adf846.jpg"
  alt="SVS Prime Bookshelf promotional image"
>
</div>
<div style="display: table-cell">

* **Front**: SVS Prime Bookshelf
* **Center**: SVS Prime Center
* **Surround**: SVS Prime Elevation
* **Surround Back**: SVS Prime Satellite
* **Height Effects**: SVS Prime Elevation
* **Subwoofers**: SVS SB-2000, 12” sealed box with dedicated 500W RMS monoblock amplifiers

</div>
<div style="clear: both;"></div>
After Width: | Height: | Size: 9.2 KiB |
@@ -1,5 +1,6 @@
$primary-color: #505050;
$primary-color-dark: #282828;
$primary-color-darker: #212121;
$primary-color-light: #7c7c7c;

$secondary-color: #333f58;
@@ -9,6 +10,7 @@ $secondary-color-dark: #09192f;
$background-color: #121212;
$text-color: #e2e2e2;
$panel-color: $primary-color-dark;
$panel-color-dark: $primary-color-darker;
$toolbar-color: $primary-color;

@font-face {
@@ -341,6 +343,49 @@ article.post .post-date {
  margin-bottom: 1em;
}

.project-cards {
  display: flex;
  justify-content: space-around;
  flex-wrap: wrap;
}

.project-card {
  width: 100%;
  background-color: $panel-color-dark;
  margin: 0.75em;
  padding: 0 0.75em;
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.12), 0 1px 2px rgba(0, 0, 0, 0.24);
  transition: all 0.3s cubic-bezier(0.25, 0.8, 0.25, 1);
}

@media only screen and (min-width: 600px) {
  .project-card {
    width: 45%;
  }
}

@media only screen and (min-width: 800px) {
  .project-card {
    width: 30%;
  }
}

.project-card:hover {
  box-shadow: 0 14px 28px rgba(0, 0, 0, 0.25), 0 10px 10px rgba(0, 0, 0, 0.22);
}

.project-card a {
  text-decoration: none;
}

.project-card h2 {
  text-align: center;
}

.project-card img {
  max-width: 100%;
}

/* CV */

.cv.panel {
@@ -0,0 +1 @@
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="24" height="24" viewBox="0 0 24 24"><path d="M22,16V4A2,2 0 0,0 20,2H8A2,2 0 0,0 6,4V16A2,2 0 0,0 8,18H20A2,2 0 0,0 22,16M11,12L13.03,14.71L16,11L20,16H8M2,6V20A2,2 0 0,0 4,22H18V20H4V6" /></svg>
After Width: | Height: | Size: 435 B |
After Width: | Height: | Size: 9.0 KiB |
After Width: | Height: | Size: 19 KiB |
After Width: | Height: | Size: 51 KiB |
@@ -24,11 +24,23 @@ Curriculum Vitae
    </a>
  </div>
  <div class="link">
    <a href="{{ get_url(path='/projects') }}">
      {{ load_data(path='static/bug.svg') | safe }}
      Projects
    </a>
  </div>
  <div class="link">
    <a href="{{ get_url(path='/blog') }}">
      {{ load_data(path='static/post.svg') | safe }}
      Blog
    </a>
  </div>
  <div class="link">
    <a href="{{ get_url(path='/gallery') }}">
      {{ load_data(path='static/image.svg') | safe }}
      Photos
    </a>
  </div>
</div>
</section>
{% endblock %}
@@ -0,0 +1,8 @@
{% extends "base.html" %}

{% block content %}
<article class="post panel">
  <h1 class="post-title">{{ page.title }}</h1>
  {{ page.content | safe }}
</article>
{% endblock %}
@@ -0,0 +1,38 @@
{% extends "base.html" %}
{% block content %}
<article class="post panel">
  <h1 class="post-title">{{ section.title }}</h1>
  {{ section.content | safe }}
  <div class="project-cards">
    {% for path in section.subsections %}
    <div class="project-card">
      {% set sect = get_section(path=path) %}
      <a href="{{ sect.permalink }}">
        <h2>{{ sect.title }}</h2>
        {% if sect.extra.image is defined %}
        {% set image = resize_image(path=sect.extra.image, width=640, height=480, op="fit") %}
        <img src="{{ image.url }}" />
        {% else %}
        <img src="//picsum.photos/seed/{{ path | slugify }}/320/240" />
        {% endif %}
        <p>{{ sect.description }}</p>
      </a>
    </div>
    {% endfor %}
    {% for page in section.pages %}
    <div class="project-card">
      <a href="{{ page.permalink }}">
        <h2>{{ page.title }}</h2>
        {% if page.extra.image is defined %}
        {% set image = resize_image(path=page.extra.image, width=640, height=480, op="fit") %}
        <img src="{{ image.url }}" />
        {% else %}
        <img src="//picsum.photos/seed/{{ page.path | slugify }}/320/240" />
        {% endif %}
        <p>{{ page.description }}</p>
      </a>
    </div>
    {% endfor %}
  </div>
</article>
{% endblock %}
@@ -0,0 +1,7 @@
{% extends "base.html" %}
{% block content %}
<article class="post panel">
  <h1 class="post-title">{{ section.title }}</h1>
  {{ section.content | safe }}
</article>
{% endblock %}
@@ -0,0 +1,10 @@
<div>
  {% for asset in page.assets -%}
  {%- if asset is matching("[.](jpg|png)$") -%}
  {% set image = resize_image(path=asset, width=240, height=180) %}
  <a href="{{ get_url(path=asset) }}" target="_blank">
    <img src="{{ image.url }}" />
  </a>
  {%- endif %}
  {%- endfor %}
</div>
@@ -0,0 +1,9 @@
{% set image = resize_image(path=path, width=width, height=height, op=op) %}
{% if link is defined %}<a href="{{ link }}">{% endif %}
<img src="{{ image.url }}"
  {% if title is defined %}title="{{ title }}"{% endif %}
  {% if alt is defined %}alt="{{ alt }}"{% endif %}
  {% if class is defined %}class="{{ class }}"{% endif %}
  {% if style is defined %}style="{{ style }}"{% endif %}
/>
{% if link is defined %}</a>{% endif %}